Deploying a k8s Cluster (1.14.1)
Prerequisites
1. CentOS 7 Linux
2. Docker 1.13
3. Disable the firewall
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
sudo firewall-cmd --state
4. Disable SELinux
sudo setenforce 0
sudo vi /etc/selinux/config
Set SELINUX=disabled:
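If you prefer not to open an editor, the same edit can be done with sed; a sketch, assuming the file currently reads SELINUX=enforcing:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
getenforce   # Permissive now (after setenforce 0); Disabled after a reboot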
5. Adjust kernel network settings
sudo vi /etc/sysctl.d/k8s.conf
Add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the changes:
sudo sysctl -p /etc/sysctl.d/k8s.conf
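If sysctl complains that these keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it first is an extra step not in the original commands:
sudo modprobe br_netfilter
lsmod | grep br_netfilter   # verify the module is loaded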
6. Disable swap
swapoff -a
Then edit /etc/fstab and comment out the swap line, so that swap is not re-enabled when the machine reboots.
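The fstab edit can also be scripted; a sketch, assuming the swap entry is an uncommented line whose second field is swap:
sudo sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -h   # the Swap row should read 0B once swapoff -a has run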
Install the Kubernetes (k8s) packages
1. Add the k8s yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
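The command above installs whatever version is newest in the repo; since this walkthrough targets v1.14.1, pinning the versions is safer (an optional variant, not in the original commands):
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1 --disableexcludes=kubernetes
systemctl enable --now kubelet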
Initialize the master node
1. Run:
kubeadm init --node-name=master-tiger --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.109.142
This fails with an error like "[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-apiserver ..."
The required images live on k8s.gcr.io, which is not reachable from mainland China without a proxy. The workaround: the mirrorgooglecontainers project on Docker Hub hosts copies of the same images, so we can pull those and simply re-tag them (make sure the versions match).
Before pulling, configure a registry mirror for Docker, otherwise the pulls will be very slow or fail outright.
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["<your accelerator address>"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Registry mirror endpoints usable from mainland China:
https://registry.docker-cn.com
https://3laho3y3.mirror.aliyuncs.com
https://mirror.ccs.tencentyun.com
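After the restart you can confirm Docker picked up the mirror (the exact output layout varies by Docker version):
docker info | grep -A1 'Registry Mirrors'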
Pull the images from the Docker Hub mirrors:
docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.1
docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.1
docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull docker.io/mirrorgooglecontainers/etcd:3.3.10
docker pull docker.io/coredns/coredns:1.3.1
docker pull docker.io/mirrorgooglecontainers/pause:3.1
2. Re-tag them to the names kubeadm expects:
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
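The pull-and-tag steps can also be collapsed into a loop; this sketch assumes exactly the image list and tags shown above:
for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 \
           kube-scheduler:v1.14.1 kube-proxy:v1.14.1 etcd:3.3.10 pause:3.1; do
  docker pull docker.io/mirrorgooglecontainers/$img
  docker tag docker.io/mirrorgooglecontainers/$img k8s.gcr.io/$img
done
# coredns is published under its own Docker Hub namespace
docker pull docker.io/coredns/coredns:1.3.1
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1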
3. Re-run kubeadm init; the output:
[root@localhost ~]# kubeadm init --node-name=master-tiger --pod-network-cidr=10.244.0.0/24 --token-ttl 0
I0507 10:49:16.001885 81252 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0507 10:49:16.002644 81252 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Hostname]: hostname "master-tiger" could not be reached
[WARNING Hostname]: hostname "master-tiger": lookup master-tiger on 192.168.109.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master-tiger localhost] and IPs [192.168.109.142 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master-tiger localhost] and IPs [192.168.109.142 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master-tiger kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.109.142]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 69.504965 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master-tiger as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-tiger as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3u8apy.dela7zjqtmczjapp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.109.142:6443 --token 3u8apy.dela7zjqtmczjapp \
--discovery-token-ca-cert-hash sha256:65c71b425a54823702b987ead97fcdc316d258f80e15b618e5e502bc0675b873
To let a regular user run kubectl, run the following (this is also part of the kubeadm init output above):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are root, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Deploy the flannel network add-on
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
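Once flannel is applied, the CoreDNS pods should go from Pending to Running and the master should report Ready; watch the progress with:
kubectl get pods -n kube-system -w
kubectl get nodes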
Add worker nodes
On each worker node, run the join command printed by kubeadm init on the master:
kubeadm join 192.168.109.142:6443 --token 3u8apy.dela7zjqtmczjapp \
--discovery-token-ca-cert-hash sha256:65c71b425a54823702b987ead97fcdc316d258f80e15b618e5e502bc0675b873
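If the join token has expired (the default TTL is 24 hours; the init above passed --token-ttl 0, which disables expiry), generate a fresh join command on the master:
kubeadm token create --print-join-command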
Output:
[root@localhost ~]# kubeadm join 192.168.109.142:6443 --token 3u8apy.dela7zjqtmczjapp \
> --discovery-token-ca-cert-hash sha256:65c71b425a54823702b987ead97fcdc316d258f80e15b618e5e502bc0675b873
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
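Back on the master, the new node should show up within a minute or so; it stays NotReady until the network plugin is running on it:
kubectl get nodes -o wide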
Troubleshooting
# kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Fix:
export KUBECONFIG=/etc/kubernetes/admin.conf
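That export only lasts for the current shell; to make it stick for root, append it to the shell profile (a convenience, not from the original post):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile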
Another error: kube-router fails to start:
[root@4 ~]# docker logs -f 42679b98654b
I0513 08:36:19.796687 1 kube-router.go:207] Running /usr/local/bin/kube-router version v0.3.1, built on 2019-05-12T09:22:07+0000, go1.10.8
I0513 08:36:20.412033 1 network_policy_controller.go:142] Starting network policy controller
I0513 08:36:20.421198 1 network_routes_controller.go:983] Could not find annotation `kube-router.io/bgp-local-addresses` on node object so BGP will listen on node IP: 172.16.101.91 address.
I0513 08:36:20.421657 1 network_routes_controller.go:321] `subnet` in CNI conf file is empty so populating `subnet` in CNI conf file with pod CIDR assigned to the node obtained from node spec.
F0513 08:36:20.427171 1 network_routes_controller.go:328] Failed to get pod CIDR from node spec. kube-router relies on kube-controller-manager to allocate pod CIDR for the node or an annotation `kube-router.io/pod-cidr`. Error: node.Spec.PodCIDR not set for node: 4.novalocal
Fix: configure kube-controller-manager to allocate pod CIDRs for nodes.
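The error message says kube-router relies on kube-controller-manager to allocate the pod CIDR; the usual fix (an assumption here, since the exact configuration is not shown) is to add the CIDR-allocation flags to the kube-controller-manager static pod manifest, which the kubelet then restarts automatically:
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# under spec.containers[0].command, add:
#   - --allocate-node-cidrs=true
#   - --cluster-cidr=10.244.0.0/16   # must match the --pod-network-cidr passed to kubeadm init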
References:
https://blog.csdn.net/zzq900503/article/details/81710319
https://kubernetes.io/zh/docs/setup/independent/create-cluster-kubeadm/#Pod-network
https://www.cnblogs.com/wushuaishuai/p/9984228.html
https://www.cnblogs.com/gao88/p/9740305.html