Deploying and using a Kubernetes (k8s) cluster
Kubernetes Chinese community: https://www.kubernetes.org.cn/3795.html
k8s Chinese documentation: https://www.kubernetes.org.cn/k8s
k8s downloads:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#downloads-for-v188
1. Environment prerequisites
At least 3 nodes, all with Internet access.
##Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
##Disable SELinux:
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
##Disable the swap partition:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
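The sed command above comments out every /etc/fstab line that mentions swap, so swap stays off after a reboot. A minimal sketch of the same substitution against a throwaway file (the sample fstab entries are illustrative):

```shell
# Work on a throwaway copy instead of the real /etc/fstab
tmpfstab=$(mktemp)
printf '%s\n' '/dev/mapper/centos-root / xfs defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$tmpfstab"

# Same pattern as above: prefix any line containing "swap" with '#'
# (the '&' in the replacement re-inserts the whole matched line)
sed -ri 's/.*swap.*/#&/' "$tmpfstab"

grep swap "$tmpfstab"   # the swap entry is now commented out
```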
##Add host mappings
vim /etc/hosts
192.168.30.23 k8s-master
192.168.30.24 k8s-node1
192.168.70.52 k8s-node2
##Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
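The heredoc above just writes a two-line key=value file that `sysctl --system` later loads. A sketch of the same write against a temporary path, to show the file format without touching /etc/sysctl.d/:

```shell
# Write the same key=value pairs to a throwaway file instead of /etc/sysctl.d/k8s.conf
conf=$(mktemp)
cat > "$conf" << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Both bridge keys should be present and set to 1
grep -c '= 1' "$conf"   # prints 2
```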
2. Install Docker, kubeadm, and kubelet on all nodes
##Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker
systemctl start docker
docker info
docker --version ##verify the install by checking the Docker version
##Add the Alibaba Cloud yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
##Install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3
##The install may fail (typically on GPG key verification)
##Fix: overwrite /etc/yum.repos.d/kubernetes.repo with GPG checking disabled
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
##Then run the install again
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 kubernetes-cni-0.6.0 ipvsadm
##Enable kubelet on boot
systemctl enable kubelet
3. Deploy the Kubernetes master
kubeadm init \
--apiserver-advertise-address=192.168.30.23 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.13.3 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
##Possible error 1:
serviceSubnet: Invalid value: "10.2.0.0/16--pod-network-cidr=10.245.0.0/16": couldn't parse subnet
##Fix:
There must be a space before each line-continuation backslash (e.g. --service-cidr=10.1.0.0/16 \); otherwise the next line is concatenated directly onto the flag value.
##Possible error 2:
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
##Fix:
Disable the swap partition first: swapoff -a
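The same kubeadm init flags can also be expressed as a config file and passed via `kubeadm init --config kubeadm-config.yaml`. A sketch assuming the v1beta1 kubeadm API (the version shipped with kubeadm 1.13); the values mirror the flags above:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.30.23
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16
```

A config file sidesteps the backslash-continuation pitfall from error 1 entirely, since there are no multi-line flags to mistype.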
##Check where the certificates are stored
ls /etc/kubernetes/pki/
##Set up the kubectl tool (run on the master node only):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
ls .kube/
kubectl get nodes ##check node status (NotReady is expected until a Pod network is installed)
4. Install a Pod network add-on (CNI)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Or:
wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
##Check Pod startup status
kubectl get pods -n kube-system
##Pull the flannel image manually (when a pod such as kube-flannel-ds-amd64-njk6r is stuck because of a slow network)
docker pull quay.io/coreos/flannel:v0.11.0-amd64
##Delete a pod (k8s automatically restarts it afterwards):
kubectl delete pods <pod-name> -n kube-system
kubectl delete pods kube-controller-manager-zhuboli23 -n kube-system
kubectl get nodes
kubectl get pods,svc -o wide
kubectl get pods --all-namespaces -o wide ##list all Pods in all namespaces
5. Join the Kubernetes nodes (run on both node machines)
To add new nodes to the cluster, run on each node the kubeadm join command printed by kubeadm init:
kubeadm join 192.168.30.23:6443 --token sebw7s.owkm9w4my818f29m --discovery-token-ca-cert-hash sha256:8f9812cff495262eadffa95aa369151d98a0807306aa8e6ce8b5abcb898401f7
kubectl get nodes ##list the nodes
##Retrieve or regenerate the kubeadm token
kubeadm token list | awk -F" " '{print $1}' | tail -n 1 ##print the most recent token
kubeadm token create ##generate a new token if the old one has expired
##Compute the sha256 hash of the CA public key
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
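The openssl pipeline above works on any X.509 certificate, so it can be tried without a cluster. A self-contained sketch that generates a throwaway self-signed cert in place of the real /etc/kubernetes/pki/ca.crt and hashes its public key the same way:

```shell
# Generate a throwaway self-signed cert (stand-in for the cluster CA cert)
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=test-ca' \
    -keyout "$workdir/ca.key" -out "$workdir/ca.crt" 2>/dev/null

# Same pipeline as above: extract the public key, DER-encode it, sha256 it,
# then strip the "(stdin)= " prefix that openssl dgst prints
openssl x509 -pubkey -in "$workdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
# prints a 64-character hex digest: the value that goes after sha256: in kubeadm join
```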
##Join the cluster from a worker node
kubeadm join 192.168.30.23:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
6. Test the Kubernetes cluster
##Create a pod in the cluster and verify it runs normally:
##Create a deployment named nginx using the nginx image
docker pull nginx
kubectl create deployment nginx --image=nginx
kubectl get pods ##list pods (allow some time to pull the nginx image and start the container)
kubectl expose deployment nginx --port=80 --type=NodePort
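The kubectl expose command above generates a Service object behind the scenes; an equivalent manifest you could `kubectl apply -f` instead looks roughly like this (a sketch, assuming the app=nginx label that `kubectl create deployment` attaches to its pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx        # label set by "kubectl create deployment nginx"
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 80    # container port on the nginx pods
                      # nodePort is omitted, so the cluster picks a random one
```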
kubectl get pod,svc
kubectl get pods,svc -o wide ##show more detail
##Access URL: http://<NodeIP>:<NodePort>
##Any node's IP works; allow some time for the nginx image to be pulled automatically,
##or pull it manually: docker pull nginx
7. Deploy the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Or:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f kubernetes-dashboard.yaml
##1. The default image is unreachable from mainland China; edit the manifest to change the image:
vi kubernetes-dashboard.yaml
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
changes to:
lizhenliang/kubernetes-dashboard-amd64:v1.10.1
##Pull the required image manually on every node
docker pull lizhenliang/kubernetes-dashboard-amd64:v1.10.1
##2. By default the Dashboard is reachable only from inside the cluster; change the Service to type NodePort to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001   ##pin a fixed port here, or omit nodePort to get a random one
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f kubernetes-dashboard.yaml ##re-apply the edited manifest
kubectl get pods -n kube-system
kubectl get pods,svc -n kube-system
##Access URL: https://<NodeIP>:30001 (the nodePort configured above)
##Create a serviceaccount named dashboard-admin and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl get secret -n kube-system
kubectl describe secrets dashboard-admin-token-595zh -n kube-system
##Or in a single command:
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
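The awk filter in the one-liner just picks out column 1 of any row mentioning dashboard-admin from kubectl's tabular output. A sketch against canned output (the second secret name is illustrative):

```shell
# Simulated output of: kubectl -n kube-system get secret
sample='NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-595zh   kubernetes.io/service-account-token   3      1m
default-token-abcde           kubernetes.io/service-account-token   3      30m'

# Same awk filter as in the one-liner: print column 1 of matching rows
echo "$sample" | awk '/dashboard-admin/{print $1}'
# prints: dashboard-admin-token-595zh
```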
##Log in to the Dashboard with the token from the output.
##From there you can browse namespaces and the overall cluster overview.