k8s Practice Manual
Changing the NodePort range of a k8s installation
sed -i "s/service-node-port-range=32000-32767/service-node-port-range=1-65535/g" /etc/kubernetes/manifests/kube-apiserver.yaml
Common kubectl commands, odds and ends
https://blog.csdn.net/UsamaBinLaden6976498/article/details/108372629
# View a pod's resource description as YAML
kubectl get pod xxxx -o yaml
Troubleshooting errors
Back-off restarting failed container (the container keeps failing on startup and is restarted repeatedly)
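When this error shows up, the pod's events and the previous container's logs usually reveal why startup keeps failing. A minimal troubleshooting sketch; the pod name `my-pod` and namespace `default` are placeholders:

```shell
# Events recorded for the pod often state the crash reason directly
kubectl describe pod my-pod -n default
# Logs of the previous (crashed) container instance
kubectl logs my-pod -n default --previous
# Watch the restart counter climb
kubectl get pod my-pod -n default -w
```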
Root privileges inside Kubernetes containers
https://www.orchome.com/1305
k8s networking
Kubernetes networking trilogy, part 1: the Pod network
Understanding k8s networking in depth
/etc/hosts: static hostname lookup table (local name resolution)
/etc/resolv.conf: DNS resolver configuration
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --port=8009
kubectl proxy: letting external networks reach a Service's ClusterIP
curl http://[k8s-master]:8009/api/v1/namespaces/[namespace-name]/services/[service-name]/proxy
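The proxy URL above is assembled from the master host, namespace, and service name. A sketch of building it; the hostname, namespace, and service name below are placeholder values:

```shell
MASTER=k8s-master            # hypothetical master hostname
NS=kube-system               # target namespace
SVC=kubernetes-dashboard     # target service name
URL="http://${MASTER}:8009/api/v1/namespaces/${NS}/services/${SVC}/proxy"
echo "$URL"
```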
How pods on different nodes reach each other by IP
eth0 is the physical NIC; docker0 (the bridge that allocates pod addresses) and veth0 are virtual network devices provided by Linux.
Inside a pod container:
every pod IP in the cluster can be pinged
Once a service is exposed, its in-cluster DNS name is
{SVCNAME}.{NAMESPACE}.svc.{CLUSTER_DOMAIN}
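Plugging concrete values into that pattern gives the service's fully qualified name; a sketch with hypothetical values (cluster.local is the default cluster domain):

```shell
SVCNAME=my-service           # hypothetical service name
NAMESPACE=default
CLUSTER_DOMAIN=cluster.local # default unless the cluster overrides it
FQDN="${SVCNAME}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
echo "$FQDN"
```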
flannel and calico
https://www.cnblogs.com/lizexiong/p/14916531.html
Collected problems
- A node failed to join during cluster installation; reboot it and re-run kubeadm join
Viewing the k8s dashboard token
kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token >> token.txt
Viewing the k8s service logs recorded by the system journal
journalctl -f -u kubelet.service
The four k8s Service types, explained
https://blog.csdn.net/weixin_40274679/article/details/107887678
apiVersion: v1
kind: Service
metadata:
  name: service-python
spec:
  ports:
  - port: 3000        # the Service's own port
    protocol: TCP
    targetPort: 443   # the port on the pods (in-cluster target)
    nodePort: 30080   # the port exposed on every node for external access
  selector:
    run: pod-python
  type: NodePort
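Assuming the manifest above is saved as service.yaml (a filename chosen here), it can be applied and checked like this; in-cluster and external clients use different ports:

```shell
kubectl apply -f service.yaml
# In-cluster clients use the Service port (3000);
# external clients use any node's IP with the nodePort (30080):
#   curl http://<node-ip>:30080
kubectl get svc service-python
```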
Installing k8s with Ansible
https://github.com/ReSearchITEng/kubeadm-playbook
Collected problems
networking
a reboot also takes a little time
kubeadm: joining a new node to a k8s cluster
k8s installation and deployment problems, with solutions
network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/2
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl restart docker
What is the relationship between cni0 and flannel.1 in kubernetes?
After containers start, a virtual bridge named cni0 appears on the node; it is created by flanneld and carries local pod-to-pod traffic. For every pod, flanneld creates a veth pair: one end becomes the container's interface, the other end is attached to the cni0 bridge.
I deleted cni0 and flannel did not recreate it, so I uninstalled k8s from the node, reinstalled it by running common.sh, and then ran kubeadm join again.
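The bridge and veth pairs described above can be inspected directly on a flannel node; a sketch (run on a cluster node):

```shell
# List bridge devices; cni0 should be among them
ip link show type bridge
# veth interfaces attached to cni0 (one host-side end per pod)
bridge link show | grep cni0
# flannel.1 is the VXLAN device used for cross-node traffic
ip -d link show flannel.1
```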
#!/bin/bash
red='\033[0;31m'
green='\033[0;32m'
yellow='\033[0;33m'
plain='\033[0m'
echo -e "
##############################################################
#${red}Welcome to the one-click CentOS 7 k8s install script${plain}
#${yellow}Set up /etc/hosts hostname resolution first${plain}
# File Name: diskless.sh
# Version: V1.0
# Author: guoshao
# Blog Site: https://space.bilibili.com/302918169/video
# Created Time : 2021-09-03 00:46:18
# Environment: CentOS 7.2 Kernel 3.10.0
##############################################################
"
sleep 6
system_optimization(){
    echo -e "${red}Applying system tweaks${plain}"
    # Disable the firewall
    systemctl disable firewalld
    systemctl stop firewalld
    # Disable selinux:
    # temporarily
    setenforce 0
    # and permanently, by editing the config files
    sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
    # Disable swap:
    swapoff -a
    # and permanently, by commenting out the swap line in /etc/fstab
    sed -i 's/.*swap.*/#&/' /etc/fstab
    # Kernel parameters required by k8s networking
    cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
    sysctl --system
    # Configure the Aliyun k8s yum repo
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
}
docker_install(){
    echo -e "${red}Installing docker and tuning it${plain}"
    sleep 2
    # Install ifconfig and friends
    yum install -y wget net-tools
    # Tools docker depends on
    yum install -y yum-utils device-mapper-persistent-data lvm2 >>/dev/null
    # Configure the Aliyun docker repo
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    # Tune docker daemon parameters
    mkdir -p /etc/docker
    cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://pzpl72fb.mirror.aliyuncs.com"],
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF
    # Pin this exact docker-ce version
    yum install -y docker-ce-18.09.9-3.el7
    # Start docker
    systemctl enable docker && systemctl start docker
}
system_optimization
docker_install >> /dev/null
yum install -y kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0 >> /dev/null
# Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
Letting worker nodes run kubectl and helm commands
# Copy /etc/kubernetes/admin.conf from the master node to the same path on the worker:
scp -r /etc/kubernetes/admin.conf ${node1}:/etc/kubernetes/admin.conf
# scp -r /etc/kubernetes/admin.conf 172.16.191.137:/etc/kubernetes/admin.conf
# Then, on the worker:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
# For helm, yum install -y helm on the worker is enough
Kubernetes series, part 2: joining a worker node to the cluster
On the master, run
kubeadm token create --print-join-command
then run the printed command on the worker node, e.g.:
kubeadm join 172.16.191.140:6443 --token 2wd0wr.9myr29xa7mkw7wtz --discovery-token-ca-cert-hash sha256:cef005c63beda21da666aace14961b059e21151113599961a9a11ceb49a91e3a
NFS storage
Changing the permissions of files mounted from a ConfigMap
configmap defaultMode
Install the server side (nfs-server, plus rpc support) -- run this on every machine that needs it
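The defaultMode field mentioned above sits on the configMap volume source. A minimal sketch, with hypothetical pod and ConfigMap names; 0644 makes each projected file owner-writable and world-readable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo            # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/app
  volumes:
  - name: config
    configMap:
      name: app-config     # hypothetical ConfigMap name
      defaultMode: 0644    # permission bits applied to each projected file
```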
yum install -y nfs-utils
mkdir -p /mnt/nfs
vim /etc/exports
/mnt/nfs/ xxx.xxx.xxx.0/24(rw,sync,fsid=0)
systemctl start rpcbind.service
systemctl start nfs-server.service
systemctl enable rpcbind.service
systemctl enable nfs-server.service
exportfs -r
exportfs
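From a client machine, the export can then be verified and mounted; a sketch (replace the placeholder server address):

```shell
# List what the server exports
showmount -e <nfs-server-ip>
# Mount the share locally
mkdir -p /mnt/nfs
mount -t nfs <nfs-server-ip>:/mnt/nfs /mnt/nfs
```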
NFS server setup
k8s + nfs + mysql: changing ownership of '/var/lib/mysql/': Operation not permitted
Every node in the cluster must mount the NFS share. Otherwise, when the mysql pod with a persistent volume is rescheduled to another machine after a restart, and that machine has no NFS mount while mysql's volume config expects one, it fails with a permission error.
no_root_squash
Deploying mysql fails with:
--initialize specified but the data directory has files in it. Aborting. Fixes:
- This happens because files were generated in the data directory earlier; delete everything under it and re-run the command.
- I have come across this issue. Work around is to delete the pv and pvc and recreate them. (recommended)
kubectl delete pv pvc-14dab137-ad32-4b0d-9102-25b87a8344f9
Force-deleting pods, PVs/PVCs, and namespaces in k8s
https://blog.csdn.net/qq_39565646/article/details/119923208
- --ignore-db-dir=lost+found
Static provisioning: the volume size is fixed in advance when the PV is declared
Dynamic provisioning: storage can be allocated on demand
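A minimal sketch of a PVC that triggers dynamic provisioning; the storage class name `nfs-client` matches what the nfs-client-provisioner chart mentioned below typically creates, but is an assumption here:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim               # hypothetical claim name
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-client   # assumed class created by the provisioner
  resources:
    requests:
      storage: 5Gi
```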
Kubernetes (K8S) cluster setup, NFS static/dynamic PV provisioning, creating application services with Deployment/StatefulSet
https://www.bilibili.com/video/BV17p4y1C76m?p=2
Intermediate k8s: installing nfs-client-provisioner with Helm
Usage examples
Uninstalling k8s
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum remove kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0 -y
Installing a k8s cluster
sudo hostnamectl set-hostname <newhostname>
Then add the host entries on every node
vi /etc/hosts
172.16.191.150 master
172.16.191.151 node1
172.16.191.152 node2
[One-click script to deploy k8s, multi-node, installs k8s-dashboard by default - bilibili] https://b23.tv/C7YWYQg
Using kube-proxy to let external networks reach a Service's ClusterIP
https://blog.csdn.net/liyingke112/article/details/76022267
https://gitee.com/guoshaosong/scripts/
Give k8s-master, k8s-node1, and k8s-node2 two CPU cores and 2 GB of RAM each
Run the following on k8s-master, k8s-node1, and k8s-node2:
vi /etc/hosts
On the master node:
mkdir /k8s
then upload the files into /k8s
helm chart
Template syntax
quote: render a value as a string
grep -A 500 prints the 500 lines following the match, to inspect rendered values
helm install weda-stack weda-stack-3.2.0.tgz --dry-run | grep "# Source: weda-stack/charts/unified-file-service/templates/db-helper/prod/job.yaml" -A 500
Installation
Install script
curl -SLO https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
tar -zxvf helm-v3.7.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version
# Add a China-local mirror as the stable repo
helm repo remove stable
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
helm repo update
# Search:
helm search repo redis
kubectl create ns <namespace>
helm install <release-name> <chart-parent-dir> -n <namespace>
helm uninstall <release-name>
helm list   # list all releases
helm upgrade <release-name> <chart-parent-dir> -n <namespace>
# Package the chart
helm package ./
Common k8s commands
Creating a namespace
kubectl get namespace
kubectl create namespace test-env