Kubernetes common commands and single-node / cluster installation of KubeSphere
1. Namespace
Namespaces isolate resources. By default they isolate only resources, not the network; a typical use is separating test and production environments.
- List namespaces
kubectl get ns
- Create a namespace (here "hello" is the namespace name)
kubectl create ns hello
- Delete a namespace
kubectl delete ns hello
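Namespaces can also be created declaratively. A minimal sketch of an equivalent manifest (apply it with kubectl apply -f; the filename is up to you):

```yaml
# Equivalent to "kubectl create ns hello"
apiVersion: v1
kind: Namespace
metadata:
  name: hello
```

Most resource commands accept -n <namespace> to target a namespace other than default, e.g. kubectl get pod -n hello.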
2. Pod
A Pod is a group of one or more running containers; it is the smallest deployable unit of an application in Kubernetes.
- Example: run the nginx image in a Pod ("mynginx" is the Pod name, "nginx" the Docker image name)
kubectl run mynginx --image=nginx
- List Pods in the default namespace, or in all namespaces
kubectl get pod
kubectl get pod -A
- Show detailed information about a Pod
kubectl describe pod <pod-name>
- Delete a Pod
kubectl delete pod <pod-name>
- View a Pod's logs
kubectl logs <pod-name>
- Open a shell inside a Pod's container ("mynginx" is your Pod name)
kubectl exec -it mynginx -- /bin/bash
- Watch Pods continuously
watch -n 1 kubectl get pod
kubectl get pod -w
- View node resource usage (requires metrics-server; use "kubectl top pod" for Pods)
kubectl top nodes
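The imperative "kubectl run" command above can also be written as a declarative manifest; a minimal sketch equivalent to the mynginx example:

```yaml
# Equivalent to "kubectl run mynginx --image=nginx"
apiVersion: v1
kind: Pod
metadata:
  name: mynginx
spec:
  containers:
  - name: mynginx
    image: nginx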
3. Deployment
A Deployment manages Pods, giving them multiple replicas, self-healing, and scaling (if a container goes down, a replacement is started immediately).
- Create a Deployment ("nginx" is the image name)
kubectl create deployment my-dep --image=nginx
- List all Deployments
kubectl get deploy
- Delete a Deployment
kubectl delete deploy my-dep
- Create a Deployment with multiple replicas (runs three nginx Pods at once)
kubectl create deployment my-dep --image=nginx --replicas=3
- Scale up or down by changing the replica count (here to five)
kubectl scale --replicas=5 deployment/my-dep
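For reference, a sketch of roughly what "kubectl create deployment my-dep --image=nginx --replicas=3" generates (kubectl labels the Pods app=my-dep; the selector must match the Pod template labels):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - name: nginx
        image: nginx
```

Scaling then amounts to editing replicas and re-applying, which is what kubectl scale does for you.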
4. Service
A Service is an abstraction that exposes a group of Pods as a single network service. It effectively puts a load balancer in front of the group, so one IP and port distribute traffic across all the Pods (e.g. the three replicas above).
- Expose a Deployment for access inside the cluster only (type ClusterIP)
kubectl expose deployment my-dep --port=8000 --target-port=80
- Expose with type NodePort so the Service is also reachable from outside the cluster (the NodePort range is 30000-32767)
kubectl expose deployment my-dep --port=8000 --target-port=80 --type=NodePort
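A manifest sketch of the NodePort Service above. The app=my-dep selector assumes the label kubectl create deployment applies, and nodePort 30080 is an arbitrary example within the allowed range (omit it to let Kubernetes pick one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dep
spec:
  type: NodePort
  selector:
    app: my-dep
  ports:
  - port: 8000        # Service port inside the cluster
    targetPort: 80    # container port on the Pods
    nodePort: 30080   # assumed example; must be in 30000-32767
```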
5. Deploying KubeSphere on a single Linux node
Requirements: 4 vCPUs and 8 GB RAM; CentOS 7.9; firewall ports 30000-32767 open; hostname set.
- Set the hostname
hostnamectl set-hostname k8s-master
- Download and prepare KubeKey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk
- Bootstrap the cluster with KubeKey
# The following package may be required first
yum install -y conntrack
./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
- Optional components can be enabled after installation
The single-node installation is now complete.
6. Deploying KubeSphere on a multi-node Linux cluster
Prepare three servers meeting the following requirements:
- 4 vCPUs and 8 GB RAM (master)
- 8 vCPUs and 16 GB RAM x 2 (workers)
- CentOS 7.9
- Intranet connectivity between the machines
- A unique hostname on each machine
- Firewall ports 30000-32767 open
Create the cluster with KubeKey
- Download KubeKey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk
- Generate a cluster configuration file
./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
- Create the cluster
./kk create cluster -f config-sample.yaml
Example config-sample.yaml:
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.101.135, internalAddress: 192.168.101.135, user: root, password: abc123!@#123}
  - {name: node1, address: 192.168.101.136, internalAddress: 192.168.101.136, user: root, password: abc123!@#123}
  - {name: node2, address: 192.168.101.137, internalAddress: 192.168.101.137, user: root, password: abc123!@#123}
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
- Check installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
The cluster-mode installation is now complete.
A follow-up article will cover using KubeSphere and its DevOps pipelines.