Installing a Kubernetes (K8s) Cluster from Binaries: A From-Scratch Tutorial (No Certificates)
2. Install kubelet
(1) kubeconfig file
Note: this file is used by kubelet to connect to the master apiserver.
[root@server2 ~]# vim /usr/local/kubernetes/config/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://10.10.10.1:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
(2) kubelet configuration file
[root@server2 ~]# cat /usr/local/kubernetes/config/kubelet
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log verbosity level
KUBE_LOG_LEVEL="--v=4"
# Kubelet service IP address (this node's IP)
NODE_ADDRESS="--address=10.10.10.2"
# Kubelet service port
NODE_PORT="--port=10250"
# Custom node name (this node's IP)
NODE_HOSTNAME="--hostname-override=10.10.10.2"
# Path to the kubeconfig used to reach the API server
KUBELET_KUBECONFIG="--kubeconfig=/usr/local/kubernetes/config/kubelet.kubeconfig"
# Allow containers to request privileged mode; defaults to false
KUBE_ALLOW_PRIV="--allow-privileged=false"
# Cluster DNS settings, matching the service address range given earlier
KUBELET_DNS_IP="--cluster-dns=10.0.0.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
# Do not refuse to start when swap is enabled
KUBELET_SWAP="--fail-swap-on=false"
(3) kubelet systemd unit file
[root@server2 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kubelet
ExecStart=/usr/local/kubernetes/bin/kubelet \
${KUBE_LOGTOSTDERR} \
${KUBE_LOG_LEVEL} \
${NODE_ADDRESS} \
${NODE_PORT} \
${NODE_HOSTNAME} \
${KUBELET_KUBECONFIG} \
${KUBE_ALLOW_PRIV} \
${KUBELET_DNS_IP} \
${KUBELET_DNS_DOMAIN} \
${KUBELET_SWAP}
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
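Before starting the service, it can help to preview the exact command line the unit will produce. The helper below is only a sketch (preview_kubelet_cmd is a hypothetical function, not part of the installation); it sources the env file the same way EnvironmentFile= does:

```shell
preview_kubelet_cmd() {
  # Source the env file, as systemd's EnvironmentFile= would.
  # shellcheck disable=SC1090
  . "$1"
  echo kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $NODE_ADDRESS $NODE_PORT \
       $NODE_HOSTNAME $KUBELET_KUBECONFIG $KUBE_ALLOW_PRIV \
       $KUBELET_DNS_IP $KUBELET_DNS_DOMAIN $KUBELET_SWAP
}

# On the node:
#   preview_kubelet_cmd /usr/local/kubernetes/config/kubelet
```

A mistyped flag in the config file shows up here immediately, rather than as a failed systemctl start.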
(4) Start the service and enable it at boot:
[root@server2 ~]# swapoff -a ###disable swap before starting kubelet
[root@server2 ~]# systemctl enable kubelet
[root@server2 ~]# systemctl start kubelet
3. Install kube-proxy
(1) kube-proxy configuration file
[root@server2 ~]# cat /usr/local/kubernetes/config/kube-proxy
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log verbosity level
KUBE_LOG_LEVEL="--v=4"
# Custom node name (this node's IP)
NODE_HOSTNAME="--hostname-override=10.10.10.2"
# API server address (the master's IP)
KUBE_MASTER="--master=http://10.10.10.1:8080"
(2) kube-proxy systemd unit file
[root@server2 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kube-proxy
ExecStart=/usr/local/kubernetes/bin/kube-proxy \
${KUBE_LOGTOSTDERR} \
${KUBE_LOG_LEVEL} \
${NODE_HOSTNAME} \
${KUBE_MASTER}
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@server2 ~]# systemctl enable kube-proxy
[root@server2 ~]# systemctl restart kube-proxy
VII. Install flannel (on all 3 machines)
1. Move the binaries into the bin directory
[root@server1 ~]# tar xf flannel-v0.7.1-linux-amd64.tar.gz
[root@server1 ~]# mv /root/{flanneld,mk-docker-opts.sh} /usr/local/kubernetes/bin/
2. flannel configuration file
[root@server1 kubernetes]# vim /usr/local/kubernetes/config/flanneld
# Flanneld configuration options
# etcd URL. Point this at the server where etcd runs (the etcd host's IP)
FLANNEL_ETCD="http://10.10.10.1:2379"
# etcd config key: the key prefix flannel queries for address range assignment
FLANNEL_ETCD_KEY="/atomic.io/network"
# Any additional options to pass; set --iface to your NIC's name
FLANNEL_OPTIONS="--iface=eth0"
3. flannel systemd unit file
[root@server1 kubernetes]# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/usr/local/kubernetes/config/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/local/kubernetes/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/local/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
4. Set the etcd key
Note: set this on one machine only; etcd replicates it to the others.
[root@server1 ~]# etcdctl mkdir /atomic.io/network
###the network below should be in the same segment as your docker bridge's own IP
[root@server1 ~]# etcdctl mk /atomic.io/network/config "{ \"Network\": \"172.17.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
{ "Network": "172.17.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
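A malformed JSON value in this key silently breaks flannel's subnet allocation, so it is worth validating the config before writing it. A minimal sketch, assuming python3 is available on the host:

```shell
# The same config value used in the etcdctl mk command above.
CONFIG='{ "Network": "172.17.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'

# Fail early if the JSON is malformed.
echo "$CONFIG" | python3 -m json.tool > /dev/null && echo "flannel config JSON ok"

# Then, on the etcd host:
#   etcdctl mk /atomic.io/network/config "$CONFIG"
#   etcdctl get /atomic.io/network/config
```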
5. Adjust the docker configuration
Because docker must use flanneld's network, add the following to docker's systemd unit: EnvironmentFile=-/etc/sysconfig/flanneld, EnvironmentFile=-/run/flannel/subnet.env, and --bip=${FLANNEL_SUBNET} on the daemon command line.
[root@server1 ~]# vim /usr/lib/systemd/system/docker.service
6. Start flannel and docker
[root@server1 ~]# systemctl enable flanneld.service
[root@server1 ~]# systemctl restart flanneld.service
[root@server1 ~]# systemctl daemon-reload
[root@server1 ~]# systemctl restart docker.service
7. Test flannel
ip addr ###the setup is complete when the flannel-assigned IP and the docker IP are in the same subnet
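The same-subnet check can also be scripted. The helper below is a sketch that extracts FLANNEL_SUBNET from the subnet.env file flanneld writes; the live comparison against docker0 is shown in the trailing comment:

```shell
get_flannel_subnet() {
  # Print the FLANNEL_SUBNET value from a subnet.env-style file.
  sed -n 's/^FLANNEL_SUBNET=//p' "$1"
}

# On a live node:
#   subnet=$(get_flannel_subnet /run/flannel/subnet.env)
#   ip addr show docker0 | grep -q "inet ${subnet%/*}" && echo "docker0 on flannel subnet"
```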
8. Cluster test
Note: if kubelet is not installed on the master, kubectl get nodes will not show the master!
[root@server1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.10.10.1 Ready <none> 1d v1.8.13
10.10.10.2 Ready <none> 1d v1.8.13
10.10.10.3 Ready <none> 1d v1.8.13
[root@server1 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
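For a scripted health check, the node table above can be parsed directly. count_ready_nodes is a hypothetical helper that counts nodes reporting Ready:

```shell
count_ready_nodes() {
  # Skip the header row; count rows whose STATUS column is Ready.
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# On the master:
#   kubectl get nodes | count_ready_nodes   # this cluster should report 3
```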
With that, a basic K8s cluster is installed. Next up: installing harbor and the add-ons.
VIII. Install harbor
Harbor is used here to manage the image registry; I will not go into its background.
Note: if you only use local images you can skip harbor, but you will need busybox, and each YAML file must set imagePullPolicy: Never or imagePullPolicy: IfNotPresent at the same level as image. The default is Always, which pulls images over the network.
1. Install docker-compose
(1) Download
https://github.com/docker/compose/releases/ ###official download page
[root@server1 ~]# wget https://github.com/docker/compose/releases/download/1.22.0/docker-compose-Linux-x86_64
(2) Install
[root@server1 ~]# cp docker-compose-Linux-x86_64 /usr/local/kubernetes/bin/docker-compose
[root@server1 ~]# chmod +x /usr/local/kubernetes/bin/docker-compose
(3) Check the version
[root@server1 ~]# docker-compose -version
docker-compose version 1.22.0, build f46880fe
2. Install harbor
(1) Download
Official installation guide: https://github.com/vmware/harbor/blob/master/docs/installation_guide.md
https://github.com/vmware/harbor/releases#install ###download page
[root@server1 ~]# wget http://harbor.orientsoft.cn/harbor-v1.5.0/harbor-offline-installer-v1.5.0.tgz
(2) Unpack the tarball
[root@server1 ~]# tar xf harbor-offline-installer-v1.5.0.tgz -C /usr/local/kubernetes/
(3) Configure harbor.cfg
[root@server1 ~]# grep -v "^#" /usr/local/kubernetes/harbor/harbor.cfg
_version = 1.5.0
###change this to the local machine's IP
hostname = 10.10.10.1
ui_url_protocol = http
max_job_workers = 50
customize_crt = on
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key
secretkey_path = /data
admiral_url = NA
log_rotate_count = 50
log_rotate_size = 200M
http_proxy =
https_proxy =
no_proxy = 127.0.0.1,localhost,ui
email_identity =
email_server = smtp.mydomain.com
email_server_port = 25
email_username = sample_admin@mydomain.com
email_password = abc
email_from = admin <sample_admin@mydomain.com>
email_ssl = false
email_insecure = false
harbor_admin_password = Harbor12345
auth_mode = db_auth
ldap_url = ldaps://ldap.mydomain.com
ldap_basedn = ou=people,dc=mydomain,dc=com
ldap_uid = uid
ldap_scope = 2
ldap_timeout = 5
ldap_verify_cert = true
ldap_group_basedn = ou=group,dc=mydomain,dc=com
ldap_group_filter = objectclass=group
ldap_group_gid = cn
ldap_group_scope = 2
self_registration = on
token_expiration = 30
project_creation_restriction = everyone
db_host = mysql
db_password = root123
db_port = 3306
db_user = root
redis_url =
clair_db_host = postgres
clair_db_password = password
clair_db_port = 5432
clair_db_username = postgres
clair_db = postgres
uaa_endpoint = uaa.mydomain.org
uaa_clientid = id
uaa_clientsecret = secret
uaa_verify_cert = true
uaa_ca_cert = /path/to/ca.pem
registry_storage_provider_name = filesystem
registry_storage_provider_config =
(4) Install
[root@server1 ~]# cd /usr/local/kubernetes/harbor/ ###you must run from this directory; logs go to /var/log/harbor/
[root@server1 harbor]# ./prepare
[root@server1 harbor]# ./install.sh
(5) Check the resulting containers
[root@server1 harbor]# docker ps ###status should be Up
You can now log in from a browser at http://10.10.10.1 (the master's IP). Default account: admin, default password: Harbor12345.
(6) Add registry authentication
<1> server1 (master)
[root@server1 harbor]# vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=10.10.10.1'
<2> Add on all three machines
[root@server1 ~]# vim /etc/docker/daemon.json
{
  "insecure-registries": [
    "10.10.10.1"
  ]
}
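A JSON syntax error in daemon.json will keep the docker daemon from starting, so it pays to validate the file before restarting. A minimal sketch (writing to /tmp for illustration; the real file is /etc/docker/daemon.json), assuming python3 is available:

```shell
# Write a scratch copy of the config and check that it parses as JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries": [
    "10.10.10.1"
  ]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json ok"

# After editing the real /etc/docker/daemon.json, restart docker:
#   systemctl restart docker
```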
(7) Test
<1> Log in from the shell
[root@server1 ~]# docker login 10.10.10.1 ###account: admin, password: Harbor12345
Username: admin
Password:
Login Succeeded
Once logged in, we can use the harbor registry.
<2> Log in from the browser
(8) Login error:
<1> Error:
[root@server1 ~]# docker login 10.10.10.1
Error response from daemon: Get http://10.10.10.1/v2/: unauthorized: authentication required
<2> Fix:
Add the registry authentication shown above, or re-check the password.
(9) Basic harbor operations
<1> Take it offline
# cd /usr/local/kubernetes/harbor/
# docker-compose down -v   ###or: docker-compose stop
<2> Edit the configuration
Edit harbor.cfg and docker-compose.yml
<3> Redeploy and bring it back up
# cd /usr/local/kubernetes/harbor/
# ./prepare
# docker-compose up -d   ###or: docker-compose start
3. Using harbor
The browser UI shows a default project named library; you can also create your own. We will use this default project.
(1) Prepare the images
###retag and remove the originals
docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 10.10.10.1/library/kubernetes-dashboard-amd64:v1.8.3
docker rmi gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3
docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 10.10.10.1/library/k8s-dns-sidecar-amd64:1.14.7
docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 10.10.10.1/library/k8s-dns-kube-dns-amd64:1.14.7
docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 10.10.10.1/library/k8s-dns-dnsmasq-nanny-amd64:1.14.7
docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
###push to harbor
docker push 10.10.10.1/library/kubernetes-dashboard-amd64:v1.8.3
docker push 10.10.10.1/library/k8s-dns-sidecar-amd64:1.14.7
docker push 10.10.10.1/library/k8s-dns-kube-dns-amd64:1.14.7
docker push 10.10.10.1/library/k8s-dns-dnsmasq-nanny-amd64:1.14.7
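The retag-and-push pairs above all follow one pattern, so they can be driven by a loop. A sketch (the echo prefix makes it a dry run; remove it to actually tag and push):

```shell
HARBOR=10.10.10.1/library
IMAGES="
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
"
for img in $IMAGES; do
  name=${img##*/}                         # keep only name:tag
  echo docker tag "$img" "$HARBOR/$name"  # remove echo to run for real
  echo docker push "$HARBOR/$name"
done
```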
(2) Test via the browser
IX. Install kube-dns
Download: https://github.com/kubernetes/kubernetes/tree/release-1.8/cluster/addons/dns
(1) The files we need:
kubedns-sa.yaml
kubedns-svc.yaml.base
kubedns-cm.yaml
kubedns-controller.yaml.base
[root@server1 data]# unzip kubernetes-release-1.8.zip
[root@server1 dns]# pwd
/data/kubernetes-release-1.8/cluster/addons/dns
[root@server1 dns]# cp {kubedns-svc.yaml.base,kubedns-cm.yaml,kubedns-controller.yaml.base,kubedns-sa.yaml} /root
(2) Check the clusterIP (it must match the kubelet --cluster-dns address)
[root@server2 ~]# cat /usr/local/kubernetes/config/kubelet
(3) The four YAML files can be merged into a single kube-dns.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2   ###the DNS address set in the kubelet config
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      #imagePullSecrets:
      #- name: registrykey-aliyun-vpc
      containers:
      - name: kubedns
        image: 10.10.10.1/library/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --kube-master-url=http://10.10.10.1:8080   ###change to your cluster's master address
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: 10.10.10.1/library/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: 10.10.10.1/library/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
(4) Create and delete
# kubectl create -f kube-dns.yaml ###create
# kubectl delete -f kube-dns.yaml ###delete; you do not need to run this step
(5) Verify
[root@server1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-5855d8c4f7-sbb7x 3/3 Running 0
[root@server1 ~]# kubectl describe pod -n kube-system kube-dns-5855d8c4f7-sbb7x ###on errors, use this to inspect the failure details
X. Install the dashboard
1. Configure kube-dashboard.yaml
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 10.10.10.1/library/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://10.10.10.1:8080   ###change to the master's IP
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090
  selector:
    k8s-app: kubernetes-dashboard
2. Create the dashboard
# kubectl create -f kube-dashboard.yaml
3. Check the pod and node IPs
[root@server1 dashboard]# kubectl get pods -o wide --namespace kube-system
NAME READY STATUS RESTARTS AGE IP NODE
kube-dns-5855d8c4f7-sbb7x 3/3 Running 0 1h 172.17.89.3 10.10.10.1
kubernetes-dashboard-7f8b5f54f9-gqjsh 1/1 Running 0 1h 172.17.39.2 10.10.10.3
4. Test
(1) Browser
http://10.10.10.3:30090