Installation preparation

Environment

IP              Name     Role
192.168.10.164  server1  master
192.168.10.165  server2  worker1
192.168.10.166  server3  worker2

Disable SELinux, disable the firewall, and turn off swap

setenforce 0
systemctl stop firewalld && systemctl disable firewalld
swapoff -a
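Note that setenforce 0 and swapoff -a only last until the next reboot; to make the changes permanent, /etc/selinux/config and /etc/fstab must be edited as well. The sed patterns below show those edits (applied to throwaway copies in /tmp here so the snippet is safe to dry-run anywhere; on a real node, target the files in /etc directly):

```shell
# Throwaway copies so the snippet is safe to dry-run;
# on a real node the targets are /etc/selinux/config and /etc/fstab.
printf 'SELINUX=enforcing\n' > /tmp/selinux-config-demo
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab-demo

# Permanently switch SELinux off (takes effect on the next boot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /tmp/selinux-config-demo
# Comment out the swap entry so swap is not remounted on boot
sed -i '/ swap / s/^/#/' /tmp/fstab-demo

cat /tmp/selinux-config-demo /tmp/fstab-demo
```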

Update the hosts file on all three machines

cat <<EOF >>/etc/hosts
192.168.10.164 server1
192.168.10.165 server2
192.168.10.166 server3
EOF

Install Docker

Remove any existing Docker-related packages

yum remove docker
rpm -e `rpm -qa |grep docker`
yum autoremove

Set up the Aliyun yum repository (used below to install Docker CE, v19.x)

[root@localhost ~]# rm -rfv /etc/yum.repos.d/*
[root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Install docker-ce and supporting packages from the Aliyun repository

yum install -y vim bash-completion net-tools gcc
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce

Configure the Aliyun Docker registry mirror (accelerator)

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
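A malformed /etc/docker/daemon.json prevents dockerd from starting at all, so it is worth validating the JSON before restarting. A minimal check (a sketch; it uses Python's stdlib json.tool, which on stock CentOS 7 may be invoked as `python` rather than `python3`; a /tmp copy is used here so the snippet can be dry-run anywhere):

```shell
# Write the mirror config to a /tmp copy for demonstration;
# on a real node you would validate /etc/docker/daemon.json itself.
cat > /tmp/daemon.json.demo <<'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
# json.tool exits non-zero and prints the parse error if the file is malformed
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json: OK"
```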

Install kubectl, kubelet and kubeadm

Add the Aliyun Kubernetes repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the k8s components

yum install -y kubectl kubelet kubeadm
systemctl enable kubelet

Initialize the cluster. Note that the image repository must be switched to the Aliyun mirror of the Google registry, and kubernetes-version must match the installed version (1.20.0 here, per the error hint), otherwise init fails.

Everything up to this point must be done on all three machines; the steps below are performed only on the master node.

 kubeadm init --kubernetes-version=1.20.0  \
    --apiserver-advertise-address=192.168.10.164   \
    --image-repository registry.aliyuncs.com/google_containers  \
    --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

apiserver-advertise-address is the address the Kubernetes API server advertises; it matters, and here it is the address of server1. pod-network-cidr is the internal pod network range and can be left as shown.

Run the following so kubectl can manage the cluster; the same config can later be copied to other nodes to manage the cluster from there:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
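As an alternative in root sessions, KUBECONFIG can simply point at the admin config instead of copying it (a common shortcut; add the line to ~/.bash_profile to make it persistent):

```shell
# Point kubectl at the admin kubeconfig for this shell session
export KUBECONFIG=/etc/kubernetes/admin.conf
```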

Check the cluster state with the command below; some nodes will show NotReady because no network add-on is installed yet.

kubectl get node

Install a network add-on. Several are available; here we use Calico:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Run the command below; when every pod shows Running, the installation succeeded.

# kubectl get pod --all-namespaces

Join the cluster

First generate the join command on the master. It must be run on the workers before the token expires (kubeadm tokens have a 24-hour TTL by default).

kubeadm token create --print-join-command
kubeadm join 192.168.10.164:6443 --token 5iaf1k.8atx3xx273ty7rvb     --discovery-token-ca-cert-hash sha256:b8eff8a5e8afae1315e1959e31823df83ecf6958e58e27e426d34f2b1f6bf35f

On server2 and server3 (which have completed all the steps up to, but not including, kubeadm init), run the join command on each worker node:

kubeadm join 192.168.10.164:6443 --token 5iaf1k.8atx3xx273ty7rvb     --discovery-token-ca-cert-hash sha256:b8eff8a5e8afae1315e1959e31823df83ecf6958e58e27e426d34f2b1f6bf35f

After a moment, both worker nodes join successfully.

Install the Kubernetes Dashboard

Create the following recommend.yaml in the current directory. It is adapted from the upstream dashboard recommended manifest; the important changes:

The image repository is changed to one that is reachable from these hosts.

The Deployment apiVersion is updated to apps/v1.

The Service type is changed to NodePort with nodePort 30002, so the dashboard can later be reached directly.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
 
---
 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
 
---
 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort      # changed to NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002   # set the nodePort
  selector:
    k8s-app: kubernetes-dashboard
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
 
---
 
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
 
---
 
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
 
---
 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
 
---
 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
 
---
 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
 
---
 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.3
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Apply the manifest:

kubectl create -f recommend.yaml

Wait for the pods to come up.

Open https://192.168.10.164:30002 in a browser (the self-signed certificate warning must be accepted).

The login page requires a token. Create a dedicated service account with sufficient privileges; the tokens of other existing accounts will not have permission to view cluster resources.

Create user.yaml with the following content:

# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
# admin-user-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply it:

kubectl create -f user.yaml

Run the following to get the login token:

[root@server1 ~]#  kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-5ldxc
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 78c3a1ec-bcd4-495a-b898-bb243546f7e9

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IklfZUMwT1R5V1pkeUtOdDBPczFsSEFkZEtxbHM3Q2ZjOTVMZDY1YXdSSE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTVsZHhjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3OGMzYTFlYy1iY2Q0LTQ5NWEtYjg5OC1iYjI0MzU0NmY3ZTkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Z63G6xHt014Ce7pX8uRpEg5PJqMFD1Ptc13n0ub8EQYDcrsPk2xqsIxFbapDRhSd13EAAhBDMRfFiYY3yqQA7nlguEe0nW6j_yp2QDZtCuGCe5yfMf7ITjiv6B10aP9kQQa5xT1xj_RMEm79iqgVHerf0Fj4yDuDc4o596gOWendYyaUeZTuuhLsMj_1X5eEJeh0LcxcgU2_lNx_qTl_VufGRiy1_vYABdab5BpxLGNiOtxtaWpb7yyjt9beM49XZn5hI-mrhjakCPgi2Fy-w1SqW4BNtiwuSFHIdd5ESx7nLUBVTSPcMGL03EK_mqGwrmMWkmJllLDXRwyTZW8-IQ
ca.crt:     1066 bytes
[root@server1 ~]#
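The token is stored base64-encoded in the secret's data; `kubectl describe` decodes it for you, but if you fetch it for scripting with `kubectl -n kube-system get secret <name> -o jsonpath='{.data.token}'` you must decode it yourself. The decode step is sketched below with a dummy value, since it needs no cluster:

```shell
# Dummy stand-in for the base64-encoded .data.token field of the secret
sample_token='eyJhbGciOiJSUzI1NiJ9.header.payload'
encoded=$(printf '%s' "$sample_token" | base64)

# Decoding recovers the bearer token the dashboard login page expects
printf '%s' "$encoded" | base64 --decode
```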

Paste the token into the dashboard login page. When the dashboard UI appears, the installation is complete.
