Environment
Host IP: 192.168.120.129, CentOS 7

Step 1. Disable the firewall and SELinux

# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
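
Note that setenforce 0 only disables SELinux until the next reboot; to make the change permanent, edit /etc/selinux/config as well. A minimal sketch of that edit, demonstrated here against a scratch copy of the file so it can be dry-run safely (on the real host, point sed at /etc/selinux/config itself):

```shell
# Make the SELinux change survive reboots by switching SELINUX=enforcing
# to SELINUX=disabled in the config file. Shown on a temporary copy.
conf=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"  # stand-in for /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$conf"      # the actual edit
grep '^SELINUX=' "$conf"                                      # prints SELINUX=disabled
rm -f "$conf"
```

After editing the real file, SELinux stays disabled across reboots.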

Step 2. Install Docker

# yum -y install docker
# systemctl start docker
# systemctl enable docker

Step 3. Install the Kubernetes components

# yum -y install kubernetes

1. The etcd database
(1) Install etcd

# yum -y install etcd

(2) Configure etcd

# vim /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.120.129:2379"

(3) Start etcd and verify it is healthy

# systemctl restart etcd
# systemctl enable etcd
# etcdctl cluster-health

2. Configure kube-apiserver, kube-controller-manager, and kube-scheduler
(1) Edit the apiserver file

# vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"          # address the API server listens on

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"             # port the API server listens on

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"            # port used to talk to the kubelet on each node

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.120.129:2379"      # address of the etcd service

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"               # ServiceAccount removed from the default list

# Add your own!
KUBE_API_ARGS=""

(2) Edit the shared config file

# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
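
For reference, these environment files are consumed by the packaged systemd units. A trimmed sketch of how kube-apiserver.service wires them in (as shipped by the CentOS kubernetes package; exact contents may vary by version):

```
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
```

This is why editing /etc/kubernetes/config affects all of the daemons, while /etc/kubernetes/apiserver affects only the API server.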

(3) The controller-manager and scheduler configs need no changes
(4) Start the services

# systemctl start kube-apiserver
# systemctl enable kube-apiserver
# systemctl start kube-controller-manager
# systemctl enable kube-controller-manager
# systemctl start kube-scheduler
# systemctl enable kube-scheduler
# systemctl start kubelet 
# systemctl enable kubelet 
# systemctl start kube-proxy
# systemctl enable kube-proxy

Step 4. Deploy the flannel network

# yum -y install flannel
# vim /etc/sysconfig/flanneld 
FLANNEL_ETCD_ENDPOINTS="http://192.168.120.129:2379"			# address of the etcd server
FLANNEL_ETCD_PREFIX="/atomic.io/network"			# etcd key prefix flannel reads its config from
# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart kube-apiserver
# systemctl restart kube-controller-manager
# systemctl restart kube-scheduler
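
The value written by etcdctl mk must be valid JSON containing a Network key; a typo here leaves flanneld unable to allocate subnets. A hedged sketch of a quick local sanity check on the string before storing it, using only grep:

```shell
# Check that the flannel config string contains a "Network" key holding
# a CIDR-shaped value before writing it into etcd.
cfg='{ "Network": "172.16.0.0/16" }'
echo "$cfg" | grep -Eq '"Network" *: *"[0-9]+(\.[0-9]+){3}/[0-9]+"' \
  && echo "config looks sane"
```

If the pattern does not match, fix the string before running etcdctl mk.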

After flannel is installed and started, a new flannel0 interface appears on the machine. (Note that Docker itself must also be restarted before containers pick up addresses from the flannel subnet; without that, pods stay on docker0's default 172.17.0.0/16 range, as the pod IPs later in this article show.)

# ifconfig
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.16.57.0  netmask 255.255.0.0  destination 172.16.57.0
        inet6 fe80::3325:8d5c:98dd:a381  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Step 5. Test Kubernetes by creating nginx
1. Define an nginx ReplicationController

# vim k8s_rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginxrc
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80
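
To reach the pods through a stable virtual IP (allocated from the --service-cluster-ip-range configured earlier) rather than individual pod IPs, a Service can be layered on top of the RC. A minimal sketch; the name myweb-svc is an assumption, and only the selector app: myweb must match the RC's pod labels:

```yaml
# Hypothetical Service exposing the myweb pods on a cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc        # assumed name, not from the original setup
spec:
  selector:
    app: myweb           # must match the pod label in the RC template
  ports:
  - port: 80             # service port, forwarded to containerPort 80
```

It is created the same way, with kubectl create -f, and kubectl get svc then shows the allocated 10.254.x.x cluster IP.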

2. Submit it to the cluster

# kubectl create -f k8s_rc.yml

3. Check the RC that was created

# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginxrc   2         2         2         13m

4. Check the pods

# kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
nginxrc-c1tbx   1/1       Running   0          14m
nginxrc-tvc1g   1/1       Running   0          14m
# kubectl get pod -o wide
NAME            READY     STATUS    RESTARTS   AGE       IP           NODE
nginxrc-c1tbx   1/1       Running   0          14m       172.17.0.2   192.168.120.129
nginxrc-tvc1g   1/1       Running   0          14m       172.17.0.3   192.168.120.129

5. Test connectivity

# ping 172.17.0.2
# ping 172.17.0.3
# curl -I http://172.17.0.2          # nginx should answer with an HTTP response

Troubleshooting
Problems when creating the RC
Running kubectl get pod may show pods stuck in the "ContainerCreating" status. What causes this?
Run kubectl describe pod against one of the nginx pods, and the output contains:

failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image "registry.access.redhat.com/rhel7/pod-infrastructure:latest""

17m 10s 7 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
Looking at /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt shows it is a symlink, but its target, /etc/rhsm/ca/redhat-uep.pem, does not exist on the machine.

Method 1: First check that the image address in the yml file is correct.

Method 2:
Try installing the rhsm packages first:

# yum -y install *rhsm*

After the install completes, recreate the nginx RC:

# kubectl delete -f k8s_rc.yml
# kubectl create -f k8s_rc.yml

If the pods are still stuck in ContainerCreating, this did not fix it.

Method 3:
After the install, run docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest.
If it still fails, try the approach below:

# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
# rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

These two commands produce the /etc/rhsm/ca/redhat-uep.pem file.
If all goes well, the pull now succeeds:

# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest

Delete the original RC and recreate it:

# kubectl delete -f k8s_rc.yml
# kubectl create -f k8s_rc.yml

Method 4:
Manually pull the pause image and retag it:

# docker pull docker.io/kubernetes/pause
# docker tag docker.io/kubernetes/pause gcr.io/google_containers/pause-amd64:3.0
# docker rmi -f docker.io/kubernetes/pause

Then recreate the RC:

# kubectl delete -f k8s_rc.yml
# kubectl create -f k8s_rc.yml

A few tens of seconds later, the RC's pods show the Running status.
