k8s Lab 1
Kubernetes architecture
Core roles:
master (management node)
node (compute node)
image (image registry)
1. etcd
Install etcd on the master and modify its configuration
[root@k8s-master ~]# yum -y install etcd.x86_64
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
...........
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
............
[root@k8s-master ~]# systemctl start etcd.service
[root@k8s-master ~]# systemctl enable etcd.service
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{"Network":"10.254.0.0/16","Backend":{"Type":"vxlan"}}' //do not change any of these parameters
[root@k8s-master ~]# etcdctl ls /atomic.io/network/config
[root@k8s-master ~]# etcdctl get /atomic.io/network/config
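If etcd is healthy, the get above should echo back exactly the JSON written by mk; etcdctl cluster-health is another quick check (a sketch, the exact output wording varies by etcd version):
[root@k8s-master ~]# etcdctl cluster-health //expect "cluster is healthy" on the last line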
Build the Docker image
[root@res ~]# yum -y install docker docker-distribution.x86_64
[root@res ~]# systemctl start docker-distribution.service
[root@res ~]# systemctl enable docker-distribution.service
[root@res ~]# systemctl start docker
[root@res ~]# systemctl enable docker
[root@res ~]# docker load -i centos.tar //import the image
[root@res ~]# docker run -it docker.io/centos:latest //create a container
[root@176f9576aa42 /]# vi /etc/yum.repos.d/centos.repo
[root@176f9576aa42 /]# vi /etc/yum.repos.d/local.repo
[root@176f9576aa42 /]# yum repolist
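The contents of the two repo files are not shown in the original notes; a minimal local.repo, assuming a hypothetical yum server at http://192.168.1.254/centos (replace with your own repository URL), could look like:
[local_repo]
name=local CentOS packages
# hypothetical address, point this at your own yum repository
baseurl=http://192.168.1.254/centos
enabled=1
gpgcheck=0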
[root@176f9576aa42 /]# yum -y install bash-completion net-tools iproute psmisc rsync lftp vim
[root@res ~]# docker commit 176f9576aa42 myos:latest //commit the container as a new custom image
[root@res ~]# vim /etc/sysconfig/docker
..................
ADD_REGISTRY='--add-registry 192.168.1.90:5000' //set the default registry address
INSECURE_REGISTRY='--insecure-registry 192.168.1.90:5000' //allow plain HTTP (unencrypted) access to the registry
..................
[root@res ~]# iptables -nL FORWARD //newer Docker versions default the FORWARD chain to DROP; this needs to be fixed
[root@res ~]# vim /lib/systemd/system/docker.service //set the FORWARD policy back to ACCEPT at startup
..............
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
................
[root@res ~]# systemctl daemon-reload //stop all containers first
[root@res ~]# systemctl restart docker //restart docker
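After the restart, the FORWARD policy should now read ACCEPT (a quick check):
[root@res ~]# iptables -nL FORWARD | head -1 //expect: Chain FORWARD (policy ACCEPT)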
[root@res ~]# docker push myos:latest //push to the registry, using the default registry address
[root@res ~]# curl http://192.168.1.90:5000/v2/_catalog //verify
[root@res ~]# curl http://192.168.1.90:5000/v2/myos/tags/list //verify with tags
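If the push succeeded, the two curl calls should return JSON along these lines (assuming myos is the only repository in the registry so far):
{"repositories":["myos"]}
{"name":"myos","tags":["latest"]}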
2. Install flannel
Remove firewalld (firewalld-*) on all machines
Install flannel
Edit the configuration file /etc/sysconfig/flanneld
Start the services (the docker stop/start steps can be skipped on the master)
systemctl enable flanneld.service
systemctl stop docker
systemctl start flanneld.service docker
Run the following on every machine (master and nodes); the master is used as the example
[root@k8s-master ~]# yum -y remove firewalld-*
[root@k8s-master ~]# yum -y install flannel.x86_64
[root@k8s-master ~]# vim /etc/sysconfig/flanneld
...................
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.30:2379"
.....................
[root@k8s-master ~]# systemctl stop docker //not needed on the master
[root@k8s-master ~]# systemctl start flanneld.service
[root@k8s-master ~]# systemctl enable flanneld.service
[root@k8s-master ~]# systemctl start docker //not needed on the master
Verification
Start a container on each of the two node machines; if the container IPs can ping each other, flannel is working.
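A sketch of that verification, assuming the nodes' docker is configured for the 192.168.1.90:5000 registry like the res host and using hypothetical 10.254.x.x addresses:
[root@k8s-0002 ~]# cat /run/flannel/subnet.env //shows the flannel subnet assigned to this node
[root@k8s-0002 ~]# docker run -it myos:latest /bin/bash
[root@c1 /]# ifconfig eth0 //note this container's 10.254.x.x address
[root@c1 /]# ping 10.254.y.z //address of a container started the same way on the other node (hypothetical)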
3. Install Kubernetes
Install and configure the master
[root@k8s-master ~]# yum list kubernetes-master.x86_64 kubernetes-client.x86_64 //check the available versions
........
kubernetes-client.x86_64 1.10.3-0.el7 local_repo
kubernetes-master.x86_64 1.10.3-0.el7 local_repo
[root@k8s-master ~]# yum -y install kubernetes-master.x86_64 kubernetes-client.x86_64
[root@k8s-master ~]# vim /etc/kubernetes/config
................
KUBE_MASTER="--master=http://192.168.1.30:8080" //specify the master
...............
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
..........................
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" //listen address
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.30:2379" //etcd address
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota" //ServiceAccount removed from the admission controllers
...............
Start the services and verify
[root@k8s-master ~]# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
[root@k8s-master ~]# kubectl get cs //check component status
NAME STATUS MESSAGE ERROR
etcd-0 Healthy {"health":"true"}
scheduler Healthy ok
controller-manager Healthy ok
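The API server itself can also be probed on the insecure port configured above (a sketch; the version JSON should match the installed 1.10.3 packages):
[root@k8s-master ~]# curl http://192.168.1.30:8080/version //expect JSON containing "gitVersion": "v1.10.3"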
Install kubernetes-node on the three node machines
[root@k8s-master ~]# ansible node -m shell -a 'yum -y install kubernetes-node'
[root@k8s-0002 ~]# vim /etc/kubernetes/kubelet //this node is used as the example
KUBELET_ADDRESS="--address=0.0.0.0" //listen address
KUBELET_HOSTNAME="--hostname-override=k8s-0002" //set this node's name
KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --pod-infra-container-image=pod-infrastructure:latest"
//the kubelet.kubeconfig file has to be created by hand; --pod-infra-container-image specifies the infrastructure container image
KUBE_MASTER="--master=http://192.168.1.30:8080"
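Because --cgroup-driver has to match the driver Docker itself uses, it is worth checking on each node before starting kubelet (a quick sketch):
[root@k8s-0002 ~]# docker info 2>/dev/null | grep -i 'cgroup driver' //must print systemd to match --cgroup-driver=systemd above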
Load the pod-infrastructure image
[root@res ~]# docker load -i pod-infrastructure.tar
[root@res ~]# docker push pod-infrastructure:latest
[root@k8s-0002 ~]# docker pull pod-infrastructure:latest //confirm the node can pull the image
[root@k8s-master ~]# kubectl config set-cluster local \
> --server="http://192.168.1.30:8080"
Cluster "local" set.
[root@k8s-master ~]# kubectl config set-context --cluster="local" local
Context "local" created.
[root@k8s-master ~]# kubectl config set current-context local
Property "current-context" set.
[root@k8s-master ~]# kubectl config view //view the auto-generated configuration
[root@k8s-master ~]# vim /etc/kubernetes/kubelet.kubeconfig //copy the auto-generated configuration into the kubelet.kubeconfig file
[root@k8s-master ~]# scp /etc/kubernetes/kubelet.kubeconfig 192.168.1.33:/etc/kubernetes/
//the kubelet.kubeconfig file is what tells a node how to connect to the master
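For reference, the generated file is a standard kubeconfig; with the settings above it should look roughly like this:
apiVersion: v1
clusters:
- cluster:
    server: http://192.168.1.30:8080
  name: local
contexts:
- context:
    cluster: local
    user: ""
  name: local
current-context: local
kind: Config
preferences: {}
users: []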
Start the services and verify
[root@k8s-master ~]# ansible node -m shell -a 'systemctl enable kubelet kube-proxy'
[root@k8s-master ~]# ansible node -m shell -a 'systemctl start kubelet kube-proxy'
[root@k8s-master ~]# ansible node -m shell -a 'systemctl status kubelet kube-proxy'
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-0002 Ready <none> 2m v1.10.3
k8s-0003 Ready <none> 2m v1.10.3
k8s-0004 Ready <none> 2m v1.10.3