Requirements for installing K8s

1. Linux kernel 3.10 or newer

2. 64-bit system

3. At least 4 GB of memory

4. Install the EPEL repository

5. Install Docker

6. Enable the yum cache so the installed RPM packages are kept
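Item 6 amounts to setting keepcache=1 in yum's main configuration. A minimal sketch (applied to a copy under /tmp so the snippet is self-contained; the real edit goes in /etc/yum.conf):

```shell
# keepcache=1 makes yum keep downloaded RPMs under its cachedir
# instead of deleting them after installation.
cat > /tmp/yum.conf <<'EOF'
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
EOF
sed -i 's/^keepcache=0/keepcache=1/' /tmp/yum.conf
grep '^keepcache' /tmp/yum.conf
```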

There are three types of IPs in K8s:

Physical IP (host IP)

Cluster IP: 10.254.0.0/16

Pod IP (container IP): 172.16.0.0/16

K8s installation and usage
Installation

Environment: three machines, two nodes (compute nodes) and one master.

yum repositories required: CentOS-Base.repo; Docker 1.12.

Master node

Hostname: K8s-master    IP: 10.0.0.11    OS: CentOS 7.2

yum install etcd -y
yum install docker -y
yum install kubernetes -y
yum install flannel -y
Compute nodes

K8s-node-1    10.0.0.12    CentOS 7.2

K8s-node-2    10.0.0.13    CentOS 7.2

yum install docker -y
yum install kubernetes -y
yum install flannel -y
On the master node

Modify the configuration

Edit the etcd configuration file

[root@k8s-master ~]# vim /etc/etcd/etcd.conf
ETCD_NAME="default"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
Start

systemctl enable etcd.service
systemctl start etcd.service
Check

[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.11:2379
cluster is healthy
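Beyond cluster-health, a quick write/read round trip confirms etcd actually stores data. A sketch using the etcd v2 etcdctl subcommands (the /test key is an arbitrary example and is removed afterwards):

```shell
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 set /test ok
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 get /test
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 rm /test
```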
Edit /etc/kubernetes/apiserver

vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
Edit /etc/kubernetes/config

vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.11:8080"
Start

systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Check that the services started successfully

systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service
On the node machines

vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.11:8080"
node-1

vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=10.0.0.12"
KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
node-2

vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=10.0.0.13"
KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
Start and check

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
Check:

[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 3m
10.0.0.13 Ready 3m
Configure the flannel network

Edit the configuration file

Edit /etc/sysconfig/flanneld on both the master and the nodes

vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.0.0.11:2379"
Configure flannel's network range

etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'

In practice:

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
{ "Network": "172.16.0.0/16" }
Start

On the master, run:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On the nodes, run:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
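After the restarts, it is worth verifying that flannel actually took effect on each machine: flannel0 should hold an address inside 172.16.0.0/16, docker0 should have moved into the per-host subnet flannel assigned, and pods on different nodes should reach each other. A sketch (interface names are the defaults for this flannel/Docker combination):

```shell
# The subnet flanneld leased for this host:
cat /run/flannel/subnet.env
ip addr show flannel0
ip addr show docker0
# Cross-node check: from one node, ping the docker0 address of the
# other node (substitute the address shown on that node).
ping -c 3 <docker0-ip-of-other-node>
```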
Common K8s commands

Command: kubectl create -f hello.yaml

File contents:

[root@k8s-master ~]# vim hello.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "docker.io/busybox:latest"
    command: ["/bin/echo", "hello", "world"]
In practice:

[root@k8s-master ~]# kubectl create -f hello.yaml
pod "hello-world" created

kubectl get pods                       list pods in the default namespace

kubectl describe pods hello-world      show detailed information about hello-world
kubectl delete pods hello-world        delete the pod named hello-world
kubectl replace -f nginx-rc.yaml       update/replace an existing resource
kubectl edit rc nginx                  edit an existing resource in place; changes take effect immediately
kubectl logs nginx-gt1jd               view the pod's logs
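Scaling an RC is another everyday operation; a sketch, assuming an RC named nginx like the one created below:

```shell
kubectl scale rc nginx --replicas=3   # RC starts pods until 3 are running
kubectl get pods                      # three nginx-xxxxx pods
kubectl scale rc nginx --replicas=1   # RC kills the surplus pods
```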
Pitfalls

Image pulls fail because the certificate is missing.

Fix: yum install python-rhsm* -y

Create:

[root@k8s-master ~]# kubectl create -f nginx.yaml
pod "hello-nginx" created
Check that it succeeded

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hello-nginx 1/1 Running 0 2h 172.16.42.2 10.0.0.13
RC: ensuring high availability

The ReplicationController (RC) is the earliest K8s API object for keeping Pods highly available. It monitors the running Pods to ensure the cluster always runs the specified number of Pod replicas. That number can be one or many: if fewer Pods than specified are running, the RC starts new replicas; if more, it kills the surplus. Even with a replica count of 1, running a Pod through an RC is wiser than running it directly, because the RC keeps one Pod alive at all times.

RC YAML:

[root@k8s-master ~]# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Launch the RC-managed container

[root@k8s-master ~]# kubectl create -f nginx-rc.yaml
replicationcontroller "nginx" created
Check:

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-gt1jd 1/1 Running 0 2m 172.16.79.2 10.0.0.12
This way, even if the pod is deleted, the RC immediately starts a new one.
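This self-healing is easy to see: delete the RC's pod and a replacement appears within seconds (use whatever pod name kubectl get pods reports):

```shell
[root@k8s-master ~]# kubectl delete pod nginx-gt1jd
[root@k8s-master ~]# kubectl get pods
# a new nginx-xxxxx pod is already being created by the RC
```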

Version upgrade

[root@k8s-master ~]# cat web-rc2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb-2
spec:
  replicas: 2
  selector:
    app: myweb-2
  template:
    metadata:
      labels:
        app: myweb-2
    spec:
      containers:
      - name: myweb-2
        image: kubeguide/tomcat-app:v2
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: 'mysql'
        - name: MYSQL_SERVICE_PORT
          value: '3306'
Upgrade operation:

[root@k8s-master ~]# kubectl rolling-update myweb -f web-rc2.yaml
Created myweb-2
Scaling up myweb-2 from 0 to 2, scaling down myweb from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling myweb-2 up to 1
Scaling myweb down to 1
Scaling myweb-2 up to 2
Scaling myweb down to 0
Update succeeded. Deleting myweb
replicationcontroller "myweb" rolling updated to "myweb-2"
During the upgrade

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-2-mmlcm 1/1 Running 0 32s 172.16.42.3 10.0.0.13
myweb-71438 1/1 Running 0 2m 172.16.42.2 10.0.0.13
myweb-cx9j2 1/1 Running 0 2m 172.16.79.3 10.0.0.12
nginx-gt1jd 1/1 Running 0 1h 172.16.79.2 10.0.0.12
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-2-0kmzf 1/1 Running 0 7s 172.16.79.4 10.0.0.12
myweb-2-mmlcm 1/1 Running 0 1m 172.16.42.3 10.0.0.13
myweb-cx9j2 1/1 Running 0 2m 172.16.79.3 10.0.0.12
nginx-gt1jd 1/1 Running 0 1h 172.16.79.2 10.0.0.12
Rollback

[root@k8s-master ~]# cat web-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: kubeguide/tomcat-app:v1
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: 'mysql'
        - name: MYSQL_SERVICE_PORT
          value: '3306'
Run:

[root@k8s-master ~]# kubectl rolling-update myweb-2 -f web-rc.yaml
Created myweb
Scaling up myweb from 0 to 2, scaling down myweb-2 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling myweb up to 1
Scaling myweb-2 down to 1
Scaling myweb up to 2
Scaling myweb-2 down to 0
Update succeeded. Deleting myweb-2
replicationcontroller "myweb-2" rolling updated to "myweb"
Check

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-mbndc 1/1 Running 0 1m 172.16.79.3 10.0.0.12
myweb-qh38r 1/1 Running 0 2m 172.16.42.2 10.0.0.13
nginx-gt1jd 1/1 Running 0 1h 172.16.79.2 10.0.0.12
Service (svc) setup

[root@k8s-master ~]# cat web-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: myweb
[root@k8s-master ~]# kubectl create -f web-svc.yaml
[root@k8s-master ~]# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 443/TCP 6h
myweb 10.254.91.34 8080:30001/TCP 1m
Then go to a node and check that port 30001 is listening.

Then test from a browser with http://10.0.0.12:30001 (or 10.0.0.13).
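Both checks can also be scripted (node IPs as in this setup; kube-proxy opens the NodePort on every node):

```shell
# Is anything listening on the NodePort?
ss -lntp | grep 30001
# Does the service answer through either node?
curl -I http://10.0.0.12:30001
curl -I http://10.0.0.13:30001
```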

Web UI management

[root@k8s-master ~]# cat dashboard.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://10.0.0.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
Then the matching service:

[root@k8s-master ~]# cat dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
Start:

kubectl create -f dashboard.yaml
kubectl create -f dashboard-svc.yaml
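Once both objects exist, confirm the pod is running and open the UI. In Kubernetes of this era the apiserver proxies the dashboard under /ui (assuming the insecure port 8080 configured earlier):

```shell
kubectl get pods --namespace=kube-system   # the kubernetes-dashboard pod should be Running
# then browse to:
#   http://10.0.0.11:8080/ui
```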
