Deploying k8s with juju + MAAS on Ubuntu 20.04, part 2: deploying Charmed Kubernetes #679
References:
Kubernetes documentation
Monitoring a Kubernetes cluster with Graylog and Prometheus
Charmed Kubernetes #679 is a highly-available, production-grade Kubernetes cluster.
Overview
This is a scale-out Kubernetes cluster made up of the following components and features:
- Deep integration with public and private clouds as well as bare metal
- Standard upstream Kubernetes
- Multiple Kubernetes master and worker nodes
- A wide range of CNI options
- TLS between nodes by default
- GPGPU support for high-performance AI/ML, with managed hosting options
For a smaller cluster suitable for testing, deploy the lighter kubernetes-core bundle.
For a lightweight upstream K8s, try MicroK8s!
Documentation
For detailed instructions on deploying and managing Charmed Kubernetes, see the official Charmed Kubernetes documentation.
The actual deployment:
juju deploy cs:bundle/charmed-kubernetes-679 --debug > /root/output/20210621.charmed-k8s.out
Series: "focal",
Channel: "",
Revision: 130,
Placement: "",
Offers: nil,
Units: nil,
},
},
Machines: {
"1": &bundlechanges.Machine{
ID: "1",
Series: "focal",
Annotations: {},
},
"0": &bundlechanges.Machine{
ID: "0",
Series: "focal",
Annotations: {},
},
},
Relations: nil,
ConstraintsEqual: func(string, string) bool {...},
ConstraintGetter: bundlechanges.ConstraintGetter {...},
Sequence: {"machine":2, "space":2, "subnet":3},
sequence: {},
MachineMap: {},
logger: nil,
}
14:19:05 INFO cmd bundlehandler.go:357 Located charm "easyrsa" in charm-store, revision 384
14:19:06 INFO cmd bundlehandler.go:357 Located charm "etcd" in charm-store, revision 594
14:19:06 INFO cmd bundlehandler.go:357 Located charm "flannel" in charm-store, revision 558
14:19:06 INFO cmd bundlehandler.go:357 Located charm "kubeapi-load-balancer" in charm-store, revision 798
14:19:06 INFO cmd bundlehandler.go:357 Located charm "kubernetes-master" in charm-store, revision 1008
14:19:07 INFO cmd bundlehandler.go:357 Located charm "kubernetes-worker" in charm-store, revision 768
Executing changes:
- upload charm easyrsa from charm-store for series focal
14:23:33 DEBUG juju.api monitor.go:35 RPC connection died
ERROR cannot deploy bundle: cannot add charm "easyrsa": cannot read entity archive: unexpected EOF
14:23:33 DEBUG cmd supercommand.go:537 error stack:
cannot read entity archive: unexpected EOF
/build/snapcraft-juju-35d6cf/parts/juju/src/rpc/client.go:178:
/build/snapcraft-juju-35d6cf/parts/juju/src/api/apiclient.go:1248:
/build/snapcraft-juju-35d6cf/parts/juju/src/api/client.go:453:
/build/snapcraft-juju-35d6cf/parts/juju/src/cmd/juju/application/store/store.go:44:
/build/snapcraft-juju-35d6cf/parts/juju/src/cmd/juju/application/deployer/bundlehandler.go:670: cannot add charm "easyrsa"
/build/snapcraft-juju-35d6cf/parts/juju/src/cmd/juju/application/deployer/bundlehandler.go:541:
/build/snapcraft-juju-35d6cf/parts/juju/src/cmd/juju/application/deployer/bundlehandler.go:103:
/build/snapcraft-juju-35d6cf/parts/juju/src/cmd/juju/application/deployer/bundle.go:172: cannot deploy bundle
However, easyrsa could not be downloaded because of network problems, so the components mentioned in the log were downloaded manually:
charmed-kubernetes #679
easyrsa #384
etcd #594
flannel #558
kubeapi-load-balancer #798
kubernetes-master #1008
kubernetes-worker #768
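The archives above can be fetched from any machine that does have internet access. A sketch that prints one charm-store archive URL per component, pinned to the revisions located in the log (the `~containers` namespace and the v5 `archive` endpoint are assumptions about the then-current charm store API, not taken from the deployment itself):

```shell
# Print one download URL per charm, pinned to the revisions from the log.
# NOTE: the ~containers namespace and the v5 archive endpoint are assumptions.
for charm in easyrsa-384 etcd-594 flannel-558 \
             kubeapi-load-balancer-798 kubernetes-master-1008 kubernetes-worker-768; do
  echo "https://api.jujucharms.com/charmstore/v5/~containers/${charm}/archive"
  # e.g.: curl -L -o "${charm}.zip" "$url"   # run where network access exists
done
```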
They were then copied with FileZilla into a newly created charmed-k8s directory on the MAAS host and unpacked one by one, with commands like:
unzip -d /root/charmed-k8s/charmed-kubernets charmed-kubernets.zip
Then adapt bundle.yaml in the /root/charmed-k8s/charmed-kubernets/ directory for a local deployment:
cp /root/charmed-k8s/charmed-kubernets/bundle.yaml /root/charmed-k8s/charmed-kubernets/bundle.yaml.old
vim /root/charmed-k8s/charmed-kubernets/bundle.yaml
Change it to:
description: A highly-available, production-grade Kubernetes cluster.
series: focal
services:
  containerd:
    annotations:
      gui-x: '475'
      gui-y: '800'
    charm: /root/charmed-kubernetes-679/containerd
    resources: {}
  easyrsa:
    annotations:
      gui-x: '90'
      gui-y: '420'
    charm: /root/charmed-kubernetes-679/easyrsa
    constraints: root-disk=8G
    num_units: 1
    resources:
      easyrsa: 5
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '420'
    charm: /root/charmed-kubernetes-679/etcd
    constraints: root-disk=8G
    num_units: 3
    options:
      channel: 3.4/stable
    resources:
      core: 0
      etcd: 3
      snapshot: 0
  flannel:
    annotations:
      gui-x: '475'
      gui-y: '605'
    charm: /root/charmed-kubernetes-679/flannel
    resources:
      flannel-amd64: 761
      flannel-arm64: 758
      flannel-s390x: 745
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: /root/charmed-kubernetes-679/kubeapi-load-balancer
    constraints: mem=4G root-disk=8G
    expose: true
    num_units: 1
    resources: {}
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: /root/charmed-kubernetes-679/kubernetes-master
    constraints: cores=2 mem=4G root-disk=16G
    num_units: 2
    options:
      channel: 1.21/stable
    resources:
      cdk-addons: 0
      core: 0
      kube-apiserver: 0
      kube-controller-manager: 0
      kube-proxy: 0
      kube-scheduler: 0
      kubectl: 0
  kubernetes-worker:
    annotations:
      gui-x: '90'
      gui-y: '850'
    charm: /root/charmed-kubernetes-679/kubernetes-worker
    constraints: cores=4 mem=4G root-disk=16G
    expose: true
    num_units: 3
    options:
      channel: 1.21/stable
    resources:
      cni-amd64: 797
      cni-arm64: 788
      cni-s390x: 800
      core: 0
      kube-proxy: 0
      kubectl: 0
      kubelet: 0
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
- - containerd:containerd
  - kubernetes-worker:container-runtime
- - containerd:containerd
  - kubernetes-master:container-runtime
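Before redeploying, it can help to verify that every local charm path referenced in the bundle actually exists on disk, so a bad unzip fails fast instead of partway through `juju deploy`. A small sketch (the helper name is mine, not part of the deployment):

```shell
# check_charm_paths: list each "charm: /..." path in a bundle file and report
# whether the directory is present, catching a bad unzip before deploying.
check_charm_paths() {
  awk '/^[[:space:]]*charm: \//{print $2}' "$1" | while read -r path; do
    if [ -d "$path" ]; then
      echo "OK      $path"
    else
      echo "MISSING $path"
    fi
  done
}
# usage: check_charm_paths /root/charmed-k8s/charmed-kubernets/bundle.yaml
```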
Then deploy locally again:
juju deploy /root/charmed-k8s/charmed-kubernets/bundle.yaml --debug
On the MAAS side you can then see:
juju status --relations
Model Controller Cloud/Region Version SLA Timestamp
k8s maas-controller mymaas/default 2.8.10 unsupported 17:19:05+08:00
App Version Status Scale Charm Store Channel Rev OS Message
containerd go1.13.8 active 5 containerd local 0 ubuntu Container runtime available
easyrsa 3.0.1 active 1 easyrsa local 0 ubuntu Certificate Authority connected.
etcd 3.4.5 active 3 etcd local 0 ubuntu Healthy with 3 known peers
flannel blocked 5 flannel local 0 ubuntu Missing flannel resource.
kubeapi-load-balancer 1.18.0 active 1 kubeapi-load-balancer local 0 ubuntu Loadbalancer ready.
kubernetes-master 1.21.1 waiting 2 kubernetes-master local 0 ubuntu Waiting for auth-webhook service to start
kubernetes-worker 1.21.1 waiting 3 kubernetes-worker local 0 ubuntu Waiting for cluster credentials.
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 10.0.3.190 Certificate Authority connected.
etcd/0 active idle 1 10.0.3.191 2379/tcp Healthy with 3 known peers
etcd/1* active idle 2 10.0.3.192 2379/tcp Healthy with 3 known peers
etcd/2 active idle 3 10.0.3.193 2379/tcp Healthy with 3 known peers
kubeapi-load-balancer/0* active idle 4 10.0.3.194 443/tcp Loadbalancer ready.
kubernetes-master/0* waiting executing 5 10.0.3.195 Waiting for auth-webhook service to start
containerd/3 active idle 10.0.3.195 Container runtime available
flannel/3 blocked executing 10.0.3.195 Missing flannel resource.
kubernetes-master/1 waiting idle 6 10.0.3.198 Waiting for auth-webhook service to start
containerd/4 active idle 10.0.3.198 Container runtime available
flannel/4 blocked idle 10.0.3.198 Missing flannel resource.
kubernetes-worker/0 waiting idle 7 10.0.3.196 Waiting for cluster credentials.
containerd/2 active idle 10.0.3.196 Container runtime available
flannel/2 blocked idle 10.0.3.196 Missing flannel resource.
kubernetes-worker/1* waiting idle 8 10.0.3.199 Waiting for cluster credentials.
containerd/0* active idle 10.0.3.199 Container runtime available
flannel/0* blocked idle 10.0.3.199 Missing flannel resource.
kubernetes-worker/2 waiting idle 9 10.0.3.197 Waiting for cluster credentials.
containerd/1 active idle 10.0.3.197 Container runtime available
flannel/1 blocked idle 10.0.3.197 Missing flannel resource.
Machine State DNS Inst id Series AZ Message
0 started 10.0.3.190 humane-bee focal default Deployed
1 started 10.0.3.191 modern-moth focal default Deployed
2 started 10.0.3.192 subtle-cobra focal default Deployed
3 started 10.0.3.193 ready-ewe focal default Deployed
4 started 10.0.3.194 solid-dane focal default Deployed
5 started 10.0.3.195 stable-turtle focal default Deployed
6 started 10.0.3.198 native-rhino focal default Deployed
7 started 10.0.3.196 usable-walrus focal default Deployed
8 started 10.0.3.199 stable-doe focal default Deployed
9 started 10.0.3.197 above-owl focal default Deployed
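The flannel units are blocked with "Missing flannel resource.", which in an offline deployment usually means the charm's resource files were never uploaded. `juju attach-resource` is the standard juju 2.x way to supply them by hand; the local file names below are assumptions and must match resources downloaded separately for flannel revision 558:

```
# Attach the per-architecture flannel binaries as charm resources
# (file names are assumptions; use whatever you downloaded for revision 558).
juju attach-resource flannel flannel-amd64=./flannel-amd64
juju attach-resource flannel flannel-arm64=./flannel-arm64
juju attach-resource flannel flannel-s390x=./flannel-s390x
# Then watch the units leave the blocked state:
juju status flannel
```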