Deploying OpenStack with Helm (OpenStack-Helm)
Overview
In order to drive towards a production-ready OpenStack solution, our goal is to provide containerized, yet stable persistent volumes that Kubernetes can use to schedule applications that require state, such as MariaDB (Galera). Although we assume that the project should provide a “batteries included” approach towards persistent storage, we want to allow operators to define their own solution as well. Examples of this work will be documented in another section, however evidence of this is found throughout the project. If you find any issues or gaps, please create a story to track what can be done to improve our documentation.
Note
Please see the supported application versions outlined in the source variable file.
Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other documentation as we explore these options further.
The installation procedures below will take an administrator from a new kubeadm installation to an OpenStack-Helm deployment.
Note
Many of the default container images referenced across OpenStack-Helm charts are not intended for production use; for example, while LOCI and Kolla can be used to produce production-grade images, their public reference images are not. In addition, some of the default images use latest or master tags, which are moving targets and can lead to unpredictable behavior. For production-like deployments, we recommend building custom images, or at a minimum caching a set of known images, and incorporating them into OpenStack-Helm via values overrides.
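If you do override images, each chart exposes an images.tags section in its values for this purpose. The snippet below is a hedged illustration only: the registry host and the tag keys shown (glance_api, glance_registry) are placeholders and should be checked against the chart's values.yaml before use.

# Illustrative only: pin a chart to specific image references via a
# values override. The registry and tag keys below are placeholders;
# consult the chart's values.yaml for the exact key names.
tee /tmp/glance-images.yaml << EOF
images:
  tags:
    glance_api: docker.example.com/openstack/glance-api:1.2.3
    glance_registry: docker.example.com/openstack/glance-registry:1.2.3
EOF

helm upgrade --install glance ./glance \
  --namespace=openstack \
  --values=/tmp/glance-images.yaml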
Warning
Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume mounts by default, the HWE kernel is required to use CephFS.
Kubernetes Preparation
You can use any Kubernetes deployment tool to bring up a working Kubernetes cluster for use with OpenStack-Helm. For production deployments, please choose (and tune appropriately) a highly-resilient Kubernetes distribution, e.g.:
- Airship, a declarative open cloud infrastructure platform
- KubeADM, the foundation of a number of Kubernetes installation solutions
For a lab or proof-of-concept environment, the OpenStack-Helm gate scripts can be used to quickly deploy a multinode Kubernetes cluster using KubeADM and Ansible. Please refer to the deployment guide in the OpenStack-Helm documentation.
Managing and configuring a Kubernetes cluster is beyond the scope of OpenStack-Helm and this guide.
Deploy OpenStack-Helm
Note
The following commands all assume that they are run from the /opt/openstack-helm directory.
Setup Clients on the host and assemble the charts
The OpenStack clients and Kubernetes RBAC rules can be set up, and the charts assembled, by running the following commands:
#!/bin/bash
set -xe

sudo -H -E pip install "cmd2<=0.8.7"
sudo -H -E pip install python-openstackclient python-heatclient --ignore-installed

sudo -H mkdir -p /etc/openstack
sudo -H chown -R $(id -un): /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'password'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

#NOTE: Build charts
make all
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/010-setup-client.sh
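Before moving on, it may be worth confirming that the clients installed and the charts assembled cleanly. The commands below are an optional, illustrative sanity check; openstack configuration show only reads the clouds.yaml written above and does not require Keystone to be running yet.

# Confirm the OpenStack CLI is on the PATH and report its version.
openstack --version

# The clouds.yaml written above defines the 'openstack_helm' cloud;
# show the resulting client configuration to confirm it is picked up.
openstack --os-cloud openstack_helm configuration show

# 'make all' lints and packages each chart; spot-check one of them.
helm lint ./keystone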
Deploy the ingress controller
#!/bin/bash
set -xe

#NOTE: Deploy global ingress
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
tee /tmp/ingress-kube-system.yaml << EOF
pod:
  replicas:
    error_page: 2
deployment:
  mode: cluster
  type: DaemonSet
network:
  host_namespace: true
EOF
helm upgrade --install ingress-kube-system ${OSH_INFRA_PATH}/ingress \
  --namespace=kube-system \
  --values=/tmp/ingress-kube-system.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_KUBE_SYSTEM}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh kube-system

#NOTE: Display info
helm status ingress-kube-system

#NOTE: Deploy namespaced ingress controllers
for NAMESPACE in openstack ceph; do
  # Allow $OSH_EXTRA_HELM_ARGS_INGRESS_ceph and $OSH_EXTRA_HELM_ARGS_INGRESS_openstack overrides
  OSH_EXTRA_HELM_ARGS_INGRESS_NAMESPACE="OSH_EXTRA_HELM_ARGS_INGRESS_${NAMESPACE}"

  #NOTE: Deploy namespace ingress
  tee /tmp/ingress-${NAMESPACE}.yaml << EOF
pod:
  replicas:
    ingress: 2
    error_page: 2
EOF
  helm upgrade --install ingress-${NAMESPACE} ${OSH_INFRA_PATH}/ingress \
    --namespace=${NAMESPACE} \
    --values=/tmp/ingress-${NAMESPACE}.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${!OSH_EXTRA_HELM_ARGS_INGRESS_NAMESPACE}

  #NOTE: Wait for deploy
  ./tools/deployment/common/wait-for-pods.sh ${NAMESPACE}

  #NOTE: Display info
  helm status ingress-${NAMESPACE}
done
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/020-ingress.sh
Deploy Ceph
The script below configures Ceph to use filesystem directory-based storage. To configure a custom block device-based backend, please refer to the ceph-osd values.yaml.
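For reference, a block device-backed OSD is expressed through the same conf.storage.osd structure used in the deployment script further below. The override sketched here is illustrative only: /dev/sdb and /dev/sdc are placeholder devices, and the block-logical type and key names should be verified against the ceph-osd chart's values.yaml for your release.

# Illustrative only: back OSDs with dedicated block devices instead of a
# directory. /dev/sdb and /dev/sdc are placeholders; verify the exact keys
# against the ceph-osd chart's values.yaml.
tee /tmp/ceph-osd-block.yaml << EOF
conf:
  storage:
    osd:
      - data:
          type: block-logical
          location: /dev/sdb
        journal:
          type: block-logical
          location: /dev/sdc
EOF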
Additional information on Kubernetes Ceph-based integration can be found in the documentation for the CephFS and RBD storage provisioners, as well as for the alternative NFS provisioner.
Warning
The upstream Ceph image repository does not currently pin tags to specific Ceph point releases. This can lead to unpredictable results in long-lived deployments. In production scenarios, we strongly recommend overriding the Ceph images to use either custom built images or controlled, cached images.
Note
The ./tools/deployment/multinode/kube-node-subnet.sh script requires docker to run.
#!/bin/bash
set -xe

#NOTE: Deploy command
[ -s /tmp/ceph-fs-uuid.txt ] || uuidgen > /tmp/ceph-fs-uuid.txt
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_FS_ID="$(cat /tmp/ceph-fs-uuid.txt)"
#NOTE(portdirect): to use RBD devices with Ubuntu kernels < 4.5 this
# should be set to 'hammer'
. /etc/os-release
if [ "x${ID}" == "xubuntu" ] && \
   [ "$(uname -r | awk -F "." '{ print $2 }')" -lt "5" ]; then
  CRUSH_TUNABLES=hammer
else
  CRUSH_TUNABLES=null
fi
if [ "x${ID}" == "xcentos" ]; then
  CRUSH_TUNABLES=hammer
fi
tee /tmp/ceph.yaml << EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  storage_secrets: true
  ceph: true
  rbd_provisioner: true
  cephfs_provisioner: true
  client_secrets: false
bootstrap:
  enabled: true
conf:
  ceph:
    global:
      fsid: ${CEPH_FS_ID}
  pool:
    crush:
      tunables: ${CRUSH_TUNABLES}
    target:
      # NOTE(portdirect): 5 nodes, with one osd per node
      osd: 5
      pg_per_osd: 100
  storage:
    osd:
      - data:
          type: directory
          location: /var/lib/openstack-helm/ceph/osd/osd-one
        journal:
          type: directory
          location: /var/lib/openstack-helm/ceph/osd/journal-one
manifests:
  cronjob_checkPGs: true
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
for CHART in ceph-mon ceph-osd ceph-client ceph-provisioners; do
  helm upgrade --install ${CHART} ${OSH_INFRA_PATH}/${CHART} \
    --namespace=ceph \
    --values=/tmp/ceph.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_CEPH_DEPLOY}

  #NOTE: Wait for deploy
  ./tools/deployment/common/wait-for-pods.sh ceph 1200

  #NOTE: Validate deploy
  MON_POD=$(kubectl get pods \
    --namespace=ceph \
    --selector="application=ceph" \
    --selector="component=mon" \
    --no-headers | awk '{ print $1; exit }')
  kubectl exec -n ceph ${MON_POD} -- ceph -s
done
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/030-ceph.sh
Activate the openstack namespace to be able to use Ceph
#!/bin/bash
set -xe

#NOTE: Deploy command
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
tee /tmp/ceph-openstack-config.yaml <<EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: false
  rbd_provisioner: false
  cephfs_provisioner: false
  client_secrets: true
bootstrap:
  enabled: false
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install ceph-openstack-config ${OSH_INFRA_PATH}/ceph-provisioners \
  --namespace=openstack \
  --values=/tmp/ceph-openstack-config.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_CEPH_NS_ACTIVATE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status ceph-openstack-config
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/040-ceph-ns-activate.sh
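At this point it can be useful to confirm that the Ceph-backed storage classes are usable from the openstack namespace. The following optional check assumes the provisioner's default storage class name of general; confirm the actual name with kubectl get storageclass before relying on it.

# List the storage classes created by the Ceph provisioner charts.
kubectl get storageclass

# Create a small test PVC against the RBD-backed class (the class name
# 'general' is the charts' default and may differ in your deployment).
tee /tmp/test-pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-pvc
  namespace: openstack
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: general
EOF
kubectl apply -f /tmp/test-pvc.yaml
kubectl -n openstack get pvc ceph-test-pvc

# Clean up the test claim once it reports 'Bound'.
kubectl -n openstack delete pvc ceph-test-pvc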
Deploy MariaDB
#!/bin/bash
set -xe

#NOTE: Deploy command
tee /tmp/mariadb.yaml << EOF
pod:
  replicas:
    server: 3
    ingress: 3
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install mariadb ${OSH_INFRA_PATH}/mariadb \
  --namespace=openstack \
  --values=/tmp/mariadb.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_MARIADB}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status mariadb
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/050-mariadb.sh
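The MariaDB chart deploys a Galera cluster; an optional check is to confirm that all three server replicas are up. The mariadb-server StatefulSet name below follows the chart's default naming and may differ if you changed the release name.

# Confirm the Galera StatefulSet reports 3/3 ready replicas.
kubectl -n openstack get pods -l application=mariadb -o wide
kubectl -n openstack get statefulset mariadb-server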
Deploy RabbitMQ
#!/bin/bash
set -xe

#NOTE: Deploy command
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install rabbitmq ${OSH_INFRA_PATH}/rabbitmq \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_RABBITMQ}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status rabbitmq
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/060-rabbitmq.sh
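An optional check is to confirm that the RabbitMQ nodes have formed a cluster. The pod name rabbitmq-rabbitmq-0 assumes the chart's default StatefulSet naming; substitute a pod name reported by the first command if yours differs.

# List the RabbitMQ pods, then query cluster membership from one of them.
kubectl -n openstack get pods -l application=rabbitmq
kubectl -n openstack exec rabbitmq-rabbitmq-0 -- rabbitmqctl cluster_status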
Deploy Memcached
#!/bin/bash
set -xe

#NOTE: Lint and package chart
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
make -C ${OSH_INFRA_PATH} memcached

tee /tmp/memcached.yaml <<EOF
manifests:
  network_policy: true
network_policy:
  memcached:
    ingress:
      - from:
          - podSelector:
              matchLabels:
                application: keystone
          - podSelector:
              matchLabels:
                application: heat
          - podSelector:
              matchLabels:
                application: glance
          - podSelector:
              matchLabels:
                application: cinder
          - podSelector:
              matchLabels:
                application: congress
          - podSelector:
              matchLabels:
                application: barbican
          - podSelector:
              matchLabels:
                application: ceilometer
          - podSelector:
              matchLabels:
                application: horizon
          - podSelector:
              matchLabels:
                application: ironic
          - podSelector:
              matchLabels:
                application: magnum
          - podSelector:
              matchLabels:
                application: mistral
          - podSelector:
              matchLabels:
                application: nova
          - podSelector:
              matchLabels:
                application: neutron
          - podSelector:
              matchLabels:
                application: senlin
        ports:
          - protocol: TCP
            port: 11211
EOF

#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install memcached ${OSH_INFRA_PATH}/memcached \
  --namespace=openstack \
  --values=/tmp/memcached.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_MEMCACHED}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status memcached
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/070-memcached.sh
Deploy Keystone
#!/bin/bash
set -xe

#NOTE: Deploy command
helm upgrade --install keystone ./keystone \
  --namespace=openstack \
  --set pod.replicas.api=2 \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_KEYSTONE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status keystone
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list
helm test keystone --timeout 900
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/080-keystone.sh
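With Keystone up, the openstack_helm cloud defined earlier in /etc/openstack/clouds.yaml can be exercised directly; issuing a token is a quick way to confirm that authentication works end to end through the ingress controller.

# Authenticate against the new Keystone deployment and issue a token.
export OS_CLOUD=openstack_helm
openstack token issue
openstack user list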
Deploy Rados Gateway for object store
#!/bin/bash
set -xe

#NOTE: Deploy command
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
tee /tmp/radosgw-openstack.yaml <<EOF
endpoints:
  identity:
    namespace: openstack
  object_store:
    namespace: openstack
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: true
  rgw_keystone_user_and_endpoints: true
bootstrap:
  enabled: false
conf:
  rgw_ks:
    enabled: true
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install radosgw-openstack ${OSH_INFRA_PATH}/ceph-rgw \
  --namespace=openstack \
  --values=/tmp/radosgw-openstack.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_HEAT}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status radosgw-openstack
export OS_CLOUD=openstack_helm
openstack service list
openstack container create 'mygreatcontainer'
curl -L -o '/tmp/superimportantfile.jpg' 'https://imgflip.com/s/meme/Cute-Cat.jpg'
( cd /tmp && openstack object create 'mygreatcontainer' 'superimportantfile.jpg' )
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/090-ceph-radosgateway.sh
Deploy Glance
#!/bin/bash
set -xe

#NOTE: Deploy command
: ${OSH_OPENSTACK_RELEASE:="newton"}
#NOTE(portdirect), this could be: radosgw, rbd, swift or pvc
: ${GLANCE_BACKEND:="swift"}
tee /tmp/glance.yaml <<EOF
storage: ${GLANCE_BACKEND}
pod:
  replicas:
    api: 2
    registry: 2
EOF
if [ "x${OSH_OPENSTACK_RELEASE}" == "xnewton" ]; then
  # NOTE(portdirect): glance APIv1 is required for heat in Newton
  tee -a /tmp/glance.yaml <<EOF
conf:
  glance:
    DEFAULT:
      enable_v1_api: true
      enable_v2_registry: true
manifests:
  deployment_registry: true
  ingress_registry: true
  pdb_registry: true
  service_ingress_registry: true
EOF
fi

helm upgrade --install glance ./glance \
  --namespace=openstack \
  --values=/tmp/glance.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_GLANCE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status glance
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack image list
openstack image show 'Cirros 0.3.5 64-bit'
helm test glance --timeout 900
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/100-glance.sh
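Optionally, uploading and removing a small test image confirms that Glance can write to its configured backend. The CirrOS download URL below is illustrative; any small image file will do.

# Upload a throwaway test image, confirm it lists, then remove it.
curl -L -o /tmp/cirros.img http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
export OS_CLOUD=openstack_helm
openstack image create 'test-cirros' \
  --file /tmp/cirros.img \
  --disk-format qcow2 \
  --container-format bare \
  --public
openstack image list
openstack image delete 'test-cirros'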
Deploy Cinder
#!/bin/bash

#NOTE: Deploy command
tee /tmp/cinder.yaml << EOF
pod:
  replicas:
    api: 2
    volume: 1
    scheduler: 1
    backup: 1
conf:
  cinder:
    DEFAULT:
      backup_driver: cinder.backup.drivers.swift
EOF

helm upgrade --install cinder ./cinder \
  --namespace=openstack \
  --values=/tmp/cinder.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_CINDER}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack volume type list
helm test cinder --timeout 900
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/110-cinder.sh
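Optionally, creating and deleting a small volume confirms that the Cinder services can provision storage from the Ceph backend.

# Create a 1 GiB test volume, confirm it appears, then delete it.
export OS_CLOUD=openstack_helm
openstack volume create --size 1 test-volume
openstack volume list
openstack volume delete test-volume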
Deploy OpenvSwitch
#!/bin/bash

#NOTE: Deploy command
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install openvswitch ${OSH_INFRA_PATH}/openvswitch \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_OPENVSWITCH}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status openvswitch
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/120-openvswitch.sh
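An optional check is to inspect the Open vSwitch bridges on one of the nodes. The label selector below follows the chart's default labels and should be verified against your deployment if the exec fails.

# Pick the first ovs-vswitchd pod and dump its bridge configuration.
OVS_POD=$(kubectl -n openstack get pods \
  -l application=openvswitch,component=openvswitch-vswitchd \
  -o name | head -n 1)
kubectl -n openstack exec ${OVS_POD#pod/} -- ovs-vsctl show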
Deploy Libvirt
#!/bin/bash

#NOTE: Deploy libvirt
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install libvirt ${OSH_INFRA_PATH}/libvirt \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_LIBVIRT}

#NOTE(portdirect): We don't wait for libvirt pods to come up, as they depend
# on the neutron agents being up.

#NOTE: Validate Deployment info
helm status libvirt
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/130-libvirt.sh
Deploy Compute Kit (Nova and Neutron)
#!/bin/bash

#NOTE: Deploy nova
tee /tmp/nova.yaml << EOF
labels:
  api_metadata:
    node_selector_key: openstack-helm-node-class
    node_selector_value: primary
pod:
  replicas:
    api_metadata: 1
    placement: 2
    osapi: 2
    conductor: 2
    consoleauth: 2
    scheduler: 1
    novncproxy: 1
EOF

if [ "x$(systemd-detect-virt)" == "xnone" ]; then
  echo 'OSH is not being deployed in virtualized environment'
  helm upgrade --install nova ./nova \
    --namespace=openstack \
    --values=/tmp/nova.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_NOVA}
else
  echo 'OSH is being deployed in virtualized environment, using qemu for nova'
  helm upgrade --install nova ./nova \
    --namespace=openstack \
    --values=/tmp/nova.yaml \
    --set conf.nova.libvirt.virt_type=qemu \
    --set conf.nova.libvirt.cpu_mode=none \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_NOVA}
fi

#NOTE: Deploy neutron
#NOTE(portdirect): for simplicity we will assume the default route device
# should be used for tunnels
NETWORK_TUNNEL_DEV="$(sudo ip -4 route list 0/0 | awk '{ print $5; exit }')"
tee /tmp/neutron.yaml << EOF
network:
  interface:
    tunnel: "${NETWORK_TUNNEL_DEV}"
labels:
  agent:
    dhcp:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
    l3:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
    metadata:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
pod:
  replicas:
    server: 2
conf:
  neutron:
    DEFAULT:
      l3_ha: False
      max_l3_agents_per_router: 1
      l3_ha_network_type: vxlan
      dhcp_agents_per_network: 1
  plugins:
    ml2_conf:
      ml2_type_flat:
        flat_networks: public
    openvswitch_agent:
      agent:
        tunnel_types: vxlan
      ovs:
        bridge_mappings: public:br-ex
EOF
helm upgrade --install neutron ./neutron \
  --namespace=openstack \
  --values=/tmp/neutron.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_NEUTRON}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack compute service list
openstack network agent list
helm test nova --timeout 900
helm test neutron --timeout 900
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/140-compute-kit.sh
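An optional end-to-end check is to boot a small test instance on a private network. The image name matches the one bootstrapped by the Glance chart earlier; the m1.tiny flavor, the network names, and the subnet range below are illustrative assumptions and may need adjusting for your environment.

# Create a private network and boot a throwaway CirrOS instance on it.
export OS_CLOUD=openstack_helm
openstack network create test-net
openstack subnet create --network test-net --subnet-range 172.24.10.0/24 test-subnet
openstack server create \
  --image 'Cirros 0.3.5 64-bit' \
  --flavor m1.tiny \
  --network test-net \
  test-vm
openstack server list

# Clean up once the instance reaches ACTIVE.
openstack server delete test-vm
openstack subnet delete test-subnet
openstack network delete test-net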
Deploy Heat
#!/bin/bash

#NOTE: Deploy command
tee /tmp/heat.yaml << EOF
pod:
  replicas:
    api: 2
    cfn: 2
    cloudwatch: 2
    engine: 2
EOF

helm upgrade --install heat ./heat \
  --namespace=openstack \
  --values=/tmp/heat.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_HEAT}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack orchestration service list
helm test heat --timeout 900
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/150-heat.sh
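Optionally, creating and deleting a trivial stack confirms that the Heat engine can reach Keystone and the other services. The template below is illustrative and defines no real resources.

# Create an empty verification stack, list it, then remove it.
tee /tmp/empty-stack.yaml << EOF
heat_template_version: 2016-04-08
description: Minimal template used only to verify the Heat deployment.
resources: {}
EOF
export OS_CLOUD=openstack_helm
openstack stack create -t /tmp/empty-stack.yaml test-stack
openstack stack list
openstack stack delete -y test-stack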
Deploy Barbican
#!/bin/bash

#NOTE: Deploy command
helm upgrade --install barbican ./barbican \
  --namespace=openstack \
  --set pod.replicas.api=2 \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_BARBICAN}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
helm test barbican
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/160-barbican.sh
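Optionally, storing and listing a test secret exercises the Barbican API directly. Note that this requires the python-barbicanclient plugin for the openstack CLI, which is not installed by the client setup script above.

# Install the Barbican CLI plugin, then store and list a test secret.
sudo -H -E pip install python-barbicanclient
export OS_CLOUD=openstack_helm
openstack secret store --name test-secret --payload 'not-a-real-secret'
openstack secret list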
Configure OpenStack
Configuring OpenStack for a particular production use-case is beyond the scope of this guide. Please refer to the OpenStack Configuration documentation for your selected version of OpenStack to determine what additional values overrides should be provided to the OpenStack-Helm charts to ensure appropriate networking, security, etc. is in place.