Author: Zhang Hua  Published: 2016-08-05
Copyright: this article may be reproduced freely, provided the reproduction includes a hyperlink to the original source, the author information, and this copyright notice ( http://blog.csdn.net/quqi99 )

Background

  1. iSCSI cannot yet run inside a container (netlink is not yet namespace aware), so this article uses Ceph RBD instead of iSCSI. The iscsiadm management tool talks to the iscsid daemon over a unix socket, and iscsid talks to the kernel over a netlink socket. It's worth mentioning that the netlink control code in the kernel is not network namespace aware - however, this article describes three ways to use iSCSI inside containers - https://engineering.docker.com/2019/07/road-to-containing-iscsi/
  2. OVS and KVM are supported inside containers by defining a profile. Setting security.privileged=true and security.nesting=true lets the container create namespaces of its own:
sudo lxc launch ubuntu test -c security.privileged=true
sudo lxc config set test security.privileged true
sudo lxc config set test security.nesting true
#sudo lxc config set test linux.kernel_modules <modules>
sudo lxc exec test bash
#ip netns add ns1  #inside container

Configure LXD

Refer to the "Play with LXD" article to set up an LXD environment on Ubuntu 16.04.

Deploying OpenStack on LXD

1, Download 'openstack-base.zip' from this link; it contains the bundle.yaml used below.

curl https://api.jujucharms.com/charmstore/v5/openstack-base/archive -o openstack-base.zip
unzip openstack-base.zip

2, Run 'juju bootstrap'. Note: do not modify the profile before running this step.

sudo snap install lxd --classic
#export PATH=/snap/bin:$PATH
git clone https://github.com/openstack-charmers/openstack-on-lxd.git
cd openstack-on-lxd/
sudo lxc network set lxdbr0 ipv6.address none
sudo chown -R $USER ~/.config
sudo apt install -y apt-cacher-ng    #config apt proxy cache to speed up (/var/cache/apt-cacher-ng/)
echo 'Acquire::http::Proxy "http://127.0.0.1:3142";' | sudo tee /etc/apt/apt.conf.d/01acng
MY_IP=$(ip addr show lxdbr0 |grep global |awk '{print $2}' |awk -F '/' '{print $1}')
#juju kill-controller lxd-controller
#need to visit streams.canonical.com and cloud-images.ubuntu.com here but there is no juju mirror in domestic so slow
#see https://chubuntu.com/questions/26997/how-do-i-prepare-maas-to-serve-images-on-openstack.html
#juju bootstrap --debug --config default-series=bionic --config apt-http-proxy=`echo $MY_IP`:3142 --config apt-https-proxy=`echo $MY_IP`:3142 localhost lxd-controller
juju bootstrap --debug --config default-series=bionic --config apt-mirror=http://mirrors.aliyun.com/ubuntu/ localhost lxd-controller

no_proxy_192=$(echo 192.168.151.{1..255})
juju bootstrap --debug --config default-series=bionic --config apt-mirror=http://mirrors.aliyun.com/ubuntu/ localhost lxd-controller  \
    --model-default logging-config='<root>=INFO;unit=DEBUG' \                   
    --config http-proxy=http://192.168.151.1:8118 \                             
    --config https-proxy=http://192.168.151.1:8118 \                            
    --config no-proxy="localhost,127.0.0.1,127.0.0.53,${no_proxy_192// /,}" 
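
Once the bootstrap finishes, a quick sanity check with standard juju commands:
juju controllers
juju status -m controller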

Update:
For lxd, you can use 'lxc exec -t lxd-controller -- tail -f -n+1 /var/log/cloud-init-output.log | ts' to monitor the output.
For machines managed by MAAS, first set "maas admin maas set-config name=kernel_opts value='console=tty0 console=ttyS0,115200n8'"; if the machine is managed by virsh you can watch the boot during deploy with 'virsh console <domain>' or 'minicom -D unix#/tmp/guest.monitor'.
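
Note that 'ts' (used above to timestamp each line) comes from the moreutils package, so as a sketch:
sudo apt install -y moreutils    #provides 'ts'
lxc exec -t lxd-controller -- tail -f -n+1 /var/log/cloud-init-output.log | ts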

Important:
The 'juju bootstrap' command above kept failing because of the firewall: as soon as it runs "snap install --channel 4.0/stable juju-db" it needs to reach api.snapcraft.io.

ubuntu@useful-yeti:~$ snap debug connectivity
Connectivity status:
 * api.snapcraft.io: unreachable
error: 1 servers unreachable

$ host api.snapcraft.io
api.snapcraft.io has address 91.189.92.40
api.snapcraft.io has address 91.189.92.19
api.snapcraft.io has address 91.189.92.39
api.snapcraft.io has address 91.189.92.20
api.snapcraft.io has address 91.189.92.41
api.snapcraft.io has address 91.189.92.38

The six api.snapcraft.io IPs above get blocked at random (if at some point all of them are blocked, there is nothing to be done). Because they are blocked in rotation, one workaround is to add a line 'expand-hosts' to /var/lib/libvirt/dnsmasq/maas.conf on the quick-maas host:

root@quick-maas:~# ps -ef |grep dnsma |grep -v grep
libvirt+    7602       1  0 02:09 ?        00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/maas.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
root@quick-maas:~# cat /var/lib/libvirt/dnsmasq/maas.conf |tail -n1
expand-hosts

Then add a line to /etc/hosts:
91.189.92.41 api.snapcraft.io #Note: 91.189.92.41 is itself randomly blocked; it may be reachable the moment you add it and blocked right after, so this workaround is not guaranteed to work.

Then 'kill -9 7602' and restart dnsmasq: /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/maas.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
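
The same workaround as one copy-paste block (a sketch: the dnsmasq PID differs per host, so pkill by the config-file pattern is used instead of kill -9 7602):
#on the quick-maas host: let dnsmasq serve /etc/hosts entries to the pod VMs
echo 'expand-hosts' | sudo tee -a /var/lib/libvirt/dnsmasq/maas.conf
echo '91.189.92.41 api.snapcraft.io' | sudo tee -a /etc/hosts   #this IP may itself be blocked at any time
sudo pkill -f 'dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/maas.conf'
sudo /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/maas.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper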

So in the end it had to be solved with a proxy:

no_proxy_192=$(echo 192.168.151.{1..255})
juju bootstrap --debug --config default-series=bionic --config apt-mirror=http://mirrors.aliyun.com/ubuntu/ localhost lxd-controller \
    --model-default logging-config='<root>=INFO;unit=DEBUG' \
    --config http-proxy=http://192.168.151.1:8118 \
    --config https-proxy=http://192.168.151.1:8118 \
    --config no-proxy="localhost,127.0.0.1,127.0.0.53,${no_proxy_192// /,}"


A few questions:
1, Why is there no problem when using an lxc controller at this point? Because 'maas admin subnet update' set dns_servers=192.168.151.1, so the KVM guests inside the pod use dns_servers=192.168.151.1; with lxd this problem does not exist.

maas admin subnet update 192.168.151.0/24 gateway_ip=192.168.151.1 dns_servers=192.168.151.1

2, Can you debug while 'juju bootstrap' is running?

Yes. One way is the serial console: attach with 'virsh console <id>' and watch the logs.
Another way is ssh. If an ssh key was imported (sudo maas createadmin --username admin --password password --email admin@quqi.com --ssh-import lp:zhhuabj),
you can log in directly with that key. If no ssh key was set up, you can also use ~/.local/share/juju/ssh/juju_id_rsa.
After logging in over ssh, just monitor /var/log/cloud-init-output.log.
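
A minimal sketch of the ssh approach (the machine IP 192.168.151.50 is a hypothetical example):
ssh -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@192.168.151.50 'tail -f -n+1 /var/log/cloud-init-output.log'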

3, You can also deploy a machine in the pod from the MAAS GUI, log in to it, and run 'snap debug connectivity'.

3, Create a model; this automatically generates the juju-openstack-model profile ('juju add-model' automatically runs 'lxc profile create juju-openstack-model 2>/dev/null || echo "juju-openstack-model profile already exists"'). If you do not create a model, a model named default is used, and in that case step 4 below must edit the juju-default profile instead.

juju add-model openstack-model
juju models
lxc profile show juju-openstack-model

4, Edit the juju-openstack-model profile. If this step fails, it is usually because kvm-ok does not work.

#enable kvm nested, if the result is [N], change like follows and reboot the system
cat /sys/module/kvm_intel/parameters/nested
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1
echo 'options kvm_intel nested=1' | sudo tee -a /etc/modprobe.d/qemu-system-x86.conf

#and need to change to use <cpu mode='host-passthrough'>
sudo virsh edit openstack
sudo virsh destroy openstack && sudo virsh start openstack

#sudo apt-get install --reinstall linux-image-extra-$(uname -r)
sudo modprobe nbd
sudo modprobe ip_tables
sudo modprobe openvswitch

cat << EOF > juju-openstack-model.yaml
name: juju-openstack-model
config:
  boot.autostart: "true"
  security.nesting: "true"
  security.privileged: "true"
  raw.lxc: lxc.apparmor.profile=unconfined
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables,ebtables,netlink_diag,nf_nat,overlay
devices:
  eth0:
    mtu: "9000"
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    mtu: "9000"
    name: eth1
    nictype: bridged
    parent: lxdbr0
    type: nic
  kvm:
    path: /dev/kvm
    type: unix-char
  mem:
    path: /dev/mem
    type: unix-char
  root:
    path: /
    pool: default
    type: disk
  tun:
    path: /dev/net/tun
    type: unix-char
EOF
cat juju-openstack-model.yaml | lxc profile edit juju-openstack-model
#other command examples
#lxc profile set juju-openstack-model raw.lxc lxc.aa_profile=unconfined
#lxc profile device add juju-openstack-model fuse unix-char path=/dev/fuse
#sudo lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none
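
To confirm the host can actually provide what the profile passes through, a quick check (a sketch; assumes the cpu-checker package for kvm-ok, and the module list matches linux.kernel_modules above):
sudo apt install -y cpu-checker
kvm-ok                                       #should report that KVM acceleration can be used
cat /sys/module/kvm_intel/parameters/nested  #should print Y (or 1)
for m in openvswitch nbd ip_tables ip6_tables ebtables netlink_diag nf_nat overlay; do
    lsmod | grep -qw "$m" && echo "$m loaded" || echo "$m MISSING"
done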

5, Deploy OpenStack with juju in one shot

git clone https://github.com/openstack-charmers/openstack-on-lxd
cd openstack-on-lxd
juju deploy bundle-xenial-mitaka.yaml
juju status
juju debug-log

juju config neutron-gateway data-port=br-ex:eth1
juju resolved neutron-gateway/0

Note: on 20180703 deploying with bundle.yaml failed, because when it places services into LXD containers with --to: "lxd:0", the containers' lxdbr0 is not attached to a physical NIC. According to this page (https://blog.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained) you would need to run the following inside the 4 LXD containers (the yaml first prepares 4 LXD containers, then nests some OpenStack components inside LXD containers within those 4 containers):
lxc profile device set default eth0 parent eth0
lxc profile device set default eth0 nictype macvlan
Since this must be run inside the containers it is awkward to do. It is better to use the openstack yaml from this page (git clone https://github.com/openstack-charmers/openstack-on-lxd), which drops all the 'to: - lxd:1' style lines. See: https://docs.openstack.org/charm-guide/latest/openstack-on-lxd.html
Other juju command examples - https://paste.ubuntu.com/p/TTg8kcVhms/

Problems encountered during installation

  • If you hit this error - failed to bootstrap model: cannot start bootstrap instance: The container's root device is missing the pool property - add 'pool: default' under the root element of the profile (see the sketch after this list)
  • If bootstrap reports - FATAL: Module ip6_tables,ebtables,netlink_diag not found in directory /lib/modules/4.4.0-98-generic - run 'sudo apt-get install --reinstall linux-image-extra-$(uname -r)' to install the modules (/lib/modules/$(uname -r)/kernel/net/netlink/netlink_diag.ko). This can also happen because the module names in the profile were copied from a web page and contain garbled characters
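
A minimal sketch of the first fix, assuming the juju-openstack-model profile from step 3 (same 'lxc profile device set' syntax as used elsewhere in this article):
lxc profile device set juju-openstack-model root pool default
lxc profile show juju-openstack-model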

Configure and use OpenStack

source novarc
$ cat novarc 
#!/bin/bash
_keystone_unit=$(juju status keystone --format yaml | \
    awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
_keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
_password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v2.0
export OS_AUTH_TYPE=password
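
After sourcing novarc, a quick sanity check (assumes the python-openstackclient package is installed):
openstack token issue
openstack service list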

wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
glance image-create --name xenial --file xenial-server-cloudimg-amd64-disk1.img --visibility public --progress --container-format bare --disk-format qcow2
glance image-list

cd openstack-on-lxd
./neutron-ext-net --network-type flat -g 10.0.8.1 -c 10.0.8.0/24 -f 10.0.8.201:10.0.8.254 ext_net
./neutron-tenant-net -t admin -r provider-router -N 10.0.8.1 internal 192.168.20.0/24
neutron net-list

nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova boot --image xenial --flavor m1.small --key-name mykey --nic net-id=$(neutron net-list | grep internal | awk '{ print $2 }') i1
nova list

neutron floatingip-create ext_net
neutron floatingip-associate $(neutron floatingip-list |grep 10.0.8.202 |awk '{print $2}') $(neutron port-list |grep '192.168.20.3' |awk '{print $2}')

for i in $(openstack security group list | awk '/default/{ print $2 }'); do \
    openstack security group rule create $i --protocol icmp --remote-ip 0.0.0.0/0; \
    openstack security group rule create $i --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 22; \
done

ssh ubuntu@<new-floating-ip>
#juju ssh neutron-gateway/0 -- sudo ip netns exec qrouter-d0e7bf5c-c0ac-4980-b042-68b4550230e5 ping 10.0.8.202

cinder --os-volume-api-version 2 create --name testvolume 1
nova volume-attach xenial $(cinder list | grep testvolume | awk '{ print $2 }') /dev/vdc
cinder --os-volume-api-version 2 create --image-id $(glance image-list |grep trusty |awk '{print $2}') --display-name bootvol 8
nova boot --key-name mykey --image trusty --flavor m1.small --nic net-id=$(neutron net-list |grep ' private ' |awk '{print $2}')  --block-device-mapping vda=$(cinder --os-volume-api-version 2 list |grep bootvol |awk '{print $2}'):::0 i1

Another example - deploying OpenContrail on a single LXD machine

The yaml below is for juju 2.0; for juju 1.x see: http://pastebin.ubuntu.com/24170320/
In practice, deploying the opencontrail vrouter inside a container fails with the error below, so this example only shows how the yaml is written.

2017-03-13 11:46:06 INFO juju-log Loading kernel module vrouter
2017-03-13 11:46:06 INFO install modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.8.0-34-generic/modules.dep.bin'
2017-03-13 11:46:06 INFO juju-log vrouter kernel module failed to load, clearing pagecache and retrying
series: trusty
services:
  # openstack
  ubuntu:
    charm: cs:trusty/ubuntu
    num_units: 1
  ntp:
    charm: cs:trusty/ntp
  mysql:
    charm: cs:trusty/mysql
    options:
      dataset-size: 15%
      max-connections: 1000
    num_units: 1
  rabbitmq-server:
    charm: cs:trusty/rabbitmq-server
    num_units: 1
  keystone:
    charm: cs:~sdn-charmers/trusty/keystone
    options:
      admin-password: password
      admin-role: admin
      openstack-origin: cloud:trusty-mitaka
    num_units: 1
  nova-cloud-controller:
    charm: cs:trusty/nova-cloud-controller
    options:
      network-manager: Neutron
      openstack-origin: cloud:trusty-mitaka
    num_units: 1
  neutron-api:
    charm: cs:trusty/neutron-api
    options:
      manage-neutron-plugin-legacy-mode: false
      openstack-origin: cloud:trusty-mitaka
    num_units: 1
  glance:
    charm: cs:trusty/glance
    options:
      openstack-origin: cloud:trusty-mitaka
    num_units: 1
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard
    options:
      openstack-origin: cloud:trusty-mitaka
    num_units: 1
  nova-compute:
    charm: cs:trusty/nova-compute
    options:
      openstack-origin: cloud:trusty-mitaka
    num_units: 1
  # contrail
  cassandra:
    charm: cs:trusty/cassandra
    options:
      authenticator: AllowAllAuthenticator
      install_sources: |
        - deb http://www.apache.org/dist/cassandra/debian 22x main
        - ppa:openjdk-r/ppa
        - ppa:stub/cassandra
    num_units: 1
  zookeeper:
    charm: cs:~charmers/trusty/zookeeper
    num_units: 1
  kafka:
    charm: cs:~sdn-charmers/trusty/apache-kafka
    num_units: 1
  contrail-configuration:
    charm: cs:~sdn-charmers/trusty/contrail-configuration
    options:
      openstack-origin: cloud:trusty-mitaka
    num_units: 1
  contrail-control:
    charm: cs:~sdn-charmers/trusty/contrail-control
    num_units: 1
  contrail-analytics:
    charm: cs:~sdn-charmers/trusty/contrail-analytics
    num_units: 1
  contrail-webui:
    charm: cs:~sdn-charmers/trusty/contrail-webui
    num_units: 1
  neutron-api-contrail:
    charm: cs:~sdn-charmers/trusty/neutron-api-contrail
    num_units: 0
  neutron-contrail:
    charm: cs:~sdn-charmers/trusty/neutron-contrail
    num_units: 0

relations:
  # openstack
 - [ ubuntu, ntp ]
 - [ keystone, mysql ]
 - [ glance, mysql ]
 - [ glance, keystone ]
 - [ nova-cloud-controller, mysql ]
 - [ nova-cloud-controller, rabbitmq-server ]
 - [ nova-cloud-controller, keystone ]
 - [ nova-cloud-controller, glance ]
 - [ neutron-api, mysql ]
 - [ neutron-api, rabbitmq-server ]
 - [ neutron-api, nova-cloud-controller ]
 - [ neutron-api, keystone ]
 - [ neutron-api, neutron-api-contrail ]
 - [ "nova-compute:shared-db", "mysql:shared-db" ]
 - [ "nova-compute:amqp", "rabbitmq-server:amqp" ]
 - [ nova-compute, glance ]
 - [ nova-compute, nova-cloud-controller ]
 - [ nova-compute, ntp ]
 - [ openstack-dashboard, keystone ]
  # contrail
 - [ kafka, zookeeper ]
 - [ "contrail-configuration:cassandra", "cassandra:database" ]
 - [ contrail-configuration, zookeeper ]
 - [ contrail-configuration, rabbitmq-server ]
 - [ "contrail-configuration:identity-admin", "keystone:identity-admin" ]
 - [ "contrail-configuration:identity-service", "keystone:identity-service" ]
 - [ neutron-api-contrail, contrail-configuration ]
 - [ neutron-api-contrail, keystone ]
 - [ "contrail-control:contrail-api", "contrail-configuration:contrail-api" ]
 - [ "contrail-control:contrail-discovery", "contrail-configuration:contrail-discovery" ]
 - [ "contrail-control:contrail-ifmap", "contrail-configuration:contrail-ifmap" ]
 - [ contrail-control, keystone ]
 - [ "contrail-analytics:cassandra", "cassandra:database" ]
 - [ contrail-analytics, kafka ]
 - [ contrail-analytics, zookeeper ]
 - [ "contrail-analytics:contrail-api", "contrail-configuration:contrail-api" ]
 - [ "contrail-analytics:contrail-discovery", "contrail-configuration:contrail-discovery" ]
 - [ "contrail-analytics:identity-admin", "keystone:identity-admin" ]
 - [ "contrail-analytics:identity-service", "keystone:identity-service" ]
 - [ "contrail-configuration:contrail-analytics-api", "contrail-analytics:contrail-analytics-api" ]
 - [ nova-compute, neutron-contrail ]
 - [ "neutron-contrail:contrail-discovery", "contrail-configuration:contrail-discovery" ]
 - [ "neutron-contrail:contrail-api", "contrail-configuration:contrail-api" ]
 - [ neutron-contrail, keystone ]
 - [ contrail-webui, keystone ]
 - [ "contrail-webui:cassandra", "cassandra:database" ]

Installing OpenStack via conjure-up

We can also install OpenStack with conjure-up:

#Install a lxd container
sudo lxc init ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch, nbd"
printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | sudo lxc config set openstack raw.lxc -
sudo lxc config get openstack raw.lxc
lxc config device add openstack mem unix-char path=/dev/mem
lxc start openstack
lxc list

#Install conjure-up inside the lxd container
#lxc exec openstack bash
lxc exec openstack -- apt update
#lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install squashfuse -y
lxc exec openstack -- ln -s /bin/true /usr/local/bin/udevadm
lxc exec openstack -- snap install conjure-up --classic

#Init lxd container
#Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
#Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)
#lxc exec openstack -- lxd init
lxc exec openstack -- snap install lxd
sleep 10  #avoid the error 'Unable to talk to LXD: Get http://unix.socket/1.0'
lxc exec openstack -- /snap/bin/lxd init --auto
lxc exec openstack -- /snap/bin/lxc network create lxdbr0 ipv4.address=auto ipv4.nat=true ipv6.address=none
lxc exec openstack -- /snap/bin/lxc profile show default

#Deploying OpenStack with conjure-up in nested LXD
#conjure-up is a nice, user friendly, tool that interfaces with Juju to deploy complex services.
#Step 1, select “OpenStack with NovaLXD”
#Step 2, select “localhost” as the deployment target (uses LXD)
#Step 3, select default in all middle steps, and click “Deploy all remaining applications”
lxc exec openstack -- sudo -u ubuntu -i conjure-up
hua@node1:~$ sudo lxc list
+-----------+---------+--------------------------------+------+------------+-----------+
|   NAME    |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+--------------------------------+------+------------+-----------+
| openstack | RUNNING | 10.73.227.154 (eth0)           |      | PERSISTENT | 0         |
|           |         | 10.164.92.1 (lxdbr0)           |      |            |           |
|           |         | 10.101.0.1 (conjureup0)        |      |            |           |
+-----------+---------+--------------------------------+------+------------+-----------+

#Or deploy OpenStack with conjure-up in physical node
sudo snap install lxd
export PATH=/snap/bin:$PATH
sudo /snap/bin/lxd init --auto
sudo /snap/bin/lxc network create lxdbr0 ipv4.address=auto ipv4.nat=true ipv6.address=none
sudo -i
conjure-up openstack #but I hit the error 'This should _not_ be run as root or with sudo' even though I've already used root

Below are some screenshots from the conjure-up process (images not preserved in this copy).

20240824 - Tried again

#fix too many open files - https://docs.openstack.org/charm-guide/victoria/openstack-on-lxd.html
echo fs.inotify.max_queued_events=1048576 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_instances=1048576 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_watches=1048576 | sudo tee -a /etc/sysctl.conf
echo vm.max_map_count=262144 | sudo tee -a /etc/sysctl.conf
echo vm.swappiness=1 | sudo tee -a /etc/sysctl.conf
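
To apply these sysctl settings without a reboot:
sudo sysctl -p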

lxd init --auto
lxc network set lxdbr0 ipv4.address 10.0.8.1/24
lxc network set lxdbr0 ipv4.dhcp.ranges 10.0.8.2-10.0.8.200
lxc network set lxdbr0 bridge.mtu 9000
lxc network unset lxdbr0 ipv6.address
lxc network unset lxdbr0 ipv6.nat
#sudo zpool create lxd-zfs sdb sdc sdd && lxc storage create lxd-zfs zfs source=lxd-zfs

lxc remote add mymirror https://minipc.lan --protocol simplestreams --public
sudo cp ~/ca/ca.crt /usr/local/share/ca-certificates/ca.crt
sudo chmod 644 /usr/local/share/ca-certificates/ca.crt
sudo update-ca-certificates --fresh && wget https://minipc.lan/streams/v1/index.json
sudo cp /tmp/ca/ca.crt /etc/ssl/certs/ssl-cert-snakeoil.pem && sudo systemctl restart snap.lxd.daemon.service
lxc launch mymirror:jammy test
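
To confirm the simplestreams remote works (standard lxc commands; mymirror is the remote added above):
lxc remote list
lxc image list mymirror: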

#there is no way to use the local image mirror created by sstream-mirror
http_proxy="http://192.168.99.186:9311" https_proxy="http://192.168.99.186:9311" no_proxy="127.0.0.1,localhost,::1,10.0.0.0/8,192.168.0.0/16" juju bootstrap localhost lxd-controller --debug
juju model-defaults apt-http-proxy=http://minipc.lan:3128 apt-https-proxy=http://minipc.lan:3128
juju model-defaults juju-http-proxy=http://minipc.lan:3128 juju-https-proxy=http://minipc.lan:3128
juju model-defaults juju-no-proxy=127.0.0.1,localhost,::1,192.168.0.0/16,10.0.0.0/8,172.16.0.0/16
juju model-defaults no-proxy=127.0.0.1,localhost,::1,192.168.0.0/16,10.0.0.0/8,172.16.0.0/16
juju model-defaults snap-http-proxy=http://minipc.lan:3128 snap-https-proxy=http://minipc.lan:3128
#cannot specify both legacy proxy values and juju proxy values
#juju model-defaults --reset http-proxy
#juju model-defaults --reset https-proxy
juju model-defaults |grep proxy
#don't need these ones because we don't use lxd inside vm
#juju model-defaults container-image-metadata-url=https://minipc.lan:443
#juju model-defaults image-metadata-url=https://minipc.lan:443
#juju model-defaults ssl-hostname-verification=false

git clone https://github.com/openstack-charmers/openstack-on-lxd.git ~/openstack-on-lxd
cd ~/openstack-on-lxd
#lxc profile create juju-default 2>/dev/null || echo "juju-default profile already exists"
#cat lxd-profile.yaml | lxc profile edit juju-default
#lxc profile device set juju-default root pool=lxd-zfs
juju add-model mymodel
lxc profile list |grep juju-mymodel
cat ~/openstack-on-lxd/lxd-profile.yaml | lxc profile edit juju-mymodel
#the bundle.yaml in openstack-on-lxd is outdated, so use openstack-bundles/development/openstack-base-jammy-yoga instead
#but need to remove all machine and lxd and ceph related configurations
#but the machine got stuck at: local error: tls: bad record MAC


#The above failed. If falling back to the vm + lxd mode, the workaround below is needed; the vm uses the /home/ubuntu directory. For using cloudinit-userdata to speed up the lxd image, see: https://blog.csdn.net/quqi99/article/details/129286737
