Play with LXD (by quqi99)
Author: Zhang Hua. Published: 2016-08-05.
Copyright: this article may be freely reposted, provided the repost includes a hyperlink to the original source, the author information, and this copyright notice.
http://blog.csdn.net/quqi99
Install LXD
Install LXD, configure it to use a static subnet, and create LXD containers with static IPs.
# install lxd, refer - https://blog.csdn.net/quqi99/article/details/52131486
sudo snap install lxd --classic
#sudo groupadd --system lxd
sudo usermod -a -G lxd $USER #then re-login ssh
#sudo systemctl restart snap.lxd.daemon.service
# must NOT run lxd init with sudo, so cd to the home dir first
cd ~ && lxd init --auto
sudo chown -R $USER ~/.config/
export EDITOR=vim
# use static subnet 192.168.122.0/24 (qemu also uses this subnet) for lxd as well
sudo virsh net-destroy default
lxc network show lxdbr0
lxc network set lxdbr0 ipv4.address=192.168.122.1/24
lxc network set lxdbr0 ipv6.address none
ip addr show lxdbr0
sudo iptables-save |grep 192.168.122
ps -ef |grep 192.168.122
# set lxc profile - https://github.com/openstack-charmers/openstack-on-lxd.git
cat << EOF | tee ./lxd-profile.yaml
config:
  boot.autostart: "true"
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  ens3:
    mtu: "9000"
    name: ens3
    nictype: bridged
    parent: lxdbr0
    type: nic
  ens8:
    mtu: "9000"
    name: ens8
    nictype: bridged
    parent: lxdbr0
    type: nic
  kvm:
    path: /dev/kvm
    type: unix-char
  mem:
    path: /dev/mem
    type: unix-char
  root:
    path: /
    pool: default
    type: disk
  tun:
    path: /dev/net/tun
    type: unix-char
name: juju-default
used_by: []
EOF
lxc profile create juju-default 2>/dev/null || echo "juju-default profile already exists"
cat ./lxd-profile.yaml |lxc profile edit juju-default
#lxc profile device set juju-default root pool=default
lxc profile show juju-default
# create two test lxd containers
lxc network show lxdbr0
cat << EOF | tee network.yml
version: 1
config:
  - type: physical
    name: ens3
    subnets:
      - type: static
        ipv4: true
        address: 192.168.122.20
        netmask: 255.255.255.0
        gateway: 192.168.122.1
        control: auto
  - type: nameserver
    address: 8.8.8.8
EOF
lxc launch ubuntu:focal master -p juju-default --config=user.network-config="$(cat network.yml)"
cat << EOF | tee network.yml
version: 1
config:
  - type: physical
    name: ens3
    subnets:
      - type: static
        ipv4: true
        address: 192.168.122.21
        netmask: 255.255.255.0
        gateway: 192.168.122.1
        control: auto
  - type: nameserver
    address: 192.168.99.1
EOF
lxc launch ubuntu:focal node1 -p juju-default --config=user.network-config="$(cat network.yml)"
lxc exec `lxc list |grep master |awk -F '|' '{print $2}'` bash
lxc exec `lxc list |grep node1 |awk -F '|' '{print $2}'` bash
$ lxc list
+--------+---------+-----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+-----------------------+------+-----------+-----------+
| master | RUNNING | 192.168.122.20 (ens3) | | CONTAINER | 0 |
+--------+---------+-----------------------+------+-----------+-----------+
| node1 | RUNNING | 192.168.122.21 (ens3) | | CONTAINER | 0 |
+--------+---------+-----------------------+------+-----------+-----------+
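As a quick sanity check (a minimal sketch, assuming both containers came up with the static addresses above), verify they can reach each other and the gateway:
lxc exec master -- ping -c 2 192.168.122.21  #master -> node1
lxc exec node1 -- ping -c 2 192.168.122.1    #node1 -> lxdbr0 gateway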
Change the default storage path
The default storage pool lives under /var/snap/lxd/common/lxd/storage-pools, which may run short of space. For example, the following bind-mounts /images/lxd over the default storage pool path:
#just change the storage-pools directory
lxc profile show default
lxc profile device remove default root
lxc storage delete default
cat << EOF | sudo tee -a /etc/fstab
#https://serverfault.com/questions/763201/mount-bind-not-taking-affect-in-fstab-centos-6
#sudo mount -o bind /images/lxd /var/snap/lxd/common/lxd/storage-pools
/images/lxd /var/snap/lxd/common/lxd/storage-pools none bind 0 0
#/images/pbuilder /var/cache/pbuilder none bind 0 0
#/images/schroot /var/cache/schroot none bind 0 0
EOF
sudo mkdir -p /images/lxd && sudo mount -a
sudo mount |grep -E 'images|storage-pools'
#restart snap.lxd.daemon, and create default storage again on '-o bind /images/lxd'
sudo systemctl restart snap.lxd.daemon
lxc storage create default dir && lxc storage show default
lxc profile device add default root disk path=/ pool=default
lxd sql global "SELECT * FROM storage_pools_config"
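To confirm the pool really lives on the bind mount (a quick check; the 'default' subdirectory is what the storage create command above produces):
ls /var/snap/lxd/common/lxd/storage-pools/default
df -h /images/lxd  #should show the filesystem backing /images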
Update 2023-03-27: the root filesystem ran out of space again. It turned out snapd snapshots were eating too much space, so move that directory the same way as above:
#reduce the space in /var/lib/snapd/snapshots/
sudo snap saved && sudo snap forget 86  #list saved snapshot sets, then forget set 86 (an ID from 'snap saved')
sudo systemctl stop snapd.service
sudo systemctl stop snapd.socket
sudo mv /var/lib/snapd /var/lib/snapd_old
$ grep -r 'snapd' /etc/fstab
/images/snapd /var/lib/snapd none bind 0 0
sudo mkdir /var/lib/snapd && sudo mkdir -p /images/snapd
sudo mount -a
sudo rsync -avz /var/lib/snapd_old/* /var/lib/snapd/
sudo rm -rf /var/lib/snapd_old
sudo systemctl start snapd.service
sudo systemctl start snapd.socket
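To verify snapd still works from the new location (a quick check, nothing specific to this setup):
sudo mount |grep /var/lib/snapd  #should show the bind mount
snap list                        #installed snaps should still be visible
df -h /var/lib/snapd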
Update 2023-03-31: with zfs it looks like the following. zfs supports CoW, which means it can reduce the disk space used by snapshots. 'lxd init --storage-backend=zfs --storage-create-device=/dev/nvme0n1p5 --storage-pool zfs --auto' with --storage-create-device can also create the zpool automatically, but it does so via 'zpool create -m none -O compression=on default /dev/nvme0n1p5', i.e. with no mount point. The command below creates the pool with a mount point instead, which is more convenient:
sudo snap set system snapshots.automatic.retention=no
#Both zfs and btrfs support CoW, which can reduce snapshot space
sudo apt install zfsutils-linux -y
sudo umount /images
sudo zpool create -f -m /zfs -O compression=on default /dev/nvme0n1p5 #striped pool, raid-0
#sudo zpool add zfs /dev/sdY
#sudo zpool create zfs mirror /dev/sdX /dev/sdY #mirrored pool, raid-1
#zpool create -m none -O compression=on default /dev/nvme0n1p5 #lxd use '-m none'
sudo zpool status
zfs list
ls /zfs
#sudo zpool destroy default
#sudo zfs snapshot default@snap1
#sudo zfs rollback default@snap1
#lxc move <container_name> <new_container_name> --storage default
#lxc launch ubuntu:18.04 my-container -s zfs
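As a quick illustration of the CoW claim above (the dataset name is an assumption; for containers LXD typically creates datasets like default/containers/<name>):
sudo zfs snapshot default/containers/c1@before
sudo zfs list -t snapshot -o name,used,referenced
#USED stays near zero until c1's data diverges from the snapshot (CoW)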
sudo snap install lxd --classic
sudo usermod -a -G lxd $USER #then re-login ssh
sudo chown -R $USER ~/.config/
export EDITOR=vim
# must NOT run lxd init with sudo, so cd to the home dir first; we don't use --storage-create-device because it would run 'zpool create -m none'
#cd ~ && lxd init --storage-backend=zfs --storage-create-device=/dev/nvme0n1p5 --storage-pool zfs --auto
cd ~ && lxd init --storage-backend=zfs --storage-pool default --auto
#lxc profile device remove default root && lxc storage delete default
#lxc profile device add default root disk path=/ pool=default
#lxc network set lxdbr0 ipv4.address=10.10.10.1/24
#lxc network set lxdbr0 ipv6.address none
One way to change a container's IP
You can:
sudo sed -i "s/10.178.64.89/10.178.64.90/g" /var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases
sudo systemctl restart snap.lxd.daemon
ssh -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@10.178.64.90
Alternatively:
lxc stop c1
lxc network attach lxdbr0 c1 eth0 eth0
lxc config device set c1 eth0 ipv4.address 10.99.10.42
lxc start c1
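Then confirm the container picked up the new address (this assumes c1 gets eth0 via DHCP on the managed network, so the static allocation applies):
lxc list c1
lxc exec c1 -- ip -4 addr show eth0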
[Obsolete] Problems you may hit when installing LXD from the deb package
The method below is obsolete since it covers the deb-packaged lxd; nowadays LXD is installed from the snap.
The commands above invoke the following commands automatically to configure lxdbr0; when something goes wrong, you can run them step by step to debug.
sudo dpkg-reconfigure lxd
cat /etc/default/lxd-bridge #/usr/lib/lxd/lxd-bridge
sudo service lxd-bridge restart
sudo systemctl status lxd-bridge
Configuring lxd-bridge tends to throw errors. The first one you may hit is "Unable to connect to Upstart". That is because Ubuntu 16.04 ships both upstart and systemd; use the commands below to disable the upstart startup path so that services start via init.d or systemd instead.
sudo dpkg-divert --local --rename --add /sbin/initctl
sudo ln -s /bin/true /sbin/initctl
#sudo apt-get --reinstall install upstart
#sudo dpkg-divert --local --remove /sbin/initctl
#sudo rm /sbin/initctl
Another error shows up in /var/log/syslog: 'Aug 5 17:02:54 localhost lxd-bridge.start[23556]: Error: ??? prefix is expected rather than "10.0.8.1/24/24".'. It occurs because the LXD_IPV4_ADDR parameter in the /etc/default/lxd-bridge file generated by sudo dpkg-reconfigure lxd carries a spurious /24:
## IPv4 address (e.g. 10.0.8.1)
LXD_IPV4_ADDR="10.0.8.1/24"
so the following command in /usr/lib/lxd/lxd-bridge fails with the error above:
ifup "${LXD_BRIDGE}" "${LXD_IPV4_ADDR}" "${LXD_IPV4_NETMASK}"
Update 2022-08-24
cat << 'EOF' |tee lxd-profile-config.yaml
#cloud-config
packages:
  - avahi-daemon
  - vim
  - jq
  - silversearcher-ag
  - libnss-mdns
  - git
  - build-essential
  - ubuntu-dev-tools
users:
  - name: zhhuabj
    ssh-authorized-keys:
      - "ssh-rsa XXXXXX user@host" #you can just copy this line from ~/.ssh/id_rsa.pub
    groups: admin
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    passwd: "$6$XXXXXXX" #a crypted password hash copied from /etc/shadow
EOF
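To generate the $6$ crypted hash used above, one option is mkpasswd from the whois package, or openssl (both are assumptions about what is installed on your host):
sudo apt install whois -y  #provides mkpasswd
mkpasswd -m sha-512        #prompts for a password, prints a $6$... hash
#or: openssl passwd -6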
lxc profile set default user.user-data - < lxd-profile-config.yaml
lxc profile device add default src disk source=/home/zhhuabj/src path=/home/hua/src
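To check that the new profile data actually takes effect, launch a scratch container and wait for cloud-init ('t1' is just a throwaway name):
lxc launch ubuntu:jammy t1
lxc exec t1 -- cloud-init status --wait
lxc exec t1 -- id zhhuabj  #the user from the cloud-config should exist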
Update 2023-02-20: attaching LXD containers to the physical network
wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64-lxd.tar.xz
wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.squashfs
lxc image import ./ubuntu-22.04-server-cloudimg-amd64-lxd.tar.xz ./ubuntu-22.04-server-cloudimg-amd64.squashfs --alias jammy
lxc image list
#make your LXD containers get IP from your LAN using macvlan
lxc profile create macvlan
ip route show default  #find the NIC holding the default route, to use as the macvlan parent
lxc profile device add macvlan eth1 nic nictype=macvlan parent=enp0s25
lxc launch jammy i1 --profile default --profile macvlan
lxc config show i1 --expanded
#Failed start validation for device "eth0": Instance DNS name "i1" already used on network
#If you are connecting an instance to multiple NICs then this wouldn’t be using a managed LXD network and wouldn’t trigger the error.
#https://discuss.linuxcontainers.org/t/error-failed-start-validation-for-device-enp3s0f0-instance-dns-name-net17-nicole-munoz-marketing-already-used-on-network/15586/23
#so the workaround is to use 'lxc profile edit juju-default' to remove the second NIC eth1
lxc profile create juju-default 2>/dev/null || echo "juju-default profile already exists"
cat /bak/work/openstack-on-lxd/lxd-profile.yaml |lxc profile edit juju-default
lxc network show lxdbr0
cat << EOF | tee /tmp/network.yml
version: 1
config:
  - type: physical
    name: eth1
    subnets:
      - type: static
        ipv4: true
        address: 10.72.198.122
        netmask: 255.255.255.0
        gateway: 10.72.198.1
        control: auto
  - type: nameserver
    address: 192.168.99.1
EOF
lxc launch jammy i1 -p juju-default --config=user.network-config="$(cat /tmp/network.yml)"
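Then verify the static address was applied inside the container (a quick check under the same assumptions as the network.yml above):
lxc exec i1 -- ip -4 addr show eth1
lxc exec i1 -- ping -c 2 10.72.198.1  #reach the LAN gateway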
$ lxc profile show juju-default
config:
  boot.autostart: "true"
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  eth0:
    nictype: macvlan
    parent: enp0s25
    type: nic
  eth1:
    mtu: "9000"
    name: eth1
    nictype: bridged
    parent: lxdbr0
    type: nic
  kvm:
    path: /dev/kvm
    type: unix-char
  mem:
    path: /dev/mem
    type: unix-char
  root:
    path: /
    pool: default
    type: disk
  tun:
    path: /dev/net/tun
    type: unix-char
name: juju-default
used_by:
- /1.0/instances/i1
Update 2023-02-20: using custom lxd images in a juju environment
If the lxd environment cannot reach the Internet, import images into lxd as follows:
wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64-lxd.tar.xz
wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.squashfs
lxc image import ./ubuntu-22.04-server-cloudimg-amd64-lxd.tar.xz ./ubuntu-22.04-server-cloudimg-amd64.squashfs --alias jammy
lxc image list
Alternatively, provide simplestreams metadata: store the images in glance, and juju then learns the glance image IDs from simplestreams via image-metadata-url/container-image-metadata-url.
e.g.: https://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:download.json
mkdir -p ~/simplestreams/images
IMAGE_ID=b09541be-845a-4af6-8d48-e53e60dabb97
SERIES=jammy
juju metadata generate-image -d ~/simplestreams -i $IMAGE_ID -s $SERIES -r RegionOne -u $OS_AUTH_URL
ls ~/simplestreams/*/streams/*
The generated simplestreams metadata can be stored in swift (or use the glance-simplestreams-sync charm instead); see https://juju.is/docs/olm/cloud-image-metadata. The juju controller can then consume it via image-metadata-url:
juju bootstrap stsstack --no-gui --config image-stream=released --config image-metadata-url=http://10.230.19.58/swift/v1/simplestreams/data/ --config use-default-secgroup=true --config network=zhhuabj_admin_net zhhuabj
VMs in juju models pick it up via image-metadata-url, while containers use container-image-metadata-url (which must be https):
juju model-defaults use-default-secgroup=true network=zhhuabj_admin_net
#remember to add quqi.com to /etc/hosts
juju model-config image-metadata-url=https://quqi.com:443/images/streams/v1 container-image-metadata-url=https://quqi.com:443/images/streams/v1
juju model-config |grep image
Here we skip swift and simply use nginx to serve it over https (see: https://blog.csdn.net/quqi99/article/details/104278572):
openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out ca.crt -keyout ca.key -subj "/C=CN/ST=BJ/O=STS/CN=CA"
for DOMAIN in quqi.com
do
openssl genrsa -out $DOMAIN.key
openssl req -new -key $DOMAIN.key -out $DOMAIN.csr -subj "/C=CN/ST=BJ/O=STS/CN=$DOMAIN"
openssl x509 -req -in $DOMAIN.csr -out $DOMAIN.crt -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650
done
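To double-check the issued certificate before wiring it into nginx (standard openssl inspection, nothing specific to this setup):
openssl x509 -in quqi.com.crt -noout -subject -issuer -dates
openssl verify -CAfile ca.crt quqi.com.crt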
#add 'user ubuntu;' to /etc/nginx/nginx.conf to avoid a Forbidden error
curl --resolve quqi.com:443:10.230.65.104 --cacert ~/ca/ca.crt https://quqi.com:443/images/streams/v1/index.json
$ cat /etc/nginx/sites-available/default
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name quqi.com;
    ssl_certificate /home/ubuntu/ca/quqi.com.crt;
    ssl_certificate_key /home/ubuntu/ca/quqi.com.key;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    location / {
        root /home/ubuntu/simplestreams;
        index index.html;
    }
}
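After editing the site config, validate and reload nginx:
sudo nginx -t && sudo systemctl reload nginx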
Set up a juju test environment:
juju add-machine --series focal --constraints "root-disk=20G mem=12G"
juju model-config logging-config="<root>=DEBUG"
juju model-config image-metadata-url=https://quqi.com:443/images/streams/v1 container-image-metadata-url=https://quqi.com:443/images/streams/v1
#need to restart the jujud-machine server after changing model config
juju ssh 0 -- sudo systemctl restart jujud-machine-0.service
juju remove-application ceph-radosgw --force && juju remove-machine 0/lxd/2 --force
juju ssh 0 -- lxc image delete juju/focal/amd64 #need to delete image for test as well
juju deploy ceph-radosgw --series=focal --to="lxd:0"
juju ssh 0 -- sudo tail -f /var/log/juju/machine-0.log
However, this experiment did not succeed: juju kept fetching data from cloud-images.ubuntu.com.
Update 2024-08-24: local lxd image mirror
#then configure nginx by this page https://blog.csdn.net/quqi99/article/details/129445116
sudo sstream-mirror --verbose --keyring /usr/share/keyrings/ubuntu-cloudimage-keyring.gpg --max 1 --path streams/v1/index.json https://cloud-images.ubuntu.com/releases/ /home/hua/simplestreams 'arch=amd64' 'release~(focal|jammy|noble)' 'ftype~(lxd.tar.xz|squashfs|root.tar.xz|disk-kvm.img)'
lxc remote add mymirror https://minipc.lan --protocol simplestreams --public
sudo cp ~/ca/ca.crt /usr/local/share/ca-certificates/ca.crt
sudo chmod 644 /usr/local/share/ca-certificates/ca.crt
sudo update-ca-certificates --fresh && wget https://minipc.lan/streams/v1/index.json
sudo cp /home/hua/ca/ca.crt /etc/ssl/certs/ssl-cert-snakeoil.pem && sudo systemctl restart snap.lxd.daemon.service
lxc launch mymirror:jammy test
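To see which images the mirror actually serves:
lxc image list mymirror: |head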
Update 2024-10-30: lxd remote
How do we access the lxd service on the remote machine rotom from the local machine x1?
One option is password-based trust:
lxc config set core.https_address [::]:8443
lxc config set core.trust_password password
lxc config show
But the password method is rarely used in practice; use certificates instead.
#client
sudo snap install lxd
sudo usermod -a -G lxd $USER #then re-login ssh
cd ~ && lxd init --auto
sudo chown -R $USER ~/.config/
export EDITOR=vim
lxc launch ubuntu:24.04 --vm
scp /home/hua/snap/lxd/common/config/client.crt zhhuabj@rotom:~/
lxc config trust add client.crt #on rotom
lxc remote add r1 rotom:8443
lxc remote switch r1 #or local
lxc remote get-default
lxc config trust list
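To confirm the remote works, list its instances directly (the 'r1:' prefix points any lxc command at the remote):
lxc list r1: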
How is it used? One way: run 'lxc launch ubuntu:22.04 i1' on x1 and the container gets created on rotom. Alternatively, register rotom as a MAAS VM host:
#maas admin vm-hosts create type=lxd power_address=https://172.16.0.254:8443 project=maas name=maas1 password=password
maas admin vm-hosts create type=lxd power_address=https://172.16.0.254:8443 project=maas name=maas1
VM_HOST_ID=$(maas admin vm-hosts read |jq -r '.[] |.id')
#https://maas.io/docs/how-to-manage-virtual-machines
maas admin vm-host compose $VM_HOST_ID name=node1 cores=2 memory=6000 architecture=amd64/generic disks=1:size=20G interfaces=0:space=vsw0
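To confirm the composed VM shows up in MAAS (a sketch; the jq filter is just for readability):
maas admin machines read |jq -r '.[].hostname'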