In Part II, we configure the Xen hypervisor. The hypervisor enables VMs to run, and comes with various utilities for configuring VMs and performing management and maintenance tasks. I've stuck with Xen here, although some public cloud providers seemed to be shifting towards KVM during 2018. The basic Xen configuration guide is here.
In what follows, all VM storage is going to be hosted on the NAS and presented as iSCSI LUNs to the hypervisor. This means I will get access to more storage than what may be available on the server itself, and it will be more resilient. Your NAS may even have some helpful features to clone and back up LUNs.
I must admit that I spent a long time on Ubuntu 16 before I moved to 17 or 18 so I wasn't so familiar with some of the new networking features like netplan. I did try briefly to work around some of the issues I encountered but ultimately resorted to the traditional ways of doing things via /etc/network/interfaces. You can disable netplan by following the instructions here. The verified steps are:
- Check the actual interface names you are interested in with ip l for the links (aka interfaces) and with ip a for the addresses.
- Install ifupdown with sudo apt -y install ifupdown.
- Purge netplan with sudo apt -y purge netplan.io.
- Configure /etc/network/interfaces and/or /etc/network/interfaces.d according to your needs (man 5 interfaces can be of some help, with examples).
- Restart the networking service with sudo systemctl restart networking; systemctl status networking or sudo /etc/init.d/networking restart; /etc/init.d/networking status. The status output should report active.
- The command ip a will show whether the expected network configuration has been applied.
- Optionally, manually purge the remnants of the netplan configuration files with sudo rm -vfr /usr/share/netplan /etc/netplan.
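The steps above can be sketched as a single session. This consolidation is my own, not from the original instructions, and it uses a dry-run stub that only prints each command; review your interface configuration and drop the `run` wrapper before executing for real:

```shell
# Dry-run sketch of the netplan-to-ifupdown switch described above.
# run() only prints each command; remove the wrapper to execute for real.
run() { echo "+ $*"; }

run sudo apt -y install ifupdown         # install the traditional tooling first
run sudo apt -y purge netplan.io         # then remove netplan
run sudo systemctl restart networking    # apply /etc/network/interfaces
run sudo rm -vfr /usr/share/netplan /etc/netplan   # optional: purge remnants
```

Only purge netplan after ifupdown is installed and /etc/network/interfaces is in place, or you can lose network connectivity on the next boot.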
```
gsw@goat-lin:~$ sudo add-apt-repository main
'main' distribution component is already enabled for all sources.
gsw@goat-lin:~$ sudo add-apt-repository universe
'universe' distribution component enabled for all sources.
gsw@goat-lin:~$ sudo apt-get install bridge-utils
gsw@goat-lin:~$ sudo apt-get install xen-tools
gsw@goat-lin:~$ sudo apt-get install xen-hypervisor-amd64
```

We then set up the network accordingly:
```
gsw@goat-lin:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp5s0
iface enp5s0 inet manual

auto xenbr0
iface xenbr0 inet dhcp
    bridge_stp off       # disable Spanning Tree Protocol
    bridge_waitport 0    # no delay before a port becomes available
    bridge_fd 0          # no forwarding delay
    bridge_ports enp5s0

gsw@goat-lin:~$ ifconfig
enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 1c:6f:65:31:99:88  txqueuelen 1000  (Ethernet)
        RX packets 11453  bytes 13618025 (13.6 MB)
        RX errors 0  dropped 46  overruns 0  frame 0
        TX packets 4203  bytes 327006 (327.0 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 124  bytes 7904 (7.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 124  bytes 7904 (7.9 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

xenbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.99  netmask 255.255.255.0  broadcast 192.168.1.255
        ether 1c:6f:65:31:99:88  txqueuelen 1000  (Ethernet)
        RX packets 6527  bytes 13194999 (13.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4203  bytes 327006 (327.0 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```

Xen boots Domain-0 (Dom-0), the privileged control domain from which the hypervisor is managed; Dom-Us are the user domains created per guest VM. Correct memory configuration is important because, by default, Dom-0 will be assigned all available memory. You don't really expect to run anything but essential services here (i.e. you won't be running any apps here like databases), so you can restrict the memory assigned to Dom-0 to 512MB as a starting point and adjust from there based on your workload. If Dom-0 keeps hold of all the memory, creating Dom-Us/VMs will fail for lack of it.
In the example below you can see how my server's 24GB of physical memory is fully allocated to Dom-0 until I reconfigure the boot-time Dom-0 memory.
```
# For grub2, in /etc/default/grub:
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M,max:512M"
# then run:
update-grub

root@goat-lin:/boot/grub> xl list
Name                ID    Mem  VCPUs  State   Time(s)
Domain-0             0  22252      6  r-----   2391.2
root@goat-lin:/boot/grub> shutdown -P now

root@goat-lin:/home/gsw> xl list
Name                ID    Mem  VCPUs  State   Time(s)
Domain-0             0    512      6  r-----     44.4
```
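To confirm the new allocation after a reboot, the Mem column can be pulled straight out of xl list. The awk one-liner below is my own addition; a sample of the xl list output above stands in for the live call:

```shell
# Extract Dom-0's memory (in MB) from `xl list`-style output.
# On a live system, replace the sample with: xl_sample=$(xl list)
xl_sample='Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   512     6     r-----      44.4'

dom0_mem=$(printf '%s\n' "$xl_sample" | awk '$1 == "Domain-0" { print $3 }')
echo "Dom-0 memory: ${dom0_mem} MB"
```

For the sample above this prints Dom-0 memory: 512 MB, confirming the grub setting took effect.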
You can diverge here if you want to use local LVM2 storage for your guest VMs, but I'll continue to describe how to set up the iSCSI storage for the hypervisor to make available to our guest VMs; I think this offers more flexibility. Some details on this are available for Ubuntu here - follow these before proceeding. Once the tools are installed, we need to discover the LUNs. Setting up the LUNs is particular to your storage device, so I won't cover that here; assume that you have set up a target with a 20GB LUN.
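For completeness, the initial sendtargets discovery against the NAS portal would look something like the following. This step is my own addition; the portal address matches my NAS (192.168.1.2) and is shown with a dry-run stub so the sketch is safe to paste:

```shell
run() { echo "+ $*"; }   # dry-run stub; remove to actually run the command

# Query the NAS portal for available targets (the portal IP is an
# assumption based on my network; yours will differ).
run sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.2:3260
```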
```
gsw@goat-lin:/etc/iscsi$ sudo iscsiadm -m discoverydb -P1
SENDTARGETS:
DiscoveryAddress: 192.168.1.2,3260
Target: iqn.2000-01.com.synology:nas.Target-1.c178f67a2e
    Portal: 192.168.1.2:3260,1
        Iface Name: default
    Portal: [fe80::211:32ff:fe0e:6c76]:3260,1
        Iface Name: default
iSNS:
No targets found.
STATIC:
No targets found.
FIRMWARE:
No targets found.
```

I have no IPv6 set up on my local network, and I noticed that the iSCSI service will hang at boot time waiting for the IPv6 portal to come up. To avoid this, remove the IPv6 portal.
```
root@goat-lin:/etc/iscsi> iscsiadm -m node -o delete -T "iqn.2000-01.com.synology:nas.Target-1.c178f67a2e" --portal [fe80::211:32ff:fe0e:6c76]:3260
root@goat-lin:/etc/iscsi> iscsiadm -m discoverydb -P1
SENDTARGETS:
DiscoveryAddress: 192.168.1.2,3260
Target: iqn.2000-01.com.synology:nas.Target-1.c178f67a2e
    Portal: 192.168.1.2:3260,1
        Iface Name: default
iSNS:
No targets found.
STATIC:
No targets found.
FIRMWARE:
No targets found.
```

It should be possible to see the new sdb device after logging in with --login.
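If you'd rather not copy IPv6 portal addresses by hand, the delete commands can be generated from the node database. This helper is my own sketch, not from the original steps; a sample of iscsiadm -m node output stands in for the live command:

```shell
# Build the delete commands for every IPv6 link-local portal record.
# On a live system: node_list=$(iscsiadm -m node)
node_list='192.168.1.2:3260,1 iqn.2000-01.com.synology:nas.Target-1.c178f67a2e
[fe80::211:32ff:fe0e:6c76]:3260,1 iqn.2000-01.com.synology:nas.Target-1.c178f67a2e'

delete_cmds=$(printf '%s\n' "$node_list" | awk '/^\[fe80/ {
    portal = $1; sub(/,[0-9]+$/, "", portal)   # strip the ",<tpgt>" suffix
    print "iscsiadm -m node -o delete -T " $2 " --portal " portal
}')
echo "$delete_cmds"
```

After inspecting the generated lines, pipe them through sudo sh to perform the deletions.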
```
gsw@goat-lin:~$ sudo iscsiadm -m node --login
gsw@goat-lin:~$ lsblk
NAME                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                              7:0    0  86.9M  1 loop /snap/core/4917
loop1                              7:1    0  87.9M  1 loop /snap/core/5328
sda                                8:0    0 111.8G  0 disk
├─sda1                             8:1    0     1M  0 part
├─sda2                             8:2    0     1G  0 part /boot
└─sda3                             8:3    0 110.8G  0 part
  ├─ubuntu--vg-ubuntu--lv-real   253:0    0     4G  0 lvm
  │ ├─ubuntu--vg-ubuntu--lv     253:1    0     4G  0 lvm  /
  │ └─ubuntu--vg-snap_test      253:3    0     4G  0 lvm
  ├─ubuntu--vg-snap_test-cow    253:2    0     4G  0 lvm
  │ └─ubuntu--vg-snap_test      253:3    0     4G  0 lvm
  └─ubuntu--vg-restore_test     253:4    0     4G  0 lvm
sdb                                8:16   0    20G  0 disk
```

The kernel should also be reporting the new iSCSI disk:
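Once several LUNs are logged in, a quick filter over lsblk helps pick out whole disks from partitions and LVs. This is my own sketch; an abbreviated sample of lsblk -r -o NAME,SIZE,TYPE output stands in for the live call:

```shell
# List only whole disks (TYPE "disk") from raw lsblk output.
# On a live system: lsblk_sample=$(lsblk -r -o NAME,SIZE,TYPE)
lsblk_sample='sda 111.8G disk
sda2 1G part
sdb 20G disk'

disks=$(printf '%s\n' "$lsblk_sample" | awk '$3 == "disk" { print $1, $2 }')
echo "$disks"
```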
```
gsw@goat-lin:~$ dmesg | grep sd
[    2.628113] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    2.628604] sd 0:0:0:0: [sda] 234455040 512-byte logical blocks: (120 GB/112 GiB)
[    2.628801] sd 0:0:0:0: [sda] Write Protect is off
[    2.628908] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    2.629186] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.635707] sda: sda1 sda2 sda3
[    2.637267] sd 0:0:0:0: [sda] Attached SCSI disk
[    7.233611] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null)
[ 4710.677791] sd 6:0:0:1: Attached scsi generic sg1 type 0
[ 4710.680245] sd 6:0:0:1: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
[ 4710.680517] sd 6:0:0:1: [sdb] Write Protect is off
[ 4710.680519] sd 6:0:0:1: [sdb] Mode Sense: 43 00 10 08
[ 4710.682492] sd 6:0:0:1: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 4710.726013] sd 6:0:0:1: [sdb] Attached SCSI disk
```

We also want the disk to be available at boot.
```
gsw@goat-lin:/etc/xen$ sudo vi /etc/iscsi/iscsid.conf
...
# To request that the iscsi initd scripts startup a session set to "automatic".
# node.startup = automatic
#
# To manually startup the session set to "manual". The default is manual.
#node.startup = manual
node.startup = automatic
```

I've seen this not work on reboot; in that case, set it per node directly:

```
gsw@goat-lin:/etc/xen$ sudo iscsiadm -m node -T "iqn.2000-01.com.synology:nas.Target-1.c178f67a2e" -o update -n node.startup -v automatic
```

Now that the raw storage is in place, we create a volume group on it. This gives us the ability to create all the guest root disks on a more redundant storage device (the NAS) while keeping the ability to take snapshots and run dd backups. You'll note that I already have a VM server name in mind, goat-lin001.webber.net, and I've set up my DNS etc. to know about it ahead of time. Using the server name makes it easier to associate the storage with the server later.
```
root@goat-lin:/etc/xen# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
gsw@goat-lin:/etc/xen-tools$ sudo vgcreate roots-vg /dev/sdb
  Volume group "roots-vg" successfully created
gsw@goat-lin:/etc/xen-tools$ sudo vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  10
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               110.79 GiB
  PE Size               4.00 MiB
  Total PE              28363
  Alloc PE / Size       3072 / 12.00 GiB
  Free  PE / Size       25291 / 98.79 GiB
  VG UUID               MCtcE9-UF6R-zGNe-Cwwc-V1UT-AS8f-etXjbj

  --- Volume group ---
  VG Name               roots-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       0 / 0
  Free  PE / Size       25599 / <100.00 GiB
  VG UUID               504DuZ-9rSM-Mh5N-JpgC-zcwq-F0fw-qmjQbm

root@goat-lin:/etc/xen# lvdisplay /dev/vg_goat-lin001
  --- Logical volume ---
  LV Path                /dev/vg_goat-lin001/goat-lin001.webber.net-swap
  LV Name                goat-lin001.webber.net-swap
  VG Name                vg_goat-lin001
  LV UUID                QQXeK9-hw9g-YSXq-9uLf-DeMH-x3GD-QSaAvy
  LV Write Access        read/write
  LV Creation host, time goat-lin.webber.net, 2018-03-18 22:35:50 +1100
  LV Status              available
  # open                 0
  LV Size                128.00 MiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:4

  --- Logical volume ---
  LV Path                /dev/vg_goat-lin001/goat-lin001.webber.net-disk
  LV Name                goat-lin001.webber.net-disk
  VG Name                vg_goat-lin001
  LV UUID                t02Y3G-cFri-7mdQ-1wzJ-SUtx-cB30-lLdIML
  LV Write Access        read/write
  LV Creation host, time goat-lin.webber.net, 2018-03-18 22:35:51 +1100
  LV Status              available
  # open                 0
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:5
```
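The snapshot-and-dd backups mentioned earlier can be sketched against these volumes as follows. The LV names follow the lvdisplay output above, but the snapshot size and backup destination are assumptions, and the dry-run stub only prints the commands:

```shell
run() { echo "+ $*"; }   # dry-run stub; remove to execute for real

# Snapshot the guest's root LV, image the frozen snapshot with dd, then
# drop the snapshot. 1G of copy-on-write space and the /backup path are
# assumptions for illustration.
run sudo lvcreate -s -L 1G -n goat-lin001-snap \
    /dev/vg_goat-lin001/goat-lin001.webber.net-disk
run sudo dd if=/dev/vg_goat-lin001/goat-lin001-snap \
    of=/backup/goat-lin001-disk.img bs=4M
run sudo lvremove -f /dev/vg_goat-lin001/goat-lin001-snap
```

For a consistent image, pause or shut down the guest before taking the snapshot.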
Next: Flexible Server Part III: Setting up a guest VM