Xen
From the Xen overview:
- Xen is an open-source type-1 hypervisor (also known as a baremetal hypervisor), which makes it possible to run many instances of the same or different operating systems in parallel on a single machine (or host). Xen is the only type-1 hypervisor that is available as open source.

The Xen hypervisor is a thin layer of software which emulates a computer architecture, allowing multiple operating systems to run simultaneously. The hypervisor is started by the boot loader of the computer it is installed on. Once the hypervisor is loaded, it starts dom0 (short for "domain 0", sometimes called the host or privileged domain), which in this case runs Arch Linux. Once dom0 has started, one or more domU (short for user domains, sometimes called VMs or guests) can be started and controlled from dom0. For domU, Xen supports paravirtualized (PV) domains, hardware virtualized (HVM) domains, and paravirtualized-in-a-hardware-virtualized-wrapper (PVH) domains. See Xen.org for more details.

The Xen hypervisor relies on a full install of the underlying operating system. Before attempting to install the Xen hypervisor, the host machine must have a fully operational and up-to-date install of Arch Linux. This can be a minimal install with only the base packages and does not require a desktop environment or Xorg.

If you are building a new host from scratch, see the Installation guide for instructions on installing Arch Linux.
Installation
System requirements
The Xen hypervisor requires kernel-level support which is included in recent Linux kernels and is built into the linux and linux-lts Arch kernel packages. To run HVM domU, the physical hardware must also have either Intel VT-x or AMD-V (SVM) virtualization support. This can be verified by running the following command when the Xen hypervisor is not running:
$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo
If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run HVM domU (or the Xen hypervisor is already running). If you believe the CPU supports one of these features, check the host system's BIOS configuration menu during the boot process to see whether options related to virtualization support have been disabled. If such an option exists and is disabled, enable it, boot into the system and repeat the above command. The Xen hypervisor also supports PCI passthrough, where supported PCI devices can be attached directly to a domU without requiring dom0 hardware support for the device. To use PCI passthrough, the CPU must support IOMMU/VT-d.
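IOMMU availability can be checked heuristically before installing Xen by looking for DMAR/IOMMU initialization messages in the kernel log (a sketch; the exact messages vary by vendor and kernel version):
# dmesg | grep -e DMAR -e IOMMU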
Installation of the Xen hypervisor
To install the Xen hypervisor, install the xenAUR package. It provides the Xen hypervisor, current xl interface and all configuration and support files, including systemd services. To run most VMs, you will also need to install xen-qemuAUR.
For BIOS support in VMs, install seabios. For UEFI support, install edk2-ovmf. To boot VM-local kernels inside of a PVH VM, install xen-pvhgrubAUR.
Building xen
It is recommended that xen and its components are built in a clean environment, either in a VM or a chroot. When building Xen, there are environment variables that can be passed to makepkg.
- build_stubdom -- Build the components to run Xen stubdoms, mainly for dom0 disaggregation. Components for stubdom are broken off into xen-stubdom if built. Defaults to false.
- boot_dir -- Your boot directory. Defaults to /boot.
- efi_dir, efi_mountpoint -- Your EFI directory and mountpoint. Defaults to /boot.
Pass these arguments to makepkg as variables:
$ build_stubdom=true efi_dir="/boot/EFI" makepkg
xen-docsAUR will also be built for the man pages and documentation. If you choose to build stubdom support, a xen-stubdom package will be built.
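For example, a clean-chroot build using the devtools package might look like this (a sketch; assumes devtools is installed and an up-to-date checkout of the AUR repository — the environment variables described above apply when invoking makepkg directly):
$ git clone https://aur.archlinux.org/xen.git
$ cd xen
$ extra-x86_64-build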
Modification of the bootloader
The boot loader must be modified to load a special Xen kernel (xen.gz, or in the case of UEFI, xen.efi), which is then used to boot the normal kernel. To do this, a new bootloader entry is needed.
UEFI
Xen supports booting from UEFI as specified in Xen EFI systems. It might also be necessary to use efibootmgr to set the boot order and other parameters.
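For example, a boot entry for the Xen binary could be registered like this (a sketch; assumes the EFI system partition is the first partition of /dev/sda and that xen.efi sits in its root — adjust to your layout):
# efibootmgr --create --disk /dev/sda --part 1 --label "Xen Hypervisor" --loader '\xen.efi'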
First, ensure the xen.efi file is in the EFI system partition along with your kernel and ramdisk files.
Second, Xen requires an ASCII (not UTF-8, UTF-16, etc.) configuration file that specifies what kernel should be booted as dom0. This file must be placed in the same EFI system partition as the binary. Xen looks for several configuration files and uses the first one it finds: the search starts with the .efi extension of the binary's name replaced by .cfg, then drops trailing name components at ., - and _ until a match is found. Typically, a single file named xen.cfg is used with the system requirements, such as:
xen.cfg
[global]
default=xen

[xen]
options=console=vga iommu=force:true,qinval:true,debug:true loglvl=all noreboot=true reboot=no vga=ask ucode=scan
kernel=vmlinuz-linux root=/dev/sdaX rw add_efi_memmap
#earlyprintk=xen
ramdisk=initramfs-linux.img
If additional parameters are needed, modify the xen.cfg line for options to specify them.

Systemd-boot
Note that the EFI system partition must be mounted at /boot, as this is where the xenAUR package and EFI binaries were configured and built for, not /boot/efi.

Add a new EFI-type loader entry. See Systemd-boot#EFI Shells or other EFI applications for more details. For example:
/boot/loader/entries/10-xen.conf
title Xen Hypervisor
efi /xen.efi
A specific configuration file cannot be selected on the efi line of the loader's entry. The Xen documentation states that -cfg=file.cfg can be used as a UEFI Shell parameter, which is not true for the efi line option. For now, you can only have one Xen EFI entry, which limits you to only one configuration file.

EFISTUB
It is possible to boot an EFI kernel directly from UEFI by using EFISTUB.
Drop to the built-in UEFI shell and call the EFI file directly. For example:
Shell> fs0:
FS0:\> xen.efi
Note that a xen.cfg configuration file in the EFI system partition is still required as outlined above. In addition, a different configuration file may be specified with the -cfg=file.cfg parameter. For example:
Shell> fs0:
FS0:\> xen.efi -cfg=xen-rescue.cfg
These additional configuration files must reside in the same directory as the Xen EFI binary and linux stub files.
BIOS
Xen supports booting from system firmware configured as BIOS.
GRUB
For GRUB users, install the grub-xen-gitAUR package for booting dom0 as well as building PvGrub2 images for booting user domains.
The file /etc/default/grub can be edited to customize the Xen boot commands. For example, to allocate 512 MiB of RAM to dom0 at boot, modify /etc/default/grub by replacing the line:
#GRUB_CMDLINE_XEN_DEFAULT=""
with
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M"
More information on GRUB configuration keys for Xen can be found in the GRUB documentation.
After customizing the options, update the bootloader configuration with the following command:
# grub-mkconfig -o /boot/grub/grub.cfg
More information on using the GRUB bootloader is available at GRUB.
Building GRUB images for booting guests
Besides the usual platform targets, the grub-xen-gitAUR package builds GRUB for three additional targets that can be used to boot Xen guests: i386-xen, i386-xen_pvh, and x86_64-xen. To create a boot image from one of these targets, first create a GRUB configuration file. Depending on your preference, this file can either locate and load a GRUB configuration file in the guest or it could manage more of the boot process from dom0. Assuming all that is needed is to locate and load a configuration file in the guest, add the following to a file,
grub.cfg
search -s root -f /boot/grub/grub.cfg
configfile /boot/grub/grub.cfg
and then create a GRUB standalone image (see GRUB/Tips and tricks#GRUB standalone) that will incorporate that file:
# grub-mkstandalone -O x86_64-xen -o /usr/lib/xen/boot/pv-grub2-x86_64-xen "/boot/grub/grub.cfg=./grub.cfg"
Lastly, set that image as the value of kernel in the domU configuration file (for a 64-bit guest in this example):
kernel = "/usr/lib/xen/boot/pv-grub2-x86_64-xen"
More examples of configuring GRUB images for GRUB guests can be found in the Xen Project's PvGrub2 documentation.
Syslinux
For Syslinux users, add a stanza like this to your /boot/syslinux/syslinux.cfg:
LABEL xen
MENU LABEL Xen
KERNEL mboot.c32
APPEND ../xen-X.Y.Z.gz --- ../vmlinuz-linux console=tty0 root=/dev/sdaX ro --- ../initramfs-linux.img
where X.Y.Z is your xen version and /dev/sdaX is your root partition.
This also requires mboot.c32 (and libcom32.c32) to be in the same directory as syslinux.cfg. If you do not have mboot.c32 in /boot/syslinux, copy it from:
# cp /usr/lib/syslinux/bios/mboot.c32 /boot/syslinux
Creation of a network bridge
Xen requires that network communications between domU and the dom0 (and beyond) be set up manually. The use of both DHCP and static addressing is possible, and the choice should be determined by the network topology. Complex setups are possible; see the Networking article on the Xen wiki for details and /etc/xen/scripts for scripts for various networking configurations. A basic bridged network, in which a virtual switch is created in dom0 that every domU is attached to, can be set up by creating a network bridge with the expected name xenbr0.
See Network bridge#Creating a bridge for details.
Systemd-networkd
See Systemd-networkd#Bridge interface for details.
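A minimal sketch of such a setup, assuming the physical interface is named enp5s0 and the bridge obtains its address via DHCP (file names are illustrative):
/etc/systemd/network/xenbr0.netdev
[NetDev]
Name=xenbr0
Kind=bridge
/etc/systemd/network/xenbr0.network
[Match]
Name=xenbr0

[Network]
DHCP=yes
/etc/systemd/network/enp5s0.network
[Match]
Name=enp5s0

[Network]
Bridge=xenbr0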
Network Manager
GNOME's Network Manager can sometimes be troublesome. If the bridge creation steps outlined in the Network bridge article are unclear or do not work, the following steps may help.
Open the Network Settings and disable the interface you wish to use in your bridge (e.g. enp5s0). Set it to off and uncheck "connect automatically."
Create a new bridge connection profile by clicking on the "+" symbol in the bottom left of the network settings. Optionally, run:
# nm-connection-editor
to bring up the window immediately. Once the window opens, select Bridge.
Click "Add" next to the "Bridged Connections" and select the interface you wished to use in your bridge (ex. Ethernet). Select the device mac address that corresponds to the interface you intend to use and save the settings
If your bridge is going to receive an IP address via DHCP, leave the IPv4/IPv6 sections as they are. If DHCP is not running for this particular connection, make sure to give your bridge an IP address. Needless to say, all connections will fail if an IP address is not assigned to the bridge. If you forget to add the IP address when you first create the bridge, it can always be edited later.
Now, as root, run:
# nmcli con show
You should see a connection that matches the name of the bridge you just created. Highlight and copy the UUID on that connection, and then run (again as root):
# nmcli con up <UUID OF CONNECTION>
A new connection should appear under the network settings. It may take 30 seconds to a minute. To confirm that it is up and running, run:
# brctl show
to show a list of active bridges.
Reboot. If everything works properly after a reboot (i.e. the bridge starts automatically), then you are all set.
Optionally, in your network settings, remove the connection profile on your bridge interface that does not connect to the bridge. This just keeps things from being confusing later on.
Installation of Xen systemd services
The Xen dom0 requires xenstored.service, xenconsoled.service, xendomains.service and xen-init-dom0.service to be started and possibly enabled.
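For example, one possible invocation that enables and starts them in one step:
# systemctl enable --now xenstored.service xenconsoled.service xendomains.service xen-init-dom0.service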
Confirming successful installation
Reboot your dom0 host and ensure that the Xen kernel boots correctly and that all settings survive a reboot. A properly set up dom0 should report the following when you run xl list as root:
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511     2     r-----  41652.9
Of course, the Mem, VCPUs and Time columns will be different depending on machine configuration and uptime. The important thing is that dom0 is listed.
In addition to the required steps above, see best practices for running Xen, which includes information on allocating a fixed amount of memory and how to dedicate (pin) a CPU core for dom0 use. It also may be beneficial to create a xenfs filesystem mount point by including the following in /etc/fstab:
none /proc/xen xenfs defaults 0 0
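As an illustration of the fixed-memory and CPU-pinning settings mentioned above, a GRUB system could pass them on the Xen command line like this (a sketch; the values are examples, the option names come from the Xen command-line documentation):
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin"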
Configure Best Practices
Review Xen Project Best Practices before using Xen.
Using Xen
Xen supports both paravirtualized (PV) and hardware virtualized (HVM) domU. In the following sections the steps for creating HVM and PV domU running Arch Linux are described. In general, the steps for creating an HVM domU are independent of the domU OS and HVM domU support a wide range of operating systems including Microsoft Windows. To use HVM domU the dom0 hardware must have virtualization support. Paravirtualized domU do not require virtualization support, but instead require modifications to the guest operating system making the installation procedure different for each operating system (see the Guest Install page of the Xen wiki for links to instructions). Some operating systems (e.g., Microsoft Windows) cannot be installed as a PV domU. In general, HVM domU often run slower than PV domU since HVMs run on emulated hardware. While there are some common steps involved in setting up PV and HVM domU, the processes are substantially different. In both cases, for each domU, a "hard disk" will need to be created and a configuration file needs to be written. Additionally, for installation each domU will need access to a copy of the installation ISO stored on the dom0 (see the Download Page to obtain the Arch Linux ISO).
Create a domU "hard disk"
Xen supports a number of different types of "hard disks", including logical volumes, raw partitions, and image files. To create a sparse file that will grow to a maximum of 10 GiB, called domU.img, use:
$ truncate -s 10G domU.img
If file IO speed is of greater importance than domain portability, using Logical Volumes or raw partitions may be a better choice.
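For example, a 10 GiB logical volume could be created like this (a sketch; assumes an existing volume group named vg0, and the volume name is illustrative):
# lvcreate -L 10G -n domU_disk vg0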
Xen may present any partition/disk available to the host machine to a domain as either a partition or a disk. This means that, for example, an LVM partition on the host can appear as a hard drive (and hold multiple partitions) to a domain. Note that making sub-partitions on a partition will make accessing those partitions on the host machine more difficult. See kpartx(8) for information on how to map out partitions within a partition.
Create a domU configuration
Each domU requires a separate configuration file that is used to create the virtual machine. Full details about the configuration files can be found at the Xen Wiki or in the xl.cfg(5) man page. Both HVM and PV domU share some components of the configuration file. These include:
name = "domU" memory = 512 disk = [ "file:/path/to/ISO,sdb,r", "phy:/path/to/partition,sda1,w" ] vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]
The name= is the name by which the xl tools manage the domU and needs to be unique across all domU. The disk= includes information about both the installation media (file:) and the partition created for the domU (phy:). If an image file is being used instead of a physical partition, the phy: needs to be changed to file:. The vif= defines a network controller. The 00:16:3e MAC block is reserved for Xen domains, so the XX:XX:XX portion of the mac= must be filled in randomly (hex values 0-9 and a-f only).
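One way to generate such a random MAC in the Xen block (a bash sketch):
$ printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))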
Managing a domU
If a domU should be started on boot, create a symlink to the configuration file in /etc/xen/auto (see the example after the command list below) and ensure the xendomains service is set up correctly. Some useful commands for managing domU are:
# xl top
# xl list
# xl console domUname
# xl shutdown domUname
# xl destroy domUname
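For example, to autostart a domU whose configuration file is /etc/xen/my_domU.cfg (a hypothetical name):
# ln -s /etc/xen/my_domU.cfg /etc/xen/auto/my_domU.cfg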
Configuring a hardware virtualized (HVM) Arch domU
In order to use HVM domU, install the mesa, numactl and bluez-libs packages.
A minimal configuration file for a HVM Arch domU is:
name = 'HVM_domU'
builder = 'hvm'
memory = 512
vcpus = 2
disk = [ 'phy:/dev/vg0/hvm_arch,xvda,w', 'file:/path/to/ISO,hdc:cdrom,r' ]
vif = [ 'mac=00:16:3e:00:00:00,bridge=xenbr0' ]
vnc = 1
vnclisten = '0.0.0.0'
vncdisplay = 1
Since HVM machines do not have a console, they can only be connected to via a vncviewer. The configuration file allows for unauthenticated remote access of the domU vncserver and is not suitable for unsecured networks. The vncserver will be available on port 590X of the dom0, where X is the value of vncdisplay. The domU can be created with:
# xl create /path/to/config/file
and its status can be checked with
# xl list
Once the domU is created, connect to it via the vncserver and install Arch Linux as described in the Installation guide.
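For example, with vncdisplay = 1 the server listens on port 5901 of the dom0, so a TigerVNC client could connect with (a sketch; dom0_host is a placeholder):
$ vncviewer dom0_host:1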
Configuring a paravirtualized (PV) Arch domU
A minimal configuration file for a PV Arch domU is:
name = "PV_domU" kernel = "/mnt/arch/boot/x86_64/vmlinuz-linux" ramdisk = "/mnt/arch/boot/x86_64/initramfs-linux.img" extra = "archisobasedir=arch archisodevice=UUID=YYYY-mm-dd-HH-MM-SS-00" memory = 512 disk = [ "phy:/path/to/partition,sda1,w", "file:/path/to/ISO,sdb,r" ] vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]
This file needs to be tweaked for your specific use. Most importantly, the archisodevice=UUID=YYYY-mm-dd-HH-MM-SS-00 line must be edited to use the creation date and time of the ISO being used.
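The ISO's UUID can usually be read directly from the image file, e.g. with blkid from util-linux (a sketch; the path is a placeholder):
$ blkid -s UUID -o value /path/to/archlinux.iso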
Before creating the domU, the installation ISO must be loop-mounted. To do this, ensure the directory /mnt exists and is empty, then run the following command (being sure to fill in the correct ISO path):
# mount -o loop /path/to/iso /mnt
Once the ISO is mounted, the domU can be created with:
# xl create -c /path/to/config/file
The "-c" option will enter the domU's console when successfully created. Then you can install Arch Linux as described in the Installation guide, but with the following deviations. The block devices listed in the disks line of the cfg file will show up as /dev/xvd*
. Use these devices when partitioning the domU. After installation and before the domU is rebooted, the xen-blkfront
, xen-fbfront
, xen-netfront
, xen-kbdfront
modules must be added to Mkinitcpio. Without these modules, the domU will not boot correctly. For booting, it is not necessary to install Grub. Xen has a Python-based grub emulator, so all that is needed to boot is a grub.cfg
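A minimal sketch of the required change in /etc/mkinitcpio.conf, followed by regenerating the initramfs for all presets:
/etc/mkinitcpio.conf
MODULES=(xen-blkfront xen-fbfront xen-netfront xen-kbdfront)
# mkinitcpio -P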
For booting, it is not necessary to install GRUB. Xen has a Python-based GRUB emulator (pygrub), so all that is needed to boot is a grub.cfg file (it may be necessary to create the /boot/grub directory):
/boot/grub/grub.cfg
menuentry 'Arch GNU/Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-__UUID__' {
	insmod gzio
	insmod part_msdos
	insmod ext2
	set root='hd0,msdos1'
	if [ x$feature_platform_search_hint = xy ]; then
		search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 __UUID__
	else
		search --no-floppy --fs-uuid --set=root __UUID__
	fi
	echo 'Loading Linux core repo kernel ...'
	linux /boot/vmlinuz-linux root=UUID=__UUID__ ro
	echo 'Loading initial ramdisk ...'
	initrd /boot/initramfs-linux.img
}
This file must be edited to match the UUID of the root partition. From within the domU, run the following command:
# blkid
Replace all instances of __UUID__ with the real UUID of the root partition (the one that mounts as /):
# sed -i 's/__UUID__/12345678-1234-1234-1234-123456789abcd/g' /boot/grub/grub.cfg
Shut down the domU with the poweroff command. The console will be returned to the hypervisor when the domain is fully shut down, and the domain will no longer appear in the xl domains list. Now the ISO file may be unmounted:
# umount /mnt
The domU cfg file should now be edited. Delete the kernel =, ramdisk =, and extra = lines and replace them with the following line:
bootloader = "pygrub"
Also remove the ISO disk from the disk = line.
The Arch domU is now set up. It may be started with the same line as before:
# xl create -c /etc/xen/archdomu.cfg
Common errors
"xl list" complains about libxl[編輯 | 編輯原始碼]
Either you have not booted into the Xen system, or the xen modules listed in the xencommons script are not installed.
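One quick way to confirm whether the system was actually booted under Xen (assuming the sysfs hypervisor interface is present):
$ cat /sys/hypervisor/type
xen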
"xl create" fails[編輯 | 編輯原始碼]
Check that the guest's kernel is located correctly, and check the pv-xxx.cfg file for spelling mistakes (like using initrd instead of ramdisk).
Creating HVM fails
If creating HVM fails with:
libxl: error: libxl_dm.c:3131:device_model_spawn_outcome: Domain 33:domain 33 device model: spawn failed (rc=-3)
libxl: error: libxl_dm.c:3351:device_model_postconfig_done: Domain 33:Post DM startup configs failed, rc=-3
libxl: error: libxl_create.c:1837:domcreate_devmodel_started: Domain 33:device model did not start: -3
libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model already exited
You are missing the numactl package.
Arch Linux guest hangs with a ctrl-d message
Press Ctrl+d until you get back to a prompt, then rebuild its initramfs as described above.
failed to execute '/usr/lib/udev/socket:/org/xen/xend/udev_event' 'socket:/org/xen/xend/udev_event': No such file or directory
This is caused by /etc/udev/rules.d/xend.rules. Xend is deprecated and not used, so it is safe to remove that file.