Michael Rutter's QEMU Notes

The use of virtual machines is widespread. In TCM we started using virtual machines in 2008, and soon had our print server and licence server virtualised, with a reduction in the number of dedicated boxes littering our server room.

We chose to use KVM, the VM system integrated into the Linux kernel. It did all we needed, was free, worked well with Linux hosts and guests (which is all we wanted), and would be updated along with the kernel. We use QEMU with KVM.

We started before libvirt was in widespread use, and have never switched to using it, as it is unclear that it provides functionality which cannot be accessed by using QEMU directly. It may provide a friendlier and more consistent interface, but when one already has a working infrastructure, why change? Its graphics performance may be poorer than that of some of the alternatives, but we tend to virtualise headless servers.

Paravirtualisation of eth0

(This section assumes that the guest has a single virtual ethernet adapter and uses the tap system to bridge onto the host's physical adapter. It also uses English spellings. Paravirtualise or Paravirtualize? Paravirtualisation or Paravirtualization? It seems to matter to Google, and perhaps other search engines.)

QEMU can emulate an Intel e1000 fairly efficiently. However, it can also offer a much more efficient paravirtualised network device. This is quite easy to experiment with. Indeed, it is arguably too easy, for even configurations which are incorrect from an efficiency point of view still work correctly.

Assuming your guest is running Linux, has a single network adapter, and does not use persistent device naming for its ethernet adapters, then switching between the e1000, other hardware which QEMU can emulate, and the paravirtualised device needs no changes at all to the guest. It will automatically detect the ethernet device when booted, load the correct driver, call the device eth0, and all will work normally. The change is to the qemu command line.

One might be used to QEMU networking options of the form

-net nic,macaddr=52:54:00:01:02:03 -net tap

If so, now is the time to switch to the newer syntax, for it seems impossible to obtain the full advantages of paravirtualisation with the above form. One needs something like

-device virtio-net-pci,netdev=net0,mac=52:54:00:01:02:03 \
-netdev type=tap,id=net0,vhost=on

One is free to choose the label net0, but it must be the same in the device and netdev options, for it is what links the two; this matters in cases where the guest has multiple network devices.
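
For context, a minimal sketch of a complete invocation follows. The memory size and disk image name are merely illustrative assumptions; the two networking options are the point here.

qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest.img,if=virtio \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:01:02:03 \
    -netdev type=tap,id=net0,vhost=on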

How does one check that this has worked? The guest will appear to have a very basic ethernet adapter, and the e1000 module will no longer be loaded.

guest:~$ ethtool eth0
Settings for eth0:
Cannot get wake-on-lan settings: Operation not permitted
        Link detected: yes
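
One can also ask ethtool which driver is bound to the device; with the paravirtualised device it should report virtio_net. The version and bus-info fields below are illustrative and will vary:

guest:~$ ethtool -i eth0
driver: virtio_net
version: 1.0.0
bus-info: 0000:00:03.0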

But this would also happen if one had launched QEMU with an option of

-net nic,model=virtio,macaddr=52:54:00:01:02:03 -net tap

which gives negligible (or even negative) performance gains over the e1000. The check needs to be done on the host:

host:~$ ps aux | grep vhost
root      5356  0.4  3.1 1496220 382116 ?      Sl   Mar28  10:25 /usr/bin/qemu-system-x86_64
root      5377  0.0  0.0      0     0 ?        S    Mar28   0:20 [vhost-5356]

A vhost kernel thread should be seen, the number after it corresponding to the PID of the qemu process for the VM with the paravirtualised network. The thread should also consume CPU time, though at quite a modest rate, of the order of 1s per GB of traffic, so this will not be immediately noticeable.
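
If in doubt as to whether the host has vhost support available at all, one can look for the vhost_net kernel module (assuming it is built as a module rather than into the kernel; the size and use count shown are illustrative):

host:~$ lsmod | grep vhost_net
vhost_net              32768  1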

With luck one should also see a significant reduction in CPU use by the VM. For the VM above, which is a webserver most of whose files are imported via NFS, the reduction in CPU use, even after including the vhost thread, seems to be about a factor of three. A more quantitative test of

remote:~$ dd if=/dev/zero bs=1024k count=10240 | rsh guest dd of=/dev/null

gave a reduction of just 40% in the CPU time of the qemu process as recorded on the host, which is still worthwhile.
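
The CPU time figures can be collected by reading the accumulated time of the qemu process, and of its vhost thread, before and after the transfer. Using the PIDs from the example above:

host:~$ ps -o pid,cputime,comm -p 5356,5377
  PID     TIME COMMAND
 5356 00:10:25 qemu-system-x86_64
 5377 00:00:20 vhost-5356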

(Virtual) CPU Optimisation

If one launches qemu with no specific CPU selection, one ends up with a generic x86_64 CPU. With QEMU version 2.0.0 and newer, this includes support for the x2apic. With earlier versions of QEMU it is advantageous to enable the x2apic explicitly, particularly if the guest does a lot of I/O (including networking):

-cpu qemu64,+x2apic

This should be done even if the host system does not have an x2apic. Emulating an x2apic for guests is faster than having them use the older form of APIC.
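
One can confirm from within the guest that the x2apic is in use: the x2apic flag should appear in /proc/cpuinfo, and the kernel should log a message along these lines at boot (the exact wording varies between kernel versions):

guest:~$ dmesg | grep x2apic
[    0.000000] Switched APIC routing to physical x2apic.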

This Document

This document was written in 2016, at which time the host machines were using OpenSuSE 13.1, and the guests 13.1 or Leap 42.1.