KVM

KVM is configured as the default hypervisor for Compute. There are several sections about hypervisor selection in this document. If you are reading this document linearly, do not load the KVM module before you install nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules, the udev rule that sets the correct permissions on the /dev/kvm device node.

To enable KVM explicitly, add the following configuration options to /etc/nova/nova.conf:

compute_driver=libvirt.LibvirtDriver
libvirt_type=kvm
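After changing these options, restart the nova-compute service so that they take effect. On Ubuntu-style installations this is typically:

# service nova-compute restart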
The KVM hypervisor supports the following virtual machine image formats:

- Raw
- QEMU Copy-on-write (qcow2)
- QED (QEMU Enhanced Disk)
- VMware virtual machine disk format (vmdk)
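As an aside (these commands are illustrative, not a required setup step), the qemu-img tool that ships with QEMU can create and convert images in these formats. For example, to create an empty 10 GB qcow2 image, convert a raw image to qcow2, and inspect an image's format and virtual size:

$ qemu-img create -f qcow2 disk.qcow2 10G
$ qemu-img convert -f raw -O qcow2 disk.raw disk.qcow2
$ qemu-img info disk.qcow2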
The rest of this section describes how to enable KVM on your system. You may also wish to consult distribution-specific documentation:

- Fedora: Getting started with virtualization from the Fedora project wiki.
- Ubuntu: KVM/Installation from the Community Ubuntu documentation.
- Debian: Virtualization with KVM from the Debian handbook.
- RHEL: Installing virtualization packages on an existing Red Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide.
- openSUSE: Installing KVM from the openSUSE Virtualization with KVM manual.
- SLES: Installing KVM from the SUSE Linux Enterprise Server Virtualization with KVM manual.

Checking for hardware virtualization support
The processors of your compute host need to support virtualization technology (VT), mainly the Intel VT-x or AMD AMD-V extensions, to use KVM.

To check whether your processor has VT support (which must be enabled in the BIOS), issue the following as root:

# apt-get install cpu-checker
# kvm-ok

If KVM is supported, the output should look something like:

INFO: /dev/kvm exists
KVM acceleration can be used

If KVM is not supported, the output should look something like:

INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used

Alternatively, you can check for the vmx (Intel) or svm (AMD) CPU flags directly:

$ egrep '(vmx|svm)' --color=always /proc/cpuinfo

If KVM is supported, the flags line of the output should include vmx or svm, for example:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm arat dtherm tpr_shadow vnmi flexpriority ept vpid

If KVM is not supported, you should get no output.
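A quick variant (an illustrative shortcut, not one of the commands above) counts the matching flags; a result of 0 means no VT support was found:

$ egrep -c '(vmx|svm)' /proc/cpuinfo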
Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the above command produced no output, you may need to reboot your machine, enter the system BIOS, and enable the VT option.

If KVM acceleration is not supported, Compute should be configured to use a different hypervisor, such as QEMU or Xen.

Enabling KVM

KVM requires the kvm module and either the kvm-intel or kvm-amd module to be loaded. These may have been loaded automatically by your distribution when KVM was installed.

You can check that they have been loaded using lsmod, as follows, with expected output for Intel-based processors:

$ lsmod | grep kvm
kvm_intel 137721 9
kvm 415459 1 kvm_intel
The following sections describe how to load the kernel modules for Intel-based and AMD-based processors if they were not loaded automatically by your distribution's KVM installation process.

Intel-based processors

If your compute host is Intel-based, run the following as root to load the kernel modules:

# modprobe kvm
# modprobe kvm-intel
Add the following lines to /etc/modules so that these modules load on reboot:

kvm
kvm-intel

AMD-based processors

If your compute host is AMD-based, run the following as root to load the kernel modules:

# modprobe kvm
# modprobe kvm-amd

Add the following lines to /etc/modules so that these modules load on reboot:

kvm
kvm-amd
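If you prefer to script these steps, the following sketch (illustrative, not part of the original guide) loads the generic module and then picks the vendor-specific module based on the CPU flags in /proc/cpuinfo:

#!/bin/sh
# Load the generic KVM module, then the vendor-specific one.
modprobe kvm
if grep -q vmx /proc/cpuinfo; then
    modprobe kvm-intel
elif grep -q svm /proc/cpuinfo; then
    modprobe kvm-amd
else
    echo "No hardware virtualization support detected" >&2
    exit 1
fi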
Specifying the CPU model of KVM guests

The Compute service allows you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include:

- Maximizing the performance of virtual machines by exposing new host CPU features to the guest
- Ensuring a consistent default CPU across all machines, removing reliance on variable QEMU defaults
In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names, for example:

"486", "pentium", "pentium2", "pentiumpro", "coreduo", "n270", "qemu32", "kvm32", "cpu64-rhel5", "kvm64", "Conroe", "Penryn", "Nehalem", "Westmere", "Opteron_G1", "Opteron_G2", "Opteron_G3", "Opteron_G4"

These models are defined in the file /usr/share/libvirt/cpu_map.xml. Check this file to determine which models are supported by your local installation.
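For example, to list the model names defined on your host (an illustrative command, assuming the standard file location given above):

$ grep "model name" /usr/share/libvirt/cpu_map.xml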
There are two Compute configuration options that determine the type of CPU model exposed to the hypervisor when using KVM: libvirt_cpu_mode and libvirt_cpu_model.

The libvirt_cpu_mode option can take one of four values: none, host-passthrough, host-model, and custom.

Host model (default for KVM & QEMU)
If your nova.conf contains libvirt_cpu_mode=host-model, libvirt will identify the CPU model in /usr/share/libvirt/cpu_map.xml that most closely matches the host, and then request additional CPU flags to complete the match. This should give close to maximum functionality and performance, while maintaining good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs.

Host passthrough
If your nova.conf contains libvirt_cpu_mode=host-passthrough, libvirt will tell KVM to pass through the host CPU with no modifications. The difference from host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the best possible performance, and can be important for some applications that check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to an exactly matching host CPU.
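For reference (this fragment is illustrative and is generated by the driver, not something you edit by hand), host passthrough appears in the resulting libvirt guest XML as:

<cpu mode='host-passthrough'/>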
Custom

If your nova.conf file contains libvirt_cpu_mode=custom, you can explicitly specify one of the supported named models using the libvirt_cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf should contain:

libvirt_cpu_mode=custom
libvirt_cpu_model=Nehalem
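One way to verify the result (an illustrative check, assuming virsh access on the compute host and nova's usual instance-NNNNNNNN libvirt domain naming) is to dump the guest definition and look for the <cpu> element:

$ virsh dumpxml instance-00000001 | grep -A 2 "<cpu"

The output should contain a <model>Nehalem</model> element; the exact attributes vary by libvirt version.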
None (default for all libvirt-driven hypervisors other than KVM & QEMU)

If your nova.conf contains libvirt_cpu_mode=none, then libvirt will not specify any CPU model at all. It will leave it up to the hypervisor to choose the default model. This setting is equivalent to the Compute service behavior prior to the Folsom release.

KVM Performance Tweaks
A recommended way to improve the performance of KVM is to use the vhost-net kernel module, which improves network performance. To load the kernel module, run the following as root:

# modprobe vhost_net
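To make the module persistent across reboots and confirm that it loaded (illustrative follow-up steps, using the same /etc/modules approach as above on Debian- and Ubuntu-style systems):

# echo vhost_net >> /etc/modules
# lsmod | grep vhost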
Troubleshooting

Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in /var/log/nova/nova-compute.log:

libvirtError: internal error no supported architecture for os type 'hvm'

This is a symptom that the KVM kernel modules have not been loaded.
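If you hit this error, checking and loading the modules as described earlier should resolve it. As a quick reference:

$ lsmod | grep kvm
# modprobe kvm
# modprobe kvm-intel    # or: modprobe kvm-amd on AMD hosts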
If you cannot start VMs after installation without rebooting, it is possible that the permissions are not correct. This can happen if you load the KVM module before you have installed nova-compute. To check the permissions, run ls -l /dev/kvm to see whether the group is set to kvm. If not, run:

$ sudo udevadm trigger
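For reference, correctly set permissions on the device node look something like this (owner and timestamp will vary; the kvm group is what matters):

$ ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 Jul  9 10:01 /dev/kvm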