KVM

KVM is configured as the default hypervisor for Compute.

This document contains several sections about hypervisor selection. If you are reading this document linearly, you do not want to load the KVM module before you install nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules, which sets the correct permissions on the /dev/kvm device node.

To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:

compute_driver=libvirt.LibvirtDriver
libvirt_type=kvm

The KVM hypervisor supports the following virtual machine image formats:

- Raw
- QEMU Copy-on-write (qcow2)
- QED (QEMU Enhanced Disk)
- VMware virtual machine disk format (vmdk)

Enable KVM

This section describes how to enable KVM on your system. For more information, see the
following distribution-specific documentation:

- Fedora: Getting started with virtualization from the Fedora project wiki.
- Ubuntu: KVM/Installation from the Community Ubuntu documentation.
- Debian: Virtualization with KVM from the Debian handbook.
- Red Hat Enterprise Linux: Installing virtualization packages on an existing Red Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide.
- openSUSE: Installing KVM from the openSUSE Virtualization with KVM manual.
- SLES: Installing KVM from the SUSE Linux Enterprise Server Virtualization with KVM manual.

Specify the CPU model of KVM guests

The Compute service enables you to control the guest CPU model that is exposed to KVM
virtual machines. Use cases include:

- Maximizing the performance of virtual machines by exposing new host CPU features to the guest
- Ensuring a consistent default CPU across all machines, removing reliance on variable QEMU defaults

In libvirt, the CPU is specified by providing a base CPU model name (which is a
shorthand for a set of feature flags), a set of additional feature flags, and the
topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard
CPU model names. These models are defined in the
/usr/share/libvirt/cpu_map.xml file. Check this file to
determine which models are supported by your local installation.

Two Compute configuration options define which type of CPU model is exposed to the hypervisor when using KVM: libvirt_cpu_mode and libvirt_cpu_model.

The libvirt_cpu_mode option can take one of the following values: none, host-passthrough, host-model, and custom.

Host model (default for KVM and QEMU)

If your nova.conf file contains
libvirt_cpu_mode=host-model, libvirt identifies the CPU model
in the /usr/share/libvirt/cpu_map.xml file that most closely
matches the host, and requests additional CPU flags to complete the match. This
configuration provides the maximum functionality and performance and maintains good
reliability and compatibility if the guest is migrated to another host with slightly
different host CPUs.

Host pass through

If your nova.conf file contains libvirt_cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. Unlike host-model, which matches only feature flags, host-passthrough matches every last detail of the host CPU. This gives the best possible performance, and can be important to applications that check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to an exactly matching host CPU.

Custom

If your nova.conf file contains
libvirt_cpu_mode=custom, you can explicitly specify one of the supported named models using the libvirt_cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:

libvirt_cpu_mode=custom
libvirt_cpu_model=Nehalem

None (default for all libvirt-driven hypervisors other than KVM and QEMU)

If your nova.conf file contains libvirt_cpu_mode=none, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model.

Guest agent support

Use guest agents to enable optional access between compute nodes and guests through a
socket, using the QMP protocol.

To enable this feature, set hw_qemu_guest_agent=yes as a metadata parameter on the image from which you create guest-agent-capable instances. You can explicitly disable the feature by setting hw_qemu_guest_agent=no in the image metadata.

KVM performance tweaks

The VhostNet kernel module improves network performance. To load the kernel module, run the following command as root:

# modprobe vhost_net

Troubleshoot KVM

Trying to launch a new virtual machine instance fails with the
ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:

libvirtError: internal error no supported architecture for os type 'hvm'

This message indicates that the KVM kernel modules were not loaded.

If you cannot start VMs after installation without rebooting, the permissions might not be correct. This can happen if you load the KVM module before you install nova-compute. To check whether the group is set to kvm, run:

# ls -l /dev/kvm

If it is not set to kvm, run:

# udevadm trigger
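The two checks above can be combined into a small diagnostic sketch. This is only an illustration, assuming a Linux host where the qemu-kvm udev rule is expected to set the group of /dev/kvm to kvm; the script reports problems but the suggested fixes must still be run as root:

```shell
#!/bin/sh
# Diagnostic sketch for the two failure modes described above.
# Assumption: /dev/kvm should belong to group "kvm" (set by the qemu-kvm udev rule).

check_kvm() {
    # 1. Are the KVM kernel modules loaded?
    if grep -q '^kvm' /proc/modules; then
        echo "KVM kernel modules are loaded"
    else
        echo "KVM kernel modules are not loaded (expect 'no supported architecture' errors)"
    fi

    # 2. Does /dev/kvm exist, and does it have the expected group?
    if [ -e /dev/kvm ]; then
        group=$(stat -c '%G' /dev/kvm)
        echo "/dev/kvm group: $group"
        if [ "$group" != "kvm" ]; then
            echo "group is not kvm; as root, run: udevadm trigger"
        fi
    else
        echo "/dev/kvm is missing; load the KVM kernel modules first"
    fi
}

check_kvm
```

On a correctly configured compute node the script reports that the modules are loaded and that /dev/kvm belongs to group kvm; any other output points at one of the two remediation steps above.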