Merge "Remove the outdated virsh_dev_env and its documentation"

Zuul 2024-04-29 23:59:41 +00:00 committed by Gerrit Code Review
commit 4a94c42dbe
6 changed files with 5 additions and 197 deletions


@@ -106,13 +106,5 @@ sushy-tools_ is also installed.
in the previous step.
#. Run the deployment step, as documented in :ref:`deploy`.
Configuring libvirt
-------------------

.. toctree::

   virsh
.. _VirtualBMC: https://docs.openstack.org/virtualbmc/
.. _sushy-tools: https://docs.openstack.org/sushy-tools/


@@ -1,67 +0,0 @@
Deploying with libvirt
======================
To deploy bifrost inside a libvirt VM, and to manage baremetal
servers from within that VM, a special network configuration is
required.
Two networks need to be created:

- default network: a standard virtual network, using NAT.
- provisioning network: used for PXE boot. Since a DHCP server must run
  on the bifrost guest, creating a regular virtual network would cause
  conflicts between the guest and the host. To avoid this, we define a
  network that uses macvtap interfaces associated with the physical
  interface.
Please note that you will need to have macvlan enabled in your kernel.
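You can verify macvlan support with commands along these lines (a minimal
check, assuming a typical Linux host)::

   modprobe macvlan
   lsmod | grep macvlan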
When creating the guest, a minimum of 8GB of memory is needed in order to
build disk images and to run the services that support bifrost.
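In the sample VM template shown later in this change, that corresponds to::

   <memory unit='KiB'>8388608</memory>
   <currentMemory unit='KiB'>8388608</currentMemory>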
When defining the interfaces for the guest, attach both of the networks
created above.
These sample commands will spin up a bifrost VM based on CentOS::

   virsh net-define --file tools/virsh_dev_env/network/default.xml
   virsh net-start default
   virsh net-define --file tools/virsh_dev_env/network/br_direct.xml
   virsh net-start br_direct
   virsh define --file tools/virsh_dev_env/vm/baremetal.xml
   virsh start baremetal
   virsh console baremetal
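To confirm that both networks and the VM actually came up before attaching
to the console, the standard virsh listing commands can be used
(illustrative)::

   virsh net-list --all
   virsh list --all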
When you log in to baremetal, the interface for the provisioning
network will be down. You may need to add an IP address manually::

   ip addr add <<provisioning_ip_address>>/<<mask>> dev <<interface>>
   ip link set <<interface>> up
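For example, with a hypothetical provisioning address of 192.168.100.10/24
on interface eth1::

   ip addr add 192.168.100.10/24 dev eth1
   ip link set eth1 up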
Where to get guest images
-------------------------
In order to create the guest VMs, you will need a cloud image
for the distro you want to deploy. Download the guest image to a
directory on the host, then reference it in the disk section of the
VM template, as shown in the example template.
Please see the `OpenStack Image Guide <https://docs.openstack.org/image-guide/obtain-images.html>`_
for options and locations for obtaining guest images.
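For instance, a CentOS cloud image could be fetched roughly as follows (the
URL is illustrative; check the distribution's download site for current
images)::

   wget -P /opt https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2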
Add credentials to guest image
------------------------------
Normally guest images come without a user and password; they rely on SSH
to allow access. In this case, it can be useful to enable SSH access for
some user from the host to the guest. One way to do that is to create a
config drive and reference it in the template for the guest VM.
A useful script to generate config drives can be found
`here <https://github.com/larsks/virt-utils/blob/master/create-config-drive>`_.
Relying on this script, a config drive can be created with::

   create-config-drive -k ~/.ssh/id_rsa.pub config.iso
And then this ISO can be referenced in the guest VM template.
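Putting it together, the script can be downloaded and run roughly like this
(the raw URL is an assumption based on the repository layout linked above)::

   curl -LO https://raw.githubusercontent.com/larsks/virt-utils/master/create-config-drive
   chmod +x create-config-drive
   ./create-config-drive -k ~/.ssh/id_rsa.pub config.iso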


@@ -284,6 +284,11 @@ installation. You can also use a custom image:
   --image http://example.com/images/my-image.qcow2 \
   --image-checksum 91ebfb80743bb98c59f787c9dc1f3cef \
.. note::
   Please see the `OpenStack Image Guide
   <https://docs.openstack.org/image-guide/obtain-images.html>`_ for options
   and locations for obtaining guest images.
You can also provide a custom configdrive URL (or its content) instead of
the one Bifrost builds for you:


@@ -1,6 +0,0 @@
<network>
  <name>br_direct</name>
  <!-- macvtap bridge over the host's physical NIC, used for provisioning -->
  <forward mode='bridge'>
    <interface dev="eno1" />
  </forward>
</network>


@@ -1,15 +0,0 @@
<!-- Standard NATed virtual network; provides DHCP to the guest -->
<network connections='1'>
  <name>default</name>
  <uuid>76d95e35-3cf6-4a43-bf31-9d10717982e6</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>


@@ -1,101 +0,0 @@
<domain type='kvm' id='33'>
  <name>baremetal</name>
  <uuid>99714ffa-c947-4ff9-818d-ca11ee152494</uuid>
  <!-- 8GB of memory: the documented minimum for building disk images -->
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Broadwell</model>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <!-- Root disk: the cloud image downloaded to the host -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/opt/CentOS-7-x86_64-GenericCloud-1503.qcow2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <!-- Config drive ISO carrying SSH credentials for the guest -->
    <disk type='file' device='cdrom'>
      <source file='/opt/config.iso' />
      <target dev='hdb' bus='ide'/>
      <readonly />
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <!-- NIC attached to the NATed default network -->
    <interface type='network'>
      <mac address='00:16:3e:1a:b3:4a'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <!-- NIC attached to the macvtap provisioning network (br_direct) -->
    <interface type='network'>
      <source network='br_direct'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c71,c869</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c71,c869</imagelabel>
  </seclabel>
</domain>