Required Configuration for OpenStack Identity and Compute

To work with OpenStack Networking, you must configure and set up the OpenStack Identity Service and the OpenStack Compute Service.
OpenStack Identity

To configure the OpenStack Identity Service for use with OpenStack Networking:

Create the get_id() Function

The get_id() function stores the ID of created objects, and removes error-prone copying and pasting of object IDs in later steps.

Add the following function to your .bashrc file:

$ function get_id () { echo `"$@" | awk '/ id / { print $4 }'` }

Source the .bashrc file:

$ source .bashrc

Create the OpenStack Networking Service Entry

OpenStack Networking must be available in the OpenStack Compute service catalog. Create the service as follows:

$ NEUTRON_SERVICE_ID=$(get_id keystone service-create --name neutron --type network --description 'OpenStack Networking Service')

Create the OpenStack Networking Service Endpoint Entry

The way that you create an OpenStack Networking endpoint entry depends on whether you are using the SQL catalog driver or the template catalog driver.

If you are using the SQL driver, run the following command with these parameters: region ($REGION), IP address of the OpenStack Networking server ($IP), and service ID ($NEUTRON_SERVICE_ID, obtained in the previous step):

$ keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID \
  --publicurl "http://$IP:9696/" --adminurl "http://$IP:9696/" --internalurl "http://$IP:9696/"

For example:

$ keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
  --publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/"

If you are using the template driver, add the following content to your OpenStack Compute catalog template file (default_catalog.templates), using these parameters: region ($REGION) and IP address of the OpenStack Networking server ($IP).
catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
catalog.$REGION.network.name = Network Service

For example:

catalog.$REGION.network.publicURL = http://10.211.55.17:9696
catalog.$REGION.network.adminURL = http://10.211.55.17:9696
catalog.$REGION.network.internalURL = http://10.211.55.17:9696
catalog.$REGION.network.name = Network Service

Create the OpenStack Networking Service User

You must provide admin user credentials that OpenStack Compute and some internal components of OpenStack Networking can use to access the OpenStack Networking API. The suggested approach is to create a special service tenant, create a neutron user within this tenant, and assign this user an admin role.

Create the admin role:

$ ADMIN_ROLE=$(get_id keystone role-create --name=admin)

Create the service tenant:

$ SERVICE_TENANT=$(get_id keystone tenant-create --name service --description "Services Tenant")

Create the neutron user within the service tenant:

$ NEUTRON_USER=$(get_id keystone user-create --name=neutron --pass="$NEUTRON_PASSWORD" --email=demo@example.com --tenant-id service)

Establish the relationship among the tenant, user, and role:

$ keystone user-role-add --user_id $NEUTRON_USER --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT

See the OpenStack Installation Guides for more details about creating service entries and service users.
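To see what get_id() actually extracts, the following sketch pipes a sample keystone table (the fake_keystone function and the ID value are illustrative stand-ins, not real output from your deployment) through the same awk expression:

```shell
# POSIX-sh version of get_id(): pick the fourth whitespace-separated
# field from the table row containing " id ".
get_id () {
    echo $("$@" | awk '/ id / { print $4 }')
}

# Hypothetical stand-in for a keystone command, emitting a sample table:
fake_keystone () {
cat <<'EOF'
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |   OpenStack Networking Service   |
| id          | 26a55b340e254ad5bb78c0b14391e153 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+
EOF
}

NEUTRON_SERVICE_ID=$(get_id fake_keystone)
echo "$NEUTRON_SERVICE_ID"
```

Running this prints only the bare ID, which is why the variable can be passed directly to later keystone commands.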
OpenStack Compute

If OpenStack Networking is used, you must not run OpenStack Compute's nova-network (unlike traditional OpenStack Compute deployments). Instead, OpenStack Compute delegates almost all network-related decisions to OpenStack Networking. Tenant-facing API calls to manage objects like security groups and floating IPs are proxied by OpenStack Compute to the OpenStack Networking APIs. However, operator-facing tools (for example, nova-manage) are not proxied and should not be used.

It is very important that you refer to this guide when configuring networking, rather than relying on OpenStack Compute networking documentation or past experience with OpenStack Compute. If a Nova CLI command or configuration option related to networking is not mentioned in this guide, the command is probably not supported for use with OpenStack Networking. In particular, you cannot use CLI tools like nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, with OpenStack Networking.

It is strongly recommended that you uninstall nova-network and reboot any physical nodes that have been running nova-network before using them to run OpenStack Networking. Inadvertently running the nova-network process while using OpenStack Networking can cause problems, as can stale iptables rules pushed down by a previously running nova-network.

To ensure that OpenStack Compute works properly with OpenStack Networking (rather than the legacy nova-network mechanism), you must adjust settings in the nova.conf configuration file.
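A quick pre-flight check along these lines (a sketch, not an official tool) can confirm that no nova-network process survives on a node before you run OpenStack Networking on it:

```shell
# Sketch of a pre-flight check: warn if a nova-network process is still
# running on this host. pgrep -x matches the exact process name only.
if pgrep -x nova-network > /dev/null 2>&1; then
    echo "WARNING: nova-network is still running; stop it and reboot this node first."
else
    echo "OK: no nova-network process found."
fi
```

Note that this only detects a running process; stale iptables rules left behind by nova-network are cleared by the recommended reboot.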
Networking API and Credential Configuration

Each time a VM is provisioned or deprovisioned in OpenStack Compute, nova-* services communicate with OpenStack Networking using the standard API. For this to happen, you must configure the following items in the nova.conf file (used by each nova-compute and nova-api instance).
nova.conf API and Credential Settings
Item Configuration
network_api_class Modify from the default to nova.network.neutronv2.api.API, to indicate that OpenStack Networking should be used rather than the traditional nova-network networking model.
neutron_url Update to the hostname/IP and port of the neutron-server instance for this deployment.
neutron_auth_strategy Keep the default keystone value for all production deployments.
neutron_admin_tenant_name Update to the name of the service tenant created in the above section on OpenStack Identity configuration.
neutron_admin_username Update to the name of the user created in the above section on OpenStack Identity configuration.
neutron_admin_password Update to the password of the user created in the above section on OpenStack Identity configuration.
neutron_admin_auth_url Update to the OpenStack Identity server IP and port. This is the Identity (keystone) admin API server IP and port value, and not the Identity service API IP and port.
Security Group Configuration

The OpenStack Networking Service provides security group functionality using a mechanism that is more flexible and powerful than the security group capabilities built into OpenStack Compute. Therefore, if you use OpenStack Networking, you should always disable the built-in security groups and proxy all security group calls to the OpenStack Networking API. If you do not, security policies can conflict because both services apply them simultaneously. To proxy security groups to OpenStack Networking, use the following configuration values in nova.conf:
nova.conf Security Group Settings
Item Configuration
firewall_driver Update to nova.virt.firewall.NoopFirewallDriver, so that nova-compute does not perform iptables-based filtering itself.
security_group_api Update to neutron, so that all security group requests are proxied to the OpenStack Networking Service.
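Put together, the two settings above amount to this nova.conf fragment (values exactly as described in the table):

```ini
# nova.conf — proxy all security group handling to OpenStack Networking
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver
```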
Metadata Configuration The OpenStack Compute service allows VMs to query metadata associated with a VM by making a web request to a special 169.254.169.254 address. OpenStack Networking supports proxying those requests to nova-api, even when the requests are made from isolated networks, or from multiple networks that use overlapping IP addresses. To enable proxying the requests, you must update the following fields in nova.conf.
nova.conf Metadata Settings
Item Configuration
service_neutron_metadata_proxy Update to true, otherwise nova-api will not properly respond to requests from the neutron-metadata-agent.
neutron_metadata_proxy_shared_secret Update to a secret string that acts as a shared password. You must configure the same value in the metadata_agent.ini file, which is used to authenticate requests made for metadata. The default value of an empty string in both files will allow metadata to function, but will not be secure if any non-trusted entities have access to the metadata APIs exposed by nova-api.
As a precaution, even when using neutron_metadata_proxy_shared_secret, it is recommended that you do not expose metadata using the same nova-api instances that are used for tenants. Instead, you should run a dedicated set of nova-api instances for metadata that are available only on your management network. Whether a given nova-api instance exposes metadata APIs is determined by the value of enabled_apis in its nova.conf.
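The shared secret must be identical on both sides. A sketch of the two fragments, with SECRET_VALUE as a placeholder you replace with your own secret:

```ini
# nova.conf (on nodes running nova-api)
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=SECRET_VALUE

# metadata_agent.ini (on the node running neutron-metadata-agent)
metadata_proxy_shared_secret=SECRET_VALUE
```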
Vif-plugging Configuration

When nova-compute creates a VM, it "plugs" each of the VM's vNICs into an OpenStack Networking-controlled virtual switch, and informs the virtual switch about the OpenStack Networking port ID associated with each vNIC. Different OpenStack Networking plugins may require different types of vif-plugging. You must specify the type of vif-plugging to be used for each nova-compute instance in the nova.conf file.

The following plugins support the "port bindings" API extension, which allows Nova to query for the type of vif-plugging required:

- OVS Plugin
- Linux Bridge Plugin
- NEC Plugin
- Big Switch Plugin
- Hyper-V Plugin
- Brocade Plugin

For these plugins, the default values in nova.conf are sufficient. For other plugins, see the subsections below for vif-plugging configuration, or consult the external plugin documentation. The vif-plugging configuration required for nova-compute might vary even within a single deployment if your deployment includes heterogeneous compute platforms (for example, some Compute hosts are KVM while others are ESX).
Vif-plugging with Nicira NVP Plugin

The choice of vif-plugging for the NVP Plugin depends on which version of libvirt you use. To check your libvirt version:

$ libvirtd --version

In the nova.conf file, update the libvirt_vif_driver value according to your libvirt version.
nova.conf libvirt Settings
Version / Platform Required Value
libvirt (version >= 0.9.11) nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
libvirt (version < 0.9.11) nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
ESX No vif-plugging configuration is required
XenServer nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
For example:

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver

When using libvirt < 0.9.11, you must also edit /etc/libvirt/qemu.conf, uncomment the cgroup_device_acl entry, add '/dev/net/tun' to its list of devices, and then restart libvirtd.
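After that edit, the qemu.conf entry looks like the following (the device list shown is the one shipped in libvirt's default qemu.conf; your file may list different devices, in which case only /dev/net/tun needs to be appended):

```ini
# /etc/libvirt/qemu.conf — uncommented, with /dev/net/tun appended
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun"
]
```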
Example nova.conf (for nova-compute and nova-api)

Example values for the above settings, assuming a cloud controller node running OpenStack Compute and OpenStack Networking with an IP address of 192.168.1.2, and vif-plugging using the LibvirtHybridOVSBridgeDriver:

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.1.2:35357/v2.0
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=foo
# needed only for nova-compute and only for some plugins
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
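A rough sanity check (a sketch, not an official tool) can confirm that every setting discussed in this section is present in a nova.conf. The script below writes a sample file mirroring the example values; on a real node, point NOVA_CONF at /etc/nova/nova.conf instead:

```shell
# Create a sample nova.conf to check against (illustrative values only).
NOVA_CONF=$(mktemp)
cat > "$NOVA_CONF" <<'EOF'
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.1.2:35357/v2.0
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=foo
EOF

# Report any setting from this section that is missing from the file.
missing=0
for key in network_api_class neutron_url neutron_auth_strategy \
           neutron_admin_tenant_name neutron_admin_username \
           neutron_admin_password neutron_admin_auth_url \
           security_group_api firewall_driver \
           service_neutron_metadata_proxy \
           neutron_metadata_proxy_shared_secret; do
    grep -q "^${key}=" "$NOVA_CONF" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "All expected settings present."
rm -f "$NOVA_CONF"
```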