Networking with nova-network

Understanding the networking configuration options helps you design the best configuration for your Compute instances.

You can either install and configure nova-network for networking between VMs, or use the OpenStack Networking service (neutron) for networking. To configure Compute networking options with OpenStack Networking, see the OpenStack Networking (neutron) documentation.

Networking concepts

This section offers a brief overview of networking concepts for Compute.

Compute assigns a private IP address to each VM instance. (Currently, Compute with
nova-network supports only Linux bridge networking, which enables the virtual interfaces to connect to the outside network through
the physical interface.) Compute makes a distinction between fixed IPs and floating IPs. Fixed IPs
are IP addresses that are assigned to an instance on creation and stay the same until
the instance is explicitly terminated. By contrast, floating IPs are addresses that can
be dynamically associated with an instance. A floating IP address can be disassociated
and associated with another instance at any time. A user can reserve a floating IP for
their project.

The network controller with nova-network
provides virtual networks to enable compute servers to interact with each other and with
the public network. Compute with nova-network
supports the following network modes, which are implemented as “Network Manager”
types.

Flat Network Manager

In Flat mode, a network administrator
specifies a subnet. IP addresses for VM instances are assigned from the
subnet, and then injected into the image on launch. Each instance receives a
fixed IP address from the pool of available addresses. A system
administrator must create the Linux networking bridge (typically named
br100, although this is configurable) on the systems
running the nova-network service.
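As an illustration, the bridge can be created by hand with the standard Linux bridge tools. This is a sketch only: the interface name eth0 and the use of bridge-utils are assumptions, so adapt them to your host and distribution, and arrange for the configuration to persist across reboots.

```shell
# Create the br100 bridge and attach the physical interface to it.
# (eth0 is an assumption; use the NIC that faces your flat network.)
brctl addbr br100
brctl addif br100 eth0
ip link set br100 up
```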
All instances of the system are attached to the same bridge, and this is
configured manually by the network administrator.

Note: Configuration injection currently works only on Linux-style systems
that keep networking configuration in
/etc/network/interfaces.

Flat DHCP Network Manager

In FlatDHCP mode, OpenStack starts a DHCP
server (dnsmasq) to allocate IP addresses to VM
instances from the specified subnet, in addition to manually configuring the
networking bridge. IP addresses for VM instances are assigned from a subnet
specified by the network administrator.

As in Flat mode, all instances are attached to a single bridge on the compute node. Additionally, a DHCP server runs to configure instances (in multi-host mode, one alongside each nova-network service). In this mode, Compute does a
bit more configuration in that it attempts to bridge into an Ethernet device
(flat_interface, eth0 by default). For every
instance, Compute allocates a fixed IP address and configures dnsmasq with
the MAC/IP pair for the VM. Dnsmasq does not take part in the IP address allocation process; it only hands out IPs according to the mapping done by Compute. Instances receive their fixed IPs by issuing a DHCPDISCOVER. These IPs are not assigned to any of the host's network interfaces, only to the guest-side interface for the VM.

In any setup with flat networking, the hosts providing the nova-network service are responsible for
forwarding traffic from the private network. They also run and configure
dnsmasq as a DHCP server listening on this
bridge, usually on IP address 10.0.0.1 (see DHCP server: dnsmasq). Compute can determine the NAT entries for each network, although sometimes NAT is not used, such as when the network is configured with all public IPs, or when a hardware router is used (one of the HA options). Such hosts need to have br100 configured and
physically connected to any other nodes that are hosting VMs. You must set
the flat_network_bridge option or create networks with
the bridge parameter in order to avoid raising an error. Compute nodes have
iptables/ebtables entries created for each project and instance to protect
against IP/MAC address spoofing and ARP poisoning.

Note: In single-host Flat DHCP mode, you can ping VMs through their fixed IP from the nova-network node, but you cannot ping them from the compute nodes.
This is expected behavior.

VLAN Network Manager

VLANManager mode is the default mode for
OpenStack Compute. In this mode, Compute creates a VLAN and bridge for each
tenant. For multiple-machine installation, the VLAN Network Mode requires a
switch that supports VLAN tagging (IEEE 802.1Q). The tenant gets a range of
private IPs that are only accessible from inside the VLAN. In order for a
user to access the instances in their tenant, a special VPN instance (code
named cloudpipe) needs to be created. Compute generates a certificate and
key for the user to access the VPN and starts the VPN automatically. It
provides a private network segment for each tenant's instances that can be
accessed through a dedicated VPN connection from the Internet. In this mode,
each tenant gets its own VLAN, Linux networking bridge, and subnet.

The subnets are specified by the network administrator and are assigned
dynamically to a tenant when required. A DHCP Server is started for each
VLAN to pass out IP addresses to VM instances from the subnet assigned to
the tenant. All instances belonging to one tenant are bridged into the same
VLAN for that tenant. OpenStack Compute creates the Linux networking bridges
and VLANs when required.

These network managers can coexist in a cloud system. However, because you cannot select the type of network for a given tenant, you cannot configure multiple network types in a single Compute installation.

All network managers configure the network using network
drivers. For example, the Linux L3 driver (l3.py and linux_net.py) makes use of iptables, route, and other network management facilities, as well as the libvirt network filtering facilities. The driver is not tied to any particular network manager; all network managers use the same driver. The driver usually initializes (creates bridges and so on) only when the first VM lands on this host node.

All network managers operate in either single-host
or multi-host mode. This choice greatly influences
the network configuration. In single-host mode, a single nova-network service provides a default gateway for VMs and hosts a
single DHCP server (dnsmasq). In multi-host mode, each compute
node runs its own nova-network service. In both
cases, all traffic between VMs and the outside world flows through nova-network. Each mode has its pros and cons; see the Network Topology section in the OpenStack Operations Guide.

All networking options require network connectivity to be already set up between
OpenStack physical nodes. OpenStack does not configure any physical network
interfaces. All network managers automatically create VM virtual interfaces. Some,
but not all, managers create network bridges such as
br100.

All machines must have a public and internal network interface (controlled by the options:
public_interface for the public interface, and
flat_interface and vlan_interface for the
internal interface with flat / VLAN managers). This guide refers to the public
network as the external network and the private network as the internal or tenant
network.

The internal network interface is used for communication with VMs; the interface
should not have an IP address attached to it before OpenStack installation (it
serves merely as a fabric where the actual endpoints are VMs and dnsmasq). Also, you
must put the internal network interface in promiscuous
mode, because it must receive packets whose target MAC address is that of the guest VM, not of the host.

For flat and flat DHCP modes, use the following command to create a network:

$nova network-create vmnet \
  --fixed-range-v4 10.0.0.0/16 --fixed-cidr 10.0.20.0/24 --bridge br100

Where:

--fixed-range-v4 specifies the network subnet.
--fixed-cidr specifies a range of fixed IP addresses to allocate, and can be a subset of the --fixed-range-v4 argument.
--bridge specifies the bridge device to which this network is connected on every compute node.

DHCP server: dnsmasq

The Compute service uses dnsmasq as the
DHCP server when running with either the Flat DHCP Network Manager or the VLAN Network Manager. The nova-network service is responsible for starting up dnsmasq processes.

The behavior of dnsmasq can be customized by creating a
dnsmasq configuration file. Specify the configuration file
using the dnsmasq_config_file configuration option. For
example:

dnsmasq_config_file=/etc/dnsmasq-nova.conf

For an example of how to change the behavior of dnsmasq using
a dnsmasq configuration file, see the OpenStack Configuration Reference. The
dnsmasq documentation also has a more comprehensive dnsmasq
configuration file example.

dnsmasq also acts as a caching DNS server for instances. You
can explicitly specify the DNS server that dnsmasq should use
by setting the dns_server configuration option in
/etc/nova/nova.conf. The following example would configure
dnsmasq to use Google's public DNS server:

dns_server=8.8.8.8

Logging output for dnsmasq goes to the
syslog (typically /var/log/syslog or
/var/log/messages, depending on Linux distribution).
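Returning to the dnsmasq_config_file mechanism mentioned above, a minimal /etc/dnsmasq-nova.conf might look like the following. The option names are standard dnsmasq settings, but the particular values are illustrative assumptions, not recommendations:

```
# Only forward queries that have a domain part, and never forward
# reverse lookups for private address ranges upstream.
domain-needed
bogus-priv
# Log each DHCP transaction; as noted above, the output lands in syslog.
log-dhcp
# Hand out an NTP server to instances via DHCP option 42
# (the 10.0.0.1 address is an assumption).
dhcp-option=option:ntp-server,10.0.0.1
```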
dnsmasq logging output can be useful for troubleshooting if
VM instances boot successfully but are not reachable over the network.

A network administrator can run nova-manage fixed reserve --address IP_ADDRESS to specify the starting IP address (n.n.n.n) to reserve with the DHCP server. This reservation only affects which IP address the VMs start at, not the fixed IP addresses that the nova-network service places on the bridges.

Metadata service

Introduction

The Compute service uses a special metadata service to enable virtual machine
instances to retrieve instance-specific data. Instances access the metadata service
at http://169.254.169.254. The metadata service supports two sets
of APIs: an OpenStack metadata API and an EC2-compatible API. Each of the APIs is
versioned by date.

To retrieve a list of supported versions for the OpenStack metadata API, make a GET request to http://169.254.169.254/openstack. For example:

$curl http://169.254.169.254/openstack
2012-08-10
latest

To list supported versions for the EC2-compatible metadata API, make a GET request to http://169.254.169.254. For example:

$curl http://169.254.169.254
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest

If you write a consumer for one of these APIs, always attempt to access the most recent API version supported by your consumer first, then fall back to an earlier version if the most recent one is not available.

OpenStack metadata API

Metadata from the OpenStack API is distributed in JSON format. To retrieve the metadata, make a GET request to http://169.254.169.254/openstack/2012-08-10/meta_data.json. For example:

$curl http://169.254.169.254/openstack/2012-08-10/meta_data.json

Instances also retrieve user data (passed as the user_data
parameter in the API call or by the --user_data flag in the
nova boot command) through the metadata service, by making a
GET request to http://169.254.169.254/openstack/2012-08-10/user_data. For example:

$curl http://169.254.169.254/openstack/2012-08-10/user_data
#!/bin/bash
echo 'Extra user data here'

EC2 metadata API

The metadata service has an API that is compatible with version 2009-04-04 of the
Amazon EC2 metadata service; virtual machine images that are designed
for EC2 work properly with OpenStack.

The EC2 API exposes a separate URL for each metadata element. You can retrieve a listing of these elements by making a GET query to http://169.254.169.254/2009-04-04/meta-data/. For example:

$curl http://169.254.169.254/2009-04-04/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups

$curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/
ami

$curl http://169.254.169.254/2009-04-04/meta-data/placement/
availability-zone

$curl http://169.254.169.254/2009-04-04/meta-data/public-keys/
0=mykey

Instances can retrieve the public SSH key (identified by keypair name when a user requests a new instance) by making a GET request to http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key. For example:

$curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova

Instances can retrieve user data by making a GET request to http://169.254.169.254/2009-04-04/user-data. For example:

$curl http://169.254.169.254/2009-04-04/user-data
#!/bin/bash
echo 'Extra user data here'

Run the metadata service

The metadata service is implemented by either the nova-api service or the nova-api-metadata service. (The nova-api-metadata service, which retrieves instance-specific metadata, is generally used only when running in multi-host mode.) If you are running the
nova-api service, you must have
metadata as one of the elements of the
enabled_apis configuration option in
/etc/nova/nova.conf. The default
enabled_apis configuration setting includes the metadata
service, so you should not need to modify it.

Hosts access the service at 169.254.169.254:80, and this is
translated to metadata_host:metadata_port by an iptables rule
established by the nova-network service. In multi-host mode, you can set metadata_host to 127.0.0.1.

To enable instances to reach the metadata service, the nova-network service configures iptables to NAT port 80 of the 169.254.169.254 address to the IP address specified in metadata_host (default $my_ip, which is the IP address of the nova-network service) and the port specified in metadata_port (default 8775) in /etc/nova/nova.conf.

Note: The metadata_host configuration option must be an IP
address, not a host name.

The default Compute service settings assume that the nova-network service and the nova-api service are running on the same host.
If this is not the case, you must make this change in the
/etc/nova/nova.conf file on the host running the
nova-network service: set the metadata_host configuration option to the IP address of the host where the nova-api service runs.

Enable ping and SSH on VMs

Be sure you enable access to your VMs by using the euca-authorize or nova secgroup-add-rule command. These commands enable you to ping and SSH to your VMs.

Note: You must run these commands as root only if the credentials used to interact with
nova-api are in
/root/.bashrc. If the EC2 credentials are in the .bashrc file for another user, you must run these commands as that user.

Using nova commands:

$nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Using euca2ools:

$euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 default
$euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default

If you still cannot ping or SSH your instances after issuing the nova
secgroup-add-rule commands, look at the number of
dnsmasq processes that are running. If you have a running instance, check that two dnsmasq processes are running. If not, run the following commands as root:

#killall dnsmasq
#service nova-network restart

Configure public (floating) IP addresses

If you are using Compute's nova-network instead of OpenStack Networking (neutron) for networking in OpenStack, use the procedures in this section to configure floating IP addresses. For instructions on how to configure OpenStack Networking (neutron) to provide access to instances through floating IP addresses, see the OpenStack Networking documentation.

Private and public IP addresses

Every virtual instance is automatically assigned a private IP address. You can
optionally assign public IP addresses to instances. The term floating IP refers to an IP address,
typically public, that you can dynamically add to a running virtual instance.
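The dynamic association just described can be sketched as a toy model. This is illustrative only; nova actually stores these mappings in its database and realizes them with iptables NAT rules:

```python
# Toy model of floating-IP association. Illustrative only: nova keeps
# these mappings in its database and realizes them with iptables NAT.
class FloatingIPPool:
    def __init__(self, addresses):
        self.free = list(addresses)  # reserved but unassociated floating IPs
        self.assoc = {}              # floating IP -> instance fixed IP

    def associate(self, floating_ip, fixed_ip):
        # A floating IP can be (re)pointed at any instance at any time.
        if floating_ip in self.free:
            self.free.remove(floating_ip)
        self.assoc[floating_ip] = fixed_ip

    def disassociate(self, floating_ip):
        if floating_ip in self.assoc:
            del self.assoc[floating_ip]
            self.free.append(floating_ip)

pool = FloatingIPPool(["68.99.26.170", "68.99.26.171"])
pool.associate("68.99.26.170", "10.0.0.3")    # attach to one instance
pool.disassociate("68.99.26.170")             # detach...
pool.associate("68.99.26.170", "10.0.0.5")    # ...and attach to another
```

The floating address itself never changes; only its mapping to a fixed IP does, which is why it can be moved between instances without reconfiguring them.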
OpenStack Compute uses Network Address Translation (NAT) to assign floating IPs to
virtual instances.

If you plan to use this feature, you must edit the /etc/nova/nova.conf file to specify the interface to which the nova-network service binds public IP addresses, as follows:

public_interface=VLAN100

If you make changes to the /etc/nova/nova.conf file while the
nova-network service is running, you
must restart the service.

Traffic between VMs using floating IPs

Because floating IPs are implemented by using a source NAT (SNAT rule in
iptables), security groups can display inconsistent behavior if VMs use their
floating IP to communicate with other VMs, particularly on the same physical
host. Traffic from VM to VM across the fixed network does not have this issue,
and so this is the recommended path. To ensure that traffic does not get SNATed to the floating range, explicitly set:

dmz_cidr=x.x.x.x/y

The x.x.x.x/y value specifies the range of floating IPs for each pool of floating IPs that you define. If the VMs in the source group have floating IPs, this configuration is also required.

Enable IP forwarding

By default, IP forwarding is disabled on most Linux distributions. To use the
floating IP feature, you must enable IP forwarding.

Note: You must enable IP forwarding only on the nodes that run the nova-network service. If you use multi_host mode, ensure that you enable it on all compute nodes. Otherwise, enable it only on the node that runs the nova-network service.

To check whether forwarding is enabled, run:

$cat /proc/sys/net/ipv4/ip_forward
0

Alternatively, you can run:

$sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0

In the previous example, IP forwarding is disabled. To enable it dynamically, run:

#sysctl -w net.ipv4.ip_forward=1

Or:

#echo 1 > /proc/sys/net/ipv4/ip_forward

To make the changes permanent, edit the /etc/sysctl.conf file and update the IP forwarding setting:

net.ipv4.ip_forward = 1

Save the file and run the following command to apply the changes:

#sysctl -p

You can also apply the setting by restarting the network service. On Ubuntu, run:

#/etc/init.d/procps.sh restart

On RHEL/Fedora/CentOS, run:

#service network restart

Create a list of available floating IP addresses

Compute maintains a list of floating IP addresses that you can assign to
instances. Use the nova-manage floating create command to add
entries to this list. For example:

#nova-manage floating create --pool nova --ip_range 68.99.26.170/31

You can use the following nova-manage commands to perform floating IP operations:

#nova-manage floating list
Lists the floating IP addresses in the pool.

#nova-manage floating create --pool POOL_NAME --ip_range CIDR
Creates specific floating IPs for either a single address or a subnet.

#nova-manage floating delete CIDR
Removes floating IP addresses using the same parameters as the create command.

For information about how administrators can associate floating IPs with instances, see Manage IP addresses in the OpenStack Admin User Guide.

Automatically add floating IPs

You can configure the nova-network
service to automatically allocate and assign a floating IP address to virtual
instances when they are launched. Add the following line to the /etc/nova/nova.conf file and restart the nova-network service:

auto_assign_floating_ip=True

Note: If you enable this option and all floating IP addresses have already been allocated, the nova boot command fails.

Remove a network from a project

You cannot remove a network that has already been associated with a project by simply deleting it.

To determine the project ID, you must have administrative rights. You can disassociate the project from the network with a scrub command, passing the project ID as the final parameter:

#nova-manage project scrub --project ID

Multiple interfaces for your instances (multinic)

The multinic feature allows you to attach more than one interface to your instances, enabling several use cases:

- SSL configurations (VIPs)
- Services failover / HA
- Bandwidth allocation
- Administrative / public access to your instances

Each VIF represents a separate network with its own IP block. Each network mode introduces its own set of changes to multinic usage.

Use the multinic feature

To use the multinic feature, first create two networks and attach them to your tenant (still named 'project' on the command line):

$nova network-create first-net --fixed-range-v4 20.20.0.0/24 --project-id $your-project
$nova network-create second-net --fixed-range-v4 20.20.10.0/24 --project-id $your-project
Now every time you spawn a new instance, it gets two IP addresses from the respective DHCP servers:

$nova list
+-----+------------+--------+----------------------------------------+
| ID  | Name       | Status | Networks                               |
+-----+------------+--------+----------------------------------------+
| 124 | Server 124 | ACTIVE | network2=20.20.0.3; private=20.20.10.14|
+-----+------------+--------+----------------------------------------+

Note: Make sure to bring up the second interface on the instance; otherwise the instance will not be reachable through its second IP. Here is an example of how to set up the interfaces within the instance (this is the configuration that needs to be applied inside the image), in /etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet dhcp

If the OpenStack Networking service (neutron) is installed, you can specify the networks to attach to the respective interfaces by using the --nic flag when invoking the nova command:

$nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=NETWORK1_ID --nic net-id=NETWORK2_ID test-vm1

Troubleshoot Networking

Cannot reach floating IPs

If you cannot reach your instances through the floating IP address, check the
following:

Ensure the default security group allows ICMP (ping) and SSH (port 22), so that you can reach the instances:

$nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+

Ensure the NAT rules have been added to iptables on the node that nova-network is running on. As root:

#iptables -L -nv -t nat
-A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
-A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170

Check that the public address, in this example 68.99.26.170, has been
added to your public interface. You should see the address in the listing
when you run ip addr at the command prompt:

$ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0
inet 68.99.26.170/32 scope global eth0
inet6 fe80::82b:2bf:fe1:4b2/64 scope link
valid_lft forever preferred_lft forever

Note: You cannot SSH to an instance with a public IP from within the same server, because the routing configuration does not allow it.

You can use tcpdump to identify whether packets are being
routed to the inbound interface on the compute host. If the packets are
reaching the compute hosts but the connection is failing, the issue may be
that the packet is being dropped by reverse path filtering. Try disabling
reverse-path filtering on the inbound interface. For example, if the inbound
interface is eth2, as root, run:

#sysctl -w net.ipv4.conf.ETH2.rp_filter=0

If this solves your issue, add the following line to /etc/sysctl.conf so that the reverse-path filter is disabled the next time the compute host reboots:

net.ipv4.conf.ETH2.rp_filter=0

Disable firewall

To help debug networking issues with reaching VMs, you can disable the firewall by setting the following option in /etc/nova/nova.conf:

firewall_driver=nova.virt.firewall.NoopFirewallDriver

We strongly recommend you remove this line to re-enable the firewall once your networking issues have been resolved.

Packet loss from instances to nova-network server (VLANManager mode)

If you can SSH to your instances but find that network interactions with your instance are slow, or that certain operations are slower than they should be (for example, sudo), then there may be packet
loss occurring on the connection to the instance.

Packet loss can be caused by Linux networking configuration settings related to bridges. Certain settings can cause packets to be dropped between the VLAN interface (for example, vlan100) and the associated bridge interface (for example, br100) on the host running the nova-network service.

One way to check whether this is the issue in your setup is to open three terminals and run the following commands:

In the first terminal, on the host running nova-network, use tcpdump on the VLAN interface to monitor DNS-related traffic (UDP, port 53). As root, run:

#tcpdump -K -p -i vlan100 -v -vv udp port 53

In the second terminal, also on the host running nova-network, use tcpdump to monitor DNS-related traffic on the bridge interface. As root, run:

#tcpdump -K -p -i br100 -v -vv udp port 53

In the third terminal, SSH into the instance and generate DNS
requests by using the nslookup command:

$nslookup www.google.com

The symptoms may be intermittent, so try running nslookup multiple times. If the network configuration is correct, the command should return immediately each time. If it is not functioning properly, the command hangs for several seconds.

If the nslookup command sometimes hangs, and there are packets that appear in the first terminal but not the second, then the problem may be due to filtering done on the bridges. To disable the filtering, run the following commands as root:

#sysctl -w net.bridge.bridge-nf-call-arptables=0
#sysctl -w net.bridge.bridge-nf-call-iptables=0
#sysctl -w net.bridge.bridge-nf-call-ip6tables=0

If this solves your issue, add the following lines to
/etc/sysctl.conf so that these changes take effect
the next time the host reboots:

net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0

KVM: Network connectivity works initially, then fails

Some administrators have observed an issue with the KVM hypervisor where instances running Ubuntu 12.04 sometimes lose network connectivity after functioning properly for a period of time. Some users have reported success with loading the vhost_net kernel module as a workaround for this issue (see bug #997978). This kernel module may also improve network performance on KVM. To load the kernel module, as root, run:

#modprobe vhost_net

Loading the module has no effect on running instances.
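To make the workaround persist across reboots on a Debian or Ubuntu host, the module name can be added to /etc/modules. This step is an illustration, not part of the original workaround, and the file path is distribution-specific:

```shell
# Load vhost_net now, and list it for loading at every boot
# (Debian/Ubuntu; other distributions use /etc/modules-load.d/).
modprobe vhost_net
echo vhost_net >> /etc/modules
```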