DevStack Networking
An important part of the DevStack experience is networking that works by default for created guests. This might not be optimal for your particular testing environment, so this document tries its best to explain what's going on.
Defaults
If you don't specify any configuration you will get the following:
- neutron (including l3 with openvswitch)
- private project networks for each openstack project
- a floating ip range of 172.24.4.0/24 with the gateway of 172.24.4.1
- the demo project configured with fixed ips on 10.0.0.0/24
- a br-ex interface controlled by neutron for all its networking (this is not connected to any physical interfaces)
- DNS resolution for guests based on the resolv.conf for your host
- an ip masq rule that allows created guests to route out
This creates an environment which is isolated to the single host. Guests can get to the external network for package updates. Tempest tests will work in this environment.
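If those default ranges collide with addresses already in use on your network, they can be overridden in local.conf. A minimal sketch using the standard DevStack variables (the values here are examples, not requirements):

[[local|localrc]]
FLOATING_RANGE=172.24.5.0/24
PUBLIC_NETWORK_GATEWAY=172.24.5.1
FIXED_RANGE=10.0.1.0/24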
Note
By default all OpenStack environments have security group rules which block all inbound packets to guests. If you want to be able to ssh or ping your created guests, you should run the following:
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
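With those rules in place, a quick end-to-end check is to boot a guest, attach a floating ip, and ping it from the devstack host. A rough sketch, assuming the default cirros image and the demo project's private network (image and network names vary with your DevStack version):

openstack server create --flavor m1.tiny --image cirros-0.6.2-x86_64-disk --network private test-vm
openstack floating ip create public
openstack server add floating ip test-vm 172.24.4.10  # use the address the previous command returned
ping 172.24.4.10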
Locally Accessible Guests
If you want to make your guests accessible to other machines on your network, we have to connect br-ex to a physical interface.
Dedicated Guest Interface
If you have 2 or more interfaces on your devstack server, you can allocate an interface to neutron to fully manage. This should not be the same interface you use to ssh into the devstack server itself.
This is done by setting the PUBLIC_INTERFACE attribute.
[[local|localrc]]
PUBLIC_INTERFACE=eth1
That will put all layer 2 traffic from your guests onto the main network. When running in this mode the ip masq rule is not added by devstack; you are responsible for making routing work on your local network.
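After stack.sh completes you can confirm that neutron really owns the interface; a quick check, assuming the default br-ex bridge name and the eth1 example above:

sudo ovs-vsctl list-ports br-ex  # eth1 should appear as a port on the bridge
ip addr show eth1                # the interface itself should carry no ip address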
Shared Guest Interface
Warning
This is not a recommended configuration. Because of interactions between ovs and bridging, if you reboot your box with active networking you may lose network connectivity to your system.
If you need your guests accessible on the network, but only have one interface (on something like a NUC), you can share your one network. But in order for this to work you need to manually set a number of addresses, and have them all exactly correct.
[[local|localrc]]
PUBLIC_INTERFACE=eth0
HOST_IP=10.42.0.52
FLOATING_RANGE=10.42.0.52/24
PUBLIC_NETWORK_GATEWAY=10.42.0.1
Q_FLOATING_ALLOCATION_POOL=start=10.42.0.250,end=10.42.0.254
In order for this scenario to work the floating ip network must match the default networking on your server. This breaks HOST_IP detection, as we exclude the floating range by default, so you have to specify HOST_IP manually.
The PUBLIC_NETWORK_GATEWAY is the gateway that the server would normally use to get off the network.
Q_FLOATING_ALLOCATION_POOL controls the range of floating ips that will be handed out. As we are sharing your existing network, you'll want to give it a slice that your local dhcp server is not allocating. Otherwise you could easily have conflicting ip addresses and cause havoc with your local network.
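Once stacked, you can sanity-check the pool without touching your dhcp server. A sketch, with addresses following the example above:

openstack floating ip create public  # should return an address between 10.42.0.250 and 10.42.0.254

Any guest you then associate that address with (given the security group rules from earlier) should be reachable from other machines on the 10.42.0.0/24 network.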