Advanced Configuration Options

This section describes advanced configuration options for various system components (i.e. config options where the default is usually OK, but that the user may want to tweak). After installing from packages, $NEUTRON_CONF_DIR is /etc/neutron.
OpenStack Networking Server with Plugin

This is the web server that runs the OpenStack Networking API server. It is responsible for loading a plugin and passing API calls to the plugin for processing. The neutron-server receives one or more configuration files as input, for example:

neutron-server --config-file <neutron config> --config-file <plugin config>

The neutron config contains the common neutron configuration parameters. The plugin config contains the plugin-specific flags. The plugin that is run on the service is loaded via the configuration parameter 'core_plugin'. In some cases a plugin may have an agent that performs the actual networking. Specific configuration details can be seen in the Appendix - Configuration File Options.

Most plugins require a SQL database. After installing and starting the database server, set a password for the root account and delete the anonymous accounts:

$> mysql -u root
mysql> update mysql.user set password = password('iamroot') where user = 'root';
mysql> delete from mysql.user where user = '';

Create a database and user account specifically for the plugin:

mysql> create database <database-name>;
mysql> create user '<user-name>'@'localhost' identified by '<password>';
mysql> create user '<user-name>'@'%' identified by '<password>';
mysql> grant all on <database-name>.* to '<user-name>'@'%';

Once the above is done you can update the settings in the relevant plugin configuration files. The plugin-specific configuration files can be found at $NEUTRON_CONF_DIR/plugins.

Some plugins have an L2 agent that performs the actual networking. That is, the agent attaches the virtual machine NIC to the OpenStack Networking network. Each node should have an L2 agent running on it. Note that the agent receives the following input parameters:

neutron-plugin-agent --config-file <neutron config> --config-file <plugin config>

Two things need to be done prior to working with the plugin:

1. Ensure that the core plugin is updated.
2. Ensure that the database connection is correctly set.

The table below contains examples for these settings. Some Linux packages may provide installation utilities that configure these.
Settings

Open vSwitch
  core_plugin ($NEUTRON_CONF_DIR/neutron.conf):
      neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
  sql_connection (in the plugin configuration file):
      mysql://<username>:<password>@localhost/ovs_neutron?charset=utf8
  Plugin Configuration File:
      $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini
  Agent:
      neutron-openvswitch-agent

Linux Bridge
  core_plugin ($NEUTRON_CONF_DIR/neutron.conf):
      neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
  sql_connection (in the plugin configuration file):
      mysql://<username>:<password>@localhost/neutron_linux_bridge?charset=utf8
  Plugin Configuration File:
      $NEUTRON_CONF_DIR/plugins/linuxbridge/linuxbridge_conf.ini
  Agent:
      neutron-linuxbridge-agent
All of the plugin configuration file options can be found in the Appendix - Configuration File Options.
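For example, combining the values from the table above, a minimal Open vSwitch setup might look like the following sketch. The database name ovs_neutron comes from the sql_connection example above, while the user name (neutron), the password (NEUTRON_DBPASS), and the [DEFAULT]/[DATABASE] section names are assumptions that should be checked against the example files shipped with your release.

mysql> create database ovs_neutron;
mysql> create user 'neutron'@'localhost' identified by 'NEUTRON_DBPASS';
mysql> create user 'neutron'@'%' identified by 'NEUTRON_DBPASS';
mysql> grant all on ovs_neutron.* to 'neutron'@'%';

# In $NEUTRON_CONF_DIR/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

# In $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini (section name is an assumption)
[DATABASE]
sql_connection = mysql://neutron:NEUTRON_DBPASS@localhost/ovs_neutron?charset=utf8

# Start the server with both configuration files
$ neutron-server --config-file $NEUTRON_CONF_DIR/neutron.conf \
    --config-file $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini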
DHCP Agent

There is an option to run a DHCP server that will allocate IP addresses to virtual machines running on the network. When a subnet is created, by default, the subnet has DHCP enabled. The node that runs the DHCP agent should run:

neutron-dhcp-agent --config-file <neutron config> --config-file <dhcp config>

Currently the DHCP agent uses dnsmasq to perform the static address assignment. A driver needs to be configured that matches the plugin running on the service.
Basic settings

Open vSwitch
  interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini):
      neutron.agent.linux.interface.OVSInterfaceDriver

Linux Bridge
  interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini):
      neutron.agent.linux.interface.BridgeInterfaceDriver
All of the DHCP agent configuration options can be found in the Appendix - Configuration File Options.
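For illustration, a minimal $NEUTRON_CONF_DIR/dhcp_agent.ini for an Open vSwitch deployment might look like the sketch below. dhcp_driver is shown with its usual dnsmasq value; the exact option set should be checked against the example file shipped with your release.

# $NEUTRON_CONF_DIR/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

# Start the agent with both configuration files
$ neutron-dhcp-agent --config-file $NEUTRON_CONF_DIR/neutron.conf \
    --config-file $NEUTRON_CONF_DIR/dhcp_agent.ini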
Namespace

By default the DHCP agent makes use of Linux network namespaces in order to support overlapping IP addresses. Requirements for network namespace support are described in the Limitations section. If the Linux installation does not support network namespaces, you must disable the use of network namespaces in the DHCP agent config file (the default value of use_namespaces is True):

use_namespaces = False
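A quick way to check whether a node supports network namespaces is to create and remove a throwaway namespace with the iproute2 ip netns commands (the namespace name ns-test below is arbitrary); if these commands fail, disable namespaces as shown above:

$ ip netns add ns-test
$ ip netns list
ns-test
$ ip netns delete ns-test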
L3 Agent

There is an option to run an L3 agent that enables layer 3 forwarding and floating IP support. The node that runs the L3 agent should run:

neutron-l3-agent --config-file <neutron config> --config-file <l3 config>

A driver needs to be configured that matches the plugin running on the service. The driver is used to create the routing interface.
Basic settings

Open vSwitch
  interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini):
      neutron.agent.linux.interface.OVSInterfaceDriver
  external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini):
      br-ex

Linux Bridge
  interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini):
      neutron.agent.linux.interface.BridgeInterfaceDriver
  external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini):
      This field must be empty (or the bridge name for the external network).
The L3 agent communicates with the OpenStack Networking server via the OpenStack Networking API, so the following configuration is required:

OpenStack Identity authentication:
  auth_url="$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v2.0"
  For example, http://10.56.51.210:5000/v2.0

Admin user details:
  admin_tenant_name $SERVICE_TENANT_NAME
  admin_user $Q_ADMIN_USERNAME
  admin_password $SERVICE_PASSWORD

All of the L3 agent configuration options can be found in the Appendix - Configuration File Options.
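Putting these together, the relevant part of $NEUTRON_CONF_DIR/l3_agent.ini for an Open vSwitch deployment might look like the sketch below. The Identity URL is taken from the example above, while the tenant, user, and password values (service, neutron, servicepassword) are placeholders that must match your own deployment.

# $NEUTRON_CONF_DIR/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex
auth_url = http://10.56.51.210:5000/v2.0
admin_tenant_name = service
admin_user = neutron
admin_password = servicepassword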
Namespace

By default the L3 agent makes use of Linux network namespaces in order to support overlapping IP addresses. Requirements for network namespace support are described in the Limitations section. If the Linux installation does not support network namespaces, you must disable the use of network namespaces in the L3 agent config file (the default value of use_namespaces is True):

use_namespaces = False

When use_namespaces is set to False, only one router ID can be supported per node. This must be configured via the configuration variable router_id.

# If use_namespaces is set to False then the agent can only configure one router.
# This is done by setting the specific router_id.
router_id = 1064ad16-36b7-4c2f-86f0-daa2bcbd6b2a

To configure it, run the OpenStack Networking service and create a router, and then set the ID of the created router as the value of router_id in the L3 agent configuration file.

$ neutron router-create myrouter1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 338d42d7-b22e-42c5-9df6-f3674768fe75 |
| name                  | myrouter1                            |
| status                | ACTIVE                               |
| tenant_id             | 0c236f65baa04e6f9b4236b996555d56     |
+-----------------------+--------------------------------------+
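To tie the two steps together, copy the id value from the router-create output above into the L3 agent configuration and then start (or restart) the agent; the file path below assumes the default l3_agent.ini location:

# $NEUTRON_CONF_DIR/l3_agent.ini
use_namespaces = False
router_id = 338d42d7-b22e-42c5-9df6-f3674768fe75

$ neutron-l3-agent --config-file $NEUTRON_CONF_DIR/neutron.conf \
    --config-file $NEUTRON_CONF_DIR/l3_agent.ini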
Multiple Floating IP Pools

The L3 API in OpenStack Networking supports multiple floating IP pools. In OpenStack Networking, a floating IP pool is represented as an external network, and a floating IP is allocated from a subnet associated with the external network. Since each L3 agent can be associated with at most one external network, you need to invoke multiple L3 agents to define multiple floating IP pools. 'gateway_external_network_id' in the L3 agent configuration file indicates the external network that the L3 agent handles. You can run multiple L3 agent instances on one host.

In addition, when you run multiple L3 agents, make sure that handle_internal_only_routers is set to True only for one L3 agent in an OpenStack Networking deployment and set to False for all other L3 agents. Since the default value of this parameter is True, you need to configure it carefully.

Before starting L3 agents, you need to create routers and external networks, then update the configuration files with the UUIDs of the external networks and start the L3 agents.

For the first agent, invoke it with the following l3_agent.ini where handle_internal_only_routers is True.

handle_internal_only_routers = True
gateway_external_network_id = 2118b11c-011e-4fa5-a6f1-2ca34d372c35
external_network_bridge = br-ex

python /opt/stack/neutron/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini

For the second (or later) agent, invoke it with the following l3_agent.ini where handle_internal_only_routers is False.

handle_internal_only_routers = False
gateway_external_network_id = e828e54c-850a-4e74-80a8-8b79c6a285d8
external_network_bridge = br-ex-2
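Since every agent needs its own gateway_external_network_id, a practical approach is to keep one configuration file per agent. The sketch below assumes two hypothetical files, /etc/neutron/l3_agent-1.ini and /etc/neutron/l3_agent-2.ini, containing the two fragments above; the net-create syntax for marking a network as external may vary slightly between releases.

# Create the external networks (one per floating IP pool) and note their UUIDs
$ neutron net-create ext-net-1 -- --router:external=True
$ neutron net-create ext-net-2 -- --router:external=True

# Start one L3 agent per external network, each with its own configuration file
$ python /opt/stack/neutron/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf \
    --config-file=/etc/neutron/l3_agent-1.ini
$ python /opt/stack/neutron/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf \
    --config-file=/etc/neutron/l3_agent-2.ini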