Fix xml and json validation errors in openstack-manuals

Change-Id: Iaeb551d44d9a3cd6e7131e925fac89ed269515bc
Author: Diane Fleming
Parent: 0394277a2c
Commit: ed15af07b9
@ -1,8 +1,13 @@
[DEFAULT]
repo_name = openstack-manuals

# Not in DocBook format
file_exception = emc-vmax.xml
file_exception = emc-vnx.xml

# Not whitelisted via bk-*.xml
file_exception = st-training-guides.xml

# Not in xml format
file_exception = ha-guide-docinfo.xml
@ -167,8 +167,9 @@
Compute services manages instances.</para>
<para>For more information about creating and troubleshooting
images, see the <citetitle><link
xlink:href="http://docs.openstack.org/image-guide/content/"
>OpenStack Virtual Machine Image Guide</link></citetitle>.</para>
xlink:href="http://docs.openstack.org/image-guide/content/"
>OpenStack Virtual Machine Image
Guide</link></citetitle>.</para>
<para>For more information about image configuration options,
see the <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-image-service.html"
@ -458,9 +459,8 @@
a native package for most Linux distributions, or you can
install the latest version using the
<application>pip</application> python package
installer:
<programlisting language="bash">sudo pip install python-novaclient</programlisting>
</para>
installer:</para>
<screen><prompt>$</prompt> <userinput>sudo pip install python-novaclient</userinput></screen>
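<para>As an optional, illustrative check only (not shown in the guide; output varies by client version), you can confirm the client installed correctly by listing its subcommands:</para>
<screen><prompt>$</prompt> <userinput>nova help</userinput></screen>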
<para>For more information about
<application>python-novaclient</application> and other
available command-line tools, see the <link
@ -676,7 +676,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
<filename>/etc/openstack-dashboard/local_settings.py</filename>
and on openSUSE and SUSE Linux Enterprise Server:
<filename>/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>)
<programlisting>OPENSTACK_HYPERVISOR_FEATURE = {
<programlisting language="ini">OPENSTACK_HYPERVISOR_FEATURE = {
...
'can_set_password': False,
}</programlisting></para>
@ -688,7 +688,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
default. To enable it, set the following option in
<filename>/etc/nova/nova.conf</filename>:</para>
<para>
<programlisting>[libvirt]
<programlisting language="ini">[libvirt]
inject_password=true</programlisting>
</para>
<para>When enabled, Compute will modify the password of
@ -874,7 +874,8 @@ inject_password=true</programlisting>
IP addresses to VM instances from the specified subnet
in addition to manually configuring the networking
bridge. IP addresses for VM instances are grabbed from
a subnet specified by the network administrator.</para>
a subnet specified by the network
administrator.</para>
<para>Like Flat Mode, all instances are attached to a
single bridge on the compute node. In addition a DHCP
server is running to configure instances (depending on
@ -885,27 +886,28 @@ inject_password=true</programlisting>
(<literal>flat_interface</literal>, eth0 by
default). For every instance, nova allocates a fixed
IP address and configures dnsmasq with the MAC/IP pair
for the VM. Dnsmasq doesn't take part in
the IP address allocation process, it only hands out
IPs according to the mapping done by nova. Instances
receive their fixed IPs by doing a dhcpdiscover.
These IPs are <emphasis role="italic">not</emphasis>
assigned to any of the host's network interfaces,
only to the VM's guest-side interface.</para>
<para>In any setup with flat networking, the hosts providing
the <systemitem class="service">nova-network</systemitem>
service are responsible for forwarding
traffic from the private network. They also run and
configure dnsmasq as a DHCP server listening on
this bridge, usually on IP address 10.0.0.1 (see
<link linkend="section_dnsmasq">DHCP server: dnsmasq
</link>). Compute can determine the NAT entries for
each network, though sometimes NAT is not used, such
as when configured with all public IPs or a hardware
router is used (one of the HA options). Such hosts
need to have <literal>br100</literal> configured and
physically connected to any other nodes that are hosting
VMs. You must set the <literal>flat_network_bridge</literal>
for the VM. Dnsmasq doesn't take part in the IP
address allocation process, it only hands out IPs
according to the mapping done by nova. Instances
receive their fixed IPs by doing a dhcpdiscover. These
IPs are <emphasis role="italic">not</emphasis>
assigned to any of the host's network interfaces, only
to the VM's guest-side interface.</para>
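<para>As a minimal illustrative sketch only, the options named above normally live in <filename>/etc/nova/nova.conf</filename>; the manager class and values below are assumptions and depend on your deployment:</para>
<programlisting language="ini"># assumed example values for flat DHCP networking
network_manager=nova.network.manager.FlatDHCPManager
flat_interface=eth0
flat_network_bridge=br100</programlisting>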
<para>In any setup with flat networking, the hosts
providing the <systemitem class="service"
>nova-network</systemitem> service are responsible
for forwarding traffic from the private network. They
also run and configure dnsmasq as a DHCP server
listening on this bridge, usually on IP address
10.0.0.1 (see <link linkend="section_dnsmasq">DHCP
server: dnsmasq </link>). Compute can determine
the NAT entries for each network, though sometimes NAT
is not used, such as when configured with all public
IPs or a hardware router is used (one of the HA
options). Such hosts need to have
<literal>br100</literal> configured and physically
connected to any other nodes that are hosting VMs. You
must set the <literal>flat_network_bridge</literal>
option or create networks with the bridge parameter in
order to avoid raising an error. Compute nodes have
iptables/ebtables entries created for each project and
@ -959,11 +961,11 @@ inject_password=true</programlisting>
creating a dnsmasq configuration file. Specify the
config file using the
<literal>dnsmasq_config_file</literal>
configuration option. For example:
<programlisting language="ini">dnsmasq_config_file=/etc/dnsmasq-nova.conf</programlisting>
See the <link
configuration option. For example:</para>
<programlisting language="ini">dnsmasq_config_file=/etc/dnsmasq-nova.conf</programlisting>
<para>See the <link
xlink:href="http://docs.openstack.org/havana/config-reference/content/"
><citetitle> OpenStack Configuration
><citetitle>OpenStack Configuration
Reference</citetitle></link> for an example of
how to change the behavior of dnsmasq using a dnsmasq
configuration file. The dnsmasq documentation has a
@ -976,8 +978,8 @@ inject_password=true</programlisting>
<literal>dns_server</literal> configuration option
in <filename>/etc/nova/nova.conf</filename>. The
following example would configure dnsmasq to use
Google's public DNS server:
<programlisting language="ini">dns_server=8.8.8.8</programlisting></para>
Google's public DNS server:</para>
<programlisting language="ini">dns_server=8.8.8.8</programlisting>
<para>Dnsmasq logging output goes to the syslog (typically
<filename>/var/log/syslog</filename> or
<filename>/var/log/messages</filename>, depending
@ -1009,14 +1011,14 @@ inject_password=true</programlisting>
Each of the APIs is versioned by date.</para>
<para>To retrieve a list of supported versions for the
OpenStack metadata API, make a GET request to
<programlisting>http://169.254.169.254/openstack</programlisting>
<literal>http://169.254.169.254/openstack</literal>
For example:</para>
<para><screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack</userinput>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack</userinput>
<computeroutput>2012-08-10
latest</computeroutput></screen>
To retrieve a list of supported versions for the
<para>To list supported versions for the
EC2-compatible metadata API, make a GET request to
<programlisting>http://169.254.169.254</programlisting></para>
<literal>http://169.254.169.254</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254</userinput>
<computeroutput>1.0
@ -1039,39 +1041,22 @@ latest</computeroutput></screen>
<title>OpenStack metadata API</title>
<para>Metadata from the OpenStack API is distributed
in JSON format. To retrieve the metadata, make a
GET request to:</para>
<programlisting>http://169.254.169.254/openstack/2012-08-10/meta_data.json</programlisting>
GET request to
<literal>http://169.254.169.254/openstack/2012-08-10/meta_data.json</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/2012-08-10/meta_data.json</userinput>
<computeroutput>{"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", "availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "meta": {"priority": "low", "role": "webserver"}, "public_keys": {"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"}, "name": "test"}</computeroutput></screen>
<para>Here is the same content after having run
through a JSON pretty-printer:</para>
<programlisting language="json">{
"availability_zone": "nova",
"hostname": "test.novalocal",
"launch_index": 0,
"meta": {
"priority": "low",
"role": "webserver"
},
"name": "test",
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"
},
"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38"
}</programlisting>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/2012-08-10/meta_data.json</userinput></screen>
<programlisting language="json"><xi:include href="../common/samples/list_metadata.json" parse="text"/></programlisting>
<para>Instances also retrieve user data (passed as the
<literal>user_data</literal> parameter in the
API call or by the <literal>--user_data</literal>
flag in the <command>nova boot</command> command)
through the metadata service, by making a GET
request to:
<programlisting>http://169.254.169.254/openstack/2012-08-10/user_data</programlisting>
For example:</para>
<para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/2012-08-10/user_data</userinput><computeroutput>#!/bin/bash
request to
<literal>http://169.254.169.254/openstack/2012-08-10/user_data</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/2012-08-10/user_data</userinput>
<computeroutput>#!/bin/bash
echo 'Extra user data here'</computeroutput></screen>
</para>
</simplesect>
<simplesect>
<title>EC2 metadata API</title>
@ -1083,8 +1068,8 @@ echo 'Extra user data here'</computeroutput></screen>
properly with OpenStack.</para>
<para>The EC2 API exposes a separate URL for each
metadata. You can retrieve a listing of these
elements by making a GET query to:</para>
<programlisting>http://169.254.169.254/2009-04-04/meta-data/</programlisting>
elements by making a GET query to
<literal>http://169.254.169.254/2009-04-04/meta-data/</literal></para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/meta-data/</userinput><computeroutput>ami-id
ami-launch-index
@ -1111,14 +1096,14 @@ security-groups</computeroutput></screen>
<computeroutput>0=mykey</computeroutput></screen>
<para>Instances can retrieve the public SSH key
(identified by keypair name when a user requests a
new instance) by making a GET request to:</para>
<programlisting>http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key</programlisting>
new instance) by making a GET request to
<literal>http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key</userinput>
<computeroutput>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova</computeroutput></screen>
<para>Instances can retrieve user data by making a GET
request to:</para>
<programlisting>http://169.254.169.254/2009-04-04/user-data</programlisting>
request to
<literal>http://169.254.169.254/2009-04-04/user-data</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/user-data</userinput>
<computeroutput>#!/bin/bash
@ -1239,9 +1224,9 @@ echo 'Extra user data here'</computeroutput></screen>
<para>Every virtual instance is automatically assigned
a private IP address. You can optionally assign
public IP addresses to instances. The term
<glossterm baseform="floating IP address">floating
IP</glossterm> refers to
an IP address, typically public, that you can
<glossterm baseform="floating IP address"
>floating IP</glossterm> refers to an IP
address, typically public, that you can
dynamically add to a running virtual instance.
OpenStack Compute uses Network Address Translation
(NAT) to assign floating IPs to virtual
@ -1252,7 +1237,7 @@ echo 'Extra user data here'</computeroutput></screen>
class="service">nova-network</systemitem>
service binds public IP addresses, as
follows:</para>
<programlisting>public_interface=<replaceable>vlan100</replaceable></programlisting>
<programlisting language="ini">public_interface=<replaceable>vlan100</replaceable></programlisting>
<para>If you make changes to the
<filename>/etc/nova/nova.conf</filename> file
while the <systemitem class="service"
@ -1271,7 +1256,7 @@ echo 'Extra user data here'</computeroutput></screen>
and so this is the recommended path. To ensure
that traffic does not get SNATed to the
floating range, explicitly set
<programlisting>dmz_cidr=x.x.x.x/y</programlisting>.
<programlisting language="ini">dmz_cidr=x.x.x.x/y</programlisting>.
The <literal>x.x.x.x/y</literal> value
specifies the range of floating IPs for each
pool of floating IPs that you define. If the
@ -1310,7 +1295,7 @@ echo 'Extra user data here'</computeroutput></screen>
<para>To make the changes permanent, edit the
<filename>/etc/sysctl.conf</filename> file and
update the IP forwarding setting:</para>
<programlisting>net.ipv4.ip_forward = 1</programlisting>
<programlisting language="ini">net.ipv4.ip_forward = 1</programlisting>
<para>Save the file and run this command to apply the
changes:</para>
<screen><prompt>$</prompt> <userinput>sysctl -p</userinput></screen>
@ -1373,7 +1358,7 @@ echo 'Extra user data here'</computeroutput></screen>
<filename>/etc/nova/nova.conf</filename> file
and restart the <systemitem class="service"
>nova-network</systemitem> service:</para>
<programlisting>auto_assign_floating_ip=True</programlisting>
<programlisting language="ini">auto_assign_floating_ip=True</programlisting>
<note>
<para>If you enable this option and all floating
IP addresses have already been allocated, the
@ -1470,7 +1455,7 @@ echo 'Extra user data here'</computeroutput></screen>
the instance (this is the configuration that
needs to be applied inside the image):</para>
<para><filename>/etc/network/interfaces</filename></para>
<programlisting># The loopback network interface
<programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback

@ -2062,22 +2047,22 @@ syslog_log_facility = LOG_LOCAL0</programlisting>
<filename>/etc/rsyslog.conf</filename> on the
log server host, which receives the log
files:</para>
<programlisting># provides TCP syslog reception
<programlisting language="bash"># provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 1024</programlisting>
<para>Add to <filename>/etc/rsyslog.conf</filename> a
filter rule which looks for a host name. The
example below uses
<replaceable>compute-01</replaceable> as an
example of a compute host
name:<programlisting>:hostname, isequal, "<replaceable>compute-01</replaceable>" /mnt/rsyslog/logs/compute-01.log</programlisting></para>
example of a compute host name:</para>
<programlisting language="bash">:hostname, isequal, "<replaceable>compute-01</replaceable>" /mnt/rsyslog/logs/compute-01.log</programlisting>
<para>On the compute hosts, create a file named
<filename>/etc/rsyslog.d/60-nova.conf</filename>,
with this
content.<programlisting># prevent debug from dnsmasq with the daemon.none parameter
with this content:</para>
<programlisting language="bash"># prevent debug from dnsmasq with the daemon.none parameter
*.*;auth,authpriv.none,daemon.none,local0.none -/var/log/syslog
# Specify a log level of ERROR
local0.error @@172.20.1.43:1024</programlisting></para>
local0.error @@172.20.1.43:1024</programlisting>
<para>Once you have created this file, restart your
rsyslog daemon. Error-level log messages on the
compute hosts should now be sent to your log
@ -2248,7 +2233,7 @@ HostC p2 5 10240 150
Here's an example using the EC2 API -
instance i-000015b9 that is running on
node np-rcc54:</para>
<programlisting>i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60</programlisting>
<programlisting language="bash">i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60</programlisting>
</step>
<step>
<para>You can review the status of the host by
@ -2261,7 +2246,7 @@ HostC p2 5 10240 150
can find the credentials for your database
in
<filename>/etc/nova.conf</filename>.</para>
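<para>For example, assuming a MySQL back end and a database named <literal>nova</literal> (both assumptions that depend on your deployment), you could connect with:</para>
<screen><prompt>$</prompt> <userinput>mysql -u nova -p nova</userinput></screen>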
<programlisting>SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
<programlisting language="bash">SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
*************************** 1. row ***************************
created_at: 2012-06-19 00:48:11
updated_at: 2012-07-03 00:35:11
@ -2289,7 +2274,7 @@ HostC p2 5 10240 150
host the affected VMs should move. Run the
following database command to move the VM
to np-rcc46:</para>
<programlisting>UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; </programlisting>
<programlisting language="bash">UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; </programlisting>
</step>
<step>
<para>Next, if using a hypervisor that relies
@ -3,187 +3,189 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_networking_auth">
<title>Authentication and authorization</title>
<para>Networking uses the Identity Service as the default
authentication service. When the Identity Service is
enabled, users who submit requests to the Networking
service must provide an authentication token in
<literal>X-Auth-Token</literal> request header. Users
obtain this token by authenticating with the Identity
Service endpoint. For more information about
authentication with the Identity Service, see <link
xlink:href="http://docs.openstack.org/api/openstack-identity-service/2.0/content/"
><citetitle>OpenStack Identity Service API v2.0
Reference</citetitle></link>. When the Identity
Service is enabled, it is not mandatory to specify the
tenant ID for resources in create requests because the
tenant ID is derived from the authentication token.</para>
<note>
<para>The default authorization settings only allow
administrative users to create resources on behalf of
a different tenant. Networking uses information
received from Identity to authorize user requests.
Networking handles two kinds of authorization
policies:</para>
</note>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Operation-based</emphasis>
policies specify access criteria for specific
operations, possibly with fine-grained control
over specific attributes;</para>
</listitem>
<listitem>
<para><emphasis role="bold">Resource-based</emphasis>
policies specify whether access to specific
resource is granted or not according to the
permissions configured for the resource (currently
available only for the network resource). The
actual authorization policies enforced in
Networking might vary from deployment to
deployment.</para>
</listitem>
</itemizedlist>
<para>The policy engine reads entries from the
<filename>policy.json</filename> file. The actual
location of this file might vary from distribution to
distribution. Entries can be updated while the system is
running, and no service restart is required. Every time
the policy file is updated, the policies are automatically
reloaded. Currently the only way of updating such policies
is to edit the policy file. In this section, the terms
<emphasis role="italic">policy</emphasis> and
<emphasis role="italic">rule</emphasis> refer to
objects that are specified in the same way in the policy
file. There are no syntax differences between a rule and a
policy. A policy is something that is matched directly
from the Networking policy engine. A rule is an element in
a policy, which is evaluated. For instance in
<code>create_subnet:
[["admin_or_network_owner"]]</code>, <emphasis
role="italic">create_subnet</emphasis> is a policy,
and <emphasis role="italic"
>admin_or_network_owner</emphasis> is a rule.</para>
<para>Policies are triggered by the Networking policy engine
whenever one of them matches a Networking API operation
or a specific attribute being used in a given operation.
For instance the <code>create_subnet</code> policy is
triggered every time a <code>POST /v2.0/subnets</code>
request is sent to the Networking server; on the other
hand <code>create_network:shared</code> is triggered every
time the <emphasis role="italic">shared</emphasis>
attribute is explicitly specified (and set to a value
different from its default) in a <code>POST
/v2.0/networks</code> request. It is also worth
mentioning that policies can also be related to specific
API extensions; for instance
<code>extension:provider_network:set</code> is
triggered if the attributes defined by the Provider
Network extensions are specified in an API request.</para>
<para>An authorization policy can be composed of one or more
rules. If more rules are specified, policy evaluation
succeeds if any of the rules evaluates successfully; if an
API operation matches multiple policies, then all the
policies must evaluate successfully. Also, authorization
rules are recursive. Once a rule is matched, the rule(s)
can be resolved to another rule, until a terminal rule is
reached.</para>
<para>The Networking policy engine currently defines the
following kinds of terminal rules:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based
rules</emphasis> evaluate successfully if the
user who submits the request has the specified
role. For instance <code>"role:admin"</code> is
successful if the user who submits the request is
an administrator.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Field-based rules
</emphasis>evaluate successfully if a field of the
resource specified in the current request matches
a specific value. For instance
<code>"field:networks:shared=True"</code> is
successful if the <literal>shared</literal>
attribute of the <literal>network</literal>
resource is set to true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic rules</emphasis>
compare an attribute in the resource with an
attribute extracted from the user's security
credentials and evaluates successfully if the
comparison is successful. For instance
<code>"tenant_id:%(tenant_id)s"</code> is
successful if the tenant identifier in the
resource is equal to the tenant identifier of the
user submitting the request.</para>
</listitem>
</itemizedlist>
<para>This extract is from the default
<filename>policy.json</filename> file:</para>
<programlisting language="bash">{
[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"shared": [["field:networks:shared=True"]],
[2] "default": [["rule:admin_or_owner"]],
"create_subnet": [["rule:admin_or_network_owner"]],
"get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
"update_subnet": [["rule:admin_or_network_owner"]],
"delete_subnet": [["rule:admin_or_network_owner"]],
"create_network": [],
[3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
[4] "create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [],
[5] "create_port:mac_address": [["rule:admin_or_network_owner"]],
"create_port:fixed_ips": [["rule:admin_or_network_owner"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_or_owner"]],
"delete_port": [["rule:admin_or_owner"]]
}</programlisting>
<para>[1] is a rule which evaluates successfully if the
current user is an administrator or the owner of the
resource specified in the request (tenant identifier is
equal).</para>
<para>[2] is the default policy which is always evaluated if
an API operation does not match any of the policies in
<filename>policy.json</filename>.</para>
<para>[3] This policy evaluates successfully if either
<emphasis role="italic">admin_or_owner</emphasis>, or
<emphasis role="italic">shared</emphasis> evaluates
successfully.</para>
<para>[4] This policy restricts the ability to manipulate the
<emphasis role="italic">shared</emphasis> attribute
for a network to administrators only.</para>
<para>[5] This policy restricts the ability to manipulate the
<emphasis role="italic">mac_address</emphasis>
attribute for a port only to administrators and the owner
of the network where the port is attached.</para>
<para>In some cases, some operations are restricted to
administrators only. This example shows you how to modify
a policy file to permit tenants to define networks and see
their resources and permit administrative users to perform
all other operations:</para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}</programlisting>
</section>
<title>Authentication and authorization</title>
<para>Networking uses the Identity Service as the default
authentication service. When the Identity Service is enabled,
users who submit requests to the Networking service must
provide an authentication token in
<literal>X-Auth-Token</literal> request header. Users
obtain this token by authenticating with the Identity Service
endpoint. For more information about authentication with the
Identity Service, see <link
xlink:href="http://docs.openstack.org/api/openstack-identity-service/2.0/content/"
><citetitle>OpenStack Identity Service API v2.0
Reference</citetitle></link>. When the Identity
Service is enabled, it is not mandatory to specify the tenant
ID for resources in create requests because the tenant ID is
derived from the authentication token.</para>
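<para>As an illustrative sketch only (the endpoint host and the token variable are assumptions; only the <literal>/v2.0/networks</literal> path is taken from this section), a request carrying the token might look like:</para>
<screen><prompt>$</prompt> <userinput>curl -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/networks</userinput></screen>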
<note>
<para>The default authorization settings only allow
administrative users to create resources on behalf of a
different tenant. Networking uses information received
from Identity to authorize user requests. Networking
handles two kinds of authorization policies:</para>
</note>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Operation-based</emphasis>
policies specify access criteria for specific
operations, possibly with fine-grained control over
specific attributes;</para>
</listitem>
<listitem>
<para><emphasis role="bold">Resource-based</emphasis>
policies specify whether access to specific resource
is granted or not according to the permissions
configured for the resource (currently available only
for the network resource). The actual authorization
policies enforced in Networking might vary from
deployment to deployment.</para>
</listitem>
</itemizedlist>
<para>The policy engine reads entries from the
<filename>policy.json</filename> file. The actual location
of this file might vary from distribution to distribution.
Entries can be updated while the system is running, and no
service restart is required. Every time the policy file is
updated, the policies are automatically reloaded. Currently
the only way of updating such policies is to edit the policy
file. In this section, the terms <emphasis role="italic"
>policy</emphasis> and <emphasis role="italic"
>rule</emphasis> refer to objects that are specified in
the same way in the policy file. There are no syntax
differences between a rule and a policy. A policy is something
that is matched directly from the Networking policy engine. A
rule is an element in a policy, which is evaluated. For
instance in <code>create_subnet:
[["admin_or_network_owner"]]</code>, <emphasis
role="italic">create_subnet</emphasis> is a policy, and
<emphasis role="italic">admin_or_network_owner</emphasis>
is a rule.</para>
<para>Policies are triggered by the Networking policy engine
whenever one of them matches a Networking API operation or a
specific attribute being used in a given operation. For
instance the <code>create_subnet</code> policy is triggered
every time a <code>POST /v2.0/subnets</code> request is sent
to the Networking server; on the other hand
<code>create_network:shared</code> is triggered every time
the <emphasis role="italic">shared</emphasis> attribute is
explicitly specified (and set to a value different from its
default) in a <code>POST /v2.0/networks</code> request. It is
also worth mentioning that policies can also be related to
specific API extensions; for instance
<code>extension:provider_network:set</code> is
triggered if the attributes defined by the Provider Network
extensions are specified in an API request.</para>
<para>An authorization policy can be composed of one or more
rules. If more rules are specified, policy evaluation succeeds
if any of the rules evaluates successfully; if an API
operation matches multiple policies, then all the policies
must evaluate successfully. Also, authorization rules are
recursive. Once a rule is matched, the rule(s) can be resolved
to another rule, until a terminal rule is reached.</para>
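<para>For example (restating two entries from the extract below), the <code>create_subnet</code> policy resolves to the <code>admin_or_network_owner</code> rule, which in turn succeeds if either of its terminal rules matches:</para>
<programlisting language="json">"admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
"create_subnet": [["rule:admin_or_network_owner"]]</programlisting>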
<para>The Networking policy engine currently defines the following
kinds of terminal rules:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based rules</emphasis>
evaluate successfully if the user who submits the
request has the specified role. For instance
<code>"role:admin"</code> is successful if the
user who submits the request is an
administrator.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Field-based rules
</emphasis>evaluate successfully if a field of the
resource specified in the current request matches a
specific value. For instance
<code>"field:networks:shared=True"</code> is
successful if the <literal>shared</literal> attribute
of the <literal>network</literal> resource is set to
true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic rules</emphasis>
compare an attribute in the resource with an attribute
extracted from the user's security credentials and
evaluates successfully if the comparison is
successful. For instance
<code>"tenant_id:%(tenant_id)s"</code> is
successful if the tenant identifier in the resource is
equal to the tenant identifier of the user submitting
the request.</para>
</listitem>
</itemizedlist>
<para>This extract is from the default
<filename>policy.json</filename> file:</para>

<programlistingco>
<areaspec>
<area xml:id="networking_auth.json.rule"
units="linecolumn" coords="2 23"/>
<area xml:id="networking_auth.json.policy1"
units="linecolumn" coords="31 16"/>
<area xml:id="networking_auth.json.policy2"
units="linecolumn" coords="62 20"/>
<area xml:id="networking_auth.json.policy3"
units="linecolumn" coords="70 30"/>
<area xml:id="networking_auth.json.policy4"
units="linecolumn" coords="88 32"/>
</areaspec>
<programlisting language="json"><xi:include href="../common/samples/networking_auth.json" parse="text"/></programlisting>
</programlistingco>
<calloutlist>
<callout arearefs="networking_auth.json.rule">
<para>A rule that evaluates successfully if the current
user is an administrator or the owner of the resource
specified in the request (tenant identifier is
equal).</para>
</callout>
<callout arearefs="networking_auth.json.policy1">
<para>The default policy that is always evaluated if an
API operation does not match any of the policies in
<filename>policy.json</filename>.</para>
</callout>
<callout arearefs="networking_auth.json.policy2">
<para>This policy evaluates successfully if either
<emphasis role="italic">admin_or_owner</emphasis>,
or <emphasis role="italic">shared</emphasis> evaluates
successfully.</para>
</callout>
<callout arearefs="networking_auth.json.policy3">
<para>This policy restricts the ability to manipulate the
<emphasis role="italic">shared</emphasis>
attribute for a network to administrators only.</para>
</callout>
<callout arearefs="networking_auth.json.policy4">
<para>This policy restricts the ability to manipulate the
<emphasis role="italic">mac_address</emphasis>
attribute for a port only to administrators and the
owner of the network where the port is
attached.</para>
</callout>
</calloutlist>
<para>In some cases, some operations are restricted to
administrators only. This example shows you how to modify a
policy file to permit tenants to define networks and see their
resources and permit administrative users to perform all other
operations:</para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}</programlisting>
</section>
doc/common/samples/authentication.json (new file)
@ -0,0 +1,55 @@
{
    "context_is_admin": [["role:admin"]],
    "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
    "default": [["rule:admin_or_owner"]],
    "admin_api": [["is_admin:True"]],
    "volume:create": [],
    "volume:get_all": [],
    "volume:get_volume_metadata": [],
    "volume:get_snapshot": [],
    "volume:get_all_snapshots": [],
    "volume_extension:types_manage": [["rule:admin_api"]],
    "volume_extension:types_extra_specs": [["rule:admin_api"]],
    "...": [["...:..."]]
}
@ -245,5 +245,4 @@
"network:create_private_dns_domain":"",
"network:create_public_dns_domain":"",
"network:delete_dns_domain":""
}

}
doc/common/samples/list_metadata.json (new file)
@ -0,0 +1,14 @@
{
   "uuid":"d8e02d56-2648-49a3-bf97-6be8f1204f38",
   "availability_zone":"nova",
   "hostname":"test.novalocal",
   "launch_index":0,
   "meta":{
      "priority":"low",
      "role":"webserver"
   },
   "public_keys":{
"mykey":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"
   },
   "name":"test"
}
doc/common/samples/networking_auth.json (new file)
@ -0,0 +1,113 @@
{
    "admin_or_owner":[
        [
            "role:admin"
        ],
        [
            "tenant_id:%(tenant_id)s"
        ]
    ],
    "admin_or_network_owner":[
        [
            "role:admin"
        ],
        [
            "tenant_id:%(network_tenant_id)s"
        ]
    ],
    "admin_only":[
        [
            "role:admin"
        ]
    ],
    "regular_user":[

    ],
    "shared":[
        [
            "field:networks:shared=True"
        ]
    ],
    "default":[
        [
            "rule:admin_or_owner"
        ]
    ],
    "create_subnet":[
        [
            "rule:admin_or_network_owner"
        ]
    ],
    "get_subnet":[
        [
            "rule:admin_or_owner"
        ],
        [
            "rule:shared"
        ]
    ],
    "update_subnet":[
        [
            "rule:admin_or_network_owner"
        ]
    ],
    "delete_subnet":[
        [
            "rule:admin_or_network_owner"
        ]
    ],
    "create_network":[

    ],
    "get_network":[
        [
            "rule:admin_or_owner"
        ],
        [
            "rule:shared"
        ]
    ],
    "create_network:shared":[
        [
            "rule:admin_only"
        ]
    ],
    "update_network":[
        [
            "rule:admin_or_owner"
        ]
    ],
    "delete_network":[
        [
            "rule:admin_or_owner"
        ]
    ],
    "create_port":[

    ],
    "create_port:mac_address":[
        [
            "rule:admin_or_network_owner"
        ]
    ],
    "create_port:fixed_ips":[
        [
            "rule:admin_or_network_owner"
        ]
    ],
    "get_port":[
        [
            "rule:admin_or_owner"
        ]
    ],
    "update_port":[
        [
            "rule:admin_or_owner"
        ]
    ],
    "delete_port":[
        [
            "rule:admin_or_owner"
        ]
    ]
}
doc/common/samples/restrict_roles.json (new file)
@ -0,0 +1,346 @@
{
    "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
    "default": [["rule:admin_or_owner"]],
    "compute:create": ["role:compute-user"],
    "compute:create:attach_network": ["role:compute-user"],
    "compute:create:attach_volume": ["role:compute-user"],
    "compute:get_all": ["role:compute-user"],
    "compute:unlock_override": ["rule:admin_api"],
    "admin_api": [["role:admin"]],
    "compute_extension:accounts": [["rule:admin_api"]],
    "compute_extension:admin_actions": [["rule:admin_api"]],
    "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:lock": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:unlock": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
    "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
    "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
    "compute_extension:admin_actions:migrate": [["rule:admin_api"]],
    "compute_extension:aggregates": [["rule:admin_api"]],
    "compute_extension:certificates": ["role:compute-user"],
    "compute_extension:cloudpipe": [["rule:admin_api"]],
    "compute_extension:console_output": ["role:compute-user"],
    "compute_extension:consoles": ["role:compute-user"],
    "compute_extension:createserverext": ["role:compute-user"],
    "compute_extension:deferred_delete": ["role:compute-user"],
    "compute_extension:disk_config": ["role:compute-user"],
    "compute_extension:evacuate": [["rule:admin_api"]],
    "compute_extension:extended_server_attributes": [["rule:admin_api"]],
    "compute_extension:extended_status": ["role:compute-user"],
    "compute_extension:flavorextradata": ["role:compute-user"],
    "compute_extension:flavorextraspecs": ["role:compute-user"],
    "compute_extension:flavormanage": [["rule:admin_api"]],
    "compute_extension:floating_ip_dns": ["role:compute-user"],
    "compute_extension:floating_ip_pools": ["role:compute-user"],
    "compute_extension:floating_ips": ["role:compute-user"],
    "compute_extension:hosts": [["rule:admin_api"]],
    "compute_extension:keypairs": ["role:compute-user"],
    "compute_extension:multinic": ["role:compute-user"],
    "compute_extension:networks": [["rule:admin_api"]],
    "compute_extension:quotas": ["role:compute-user"],
    "compute_extension:rescue": ["role:compute-user"],
    "compute_extension:security_groups": ["role:compute-user"],
    "compute_extension:server_action_list": [["rule:admin_api"]],
    "compute_extension:server_diagnostics": [["rule:admin_api"]],
    "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]],
    "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]],
    "compute_extension:users": [["rule:admin_api"]],
    "compute_extension:virtual_interfaces": ["role:compute-user"],
    "compute_extension:virtual_storage_arrays": ["role:compute-user"],
    "compute_extension:volumes": ["role:compute-user"],
    "compute_extension:volume_attachments:index": ["role:compute-user"],
    "compute_extension:volume_attachments:show": ["role:compute-user"],
    "compute_extension:volume_attachments:create": ["role:compute-user"],
    "compute_extension:volume_attachments:delete": ["role:compute-user"],
    "compute_extension:volumetypes": ["role:compute-user"],
    "volume:create": ["role:compute-user"],
    "volume:get_all": ["role:compute-user"],
    "volume:get_volume_metadata": ["role:compute-user"],
    "volume:get_snapshot": ["role:compute-user"],
    "volume:get_all_snapshots": ["role:compute-user"],
    "network:get_all_networks": ["role:compute-user"],
    "network:get_network": ["role:compute-user"],
    "network:delete_network": ["role:compute-user"],
    "network:disassociate_network": ["role:compute-user"],
    "network:get_vifs_by_instance": ["role:compute-user"],
    "network:allocate_for_instance": ["role:compute-user"],
    "network:deallocate_for_instance": ["role:compute-user"],
    "network:validate_networks": ["role:compute-user"],
    "network:get_instance_uuids_by_ip_filter": ["role:compute-user"],
    "network:get_floating_ip": ["role:compute-user"],
    "network:get_floating_ip_pools": ["role:compute-user"],
    "network:get_floating_ip_by_address": ["role:compute-user"],
    "network:get_floating_ips_by_project": ["role:compute-user"],
    "network:get_floating_ips_by_fixed_address": ["role:compute-user"],
    "network:allocate_floating_ip": ["role:compute-user"],
    "network:deallocate_floating_ip": ["role:compute-user"],
    "network:associate_floating_ip": ["role:compute-user"],
    "network:disassociate_floating_ip": ["role:compute-user"],
    "network:get_fixed_ip": ["role:compute-user"],
    "network:add_fixed_ip_to_instance": ["role:compute-user"],
    "network:remove_fixed_ip_from_instance": ["role:compute-user"],
    "network:add_network_to_project": ["role:compute-user"],
    "network:get_instance_nw_info": ["role:compute-user"],
    "network:get_dns_domains": ["role:compute-user"],
    "network:add_dns_entry": ["role:compute-user"],
    "network:modify_dns_entry": ["role:compute-user"],
    "network:delete_dns_entry": ["role:compute-user"],
    "network:get_dns_entries_by_address": ["role:compute-user"],
    "network:get_dns_entries_by_name": ["role:compute-user"],
    "network:create_private_dns_domain": ["role:compute-user"],
    "network:create_public_dns_domain": ["role:compute-user"],
    "network:delete_dns_domain": ["role:compute-user"]
}
doc/common/samples/restrict_roles2.json (new file)
@ -0,0 +1,346 @@
|
||||
{
|
||||
"admin_or_owner":[
|
||||
[
|
||||
"role:admin"
|
||||
],
|
||||
[
|
||||
"project_id:%(project_id)s"
|
||||
]
|
||||
],
|
||||
"default":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute:create":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute:create:attach_network":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute:create:attach_volume":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute:get_all":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute:unlock_override":[
|
||||
"rule:admin_api"
|
||||
],
|
||||
"admin_api":[
|
||||
[
|
||||
"role:admin"
|
||||
]
|
||||
],
|
||||
"compute_extension:accounts":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:pause":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:unpause":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:suspend":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:resume":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:lock":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:unlock":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:resetNetwork":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:injectNetworkInfo":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:createBackup":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:migrateLive":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:admin_actions:migrate":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:aggregates":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:certificates":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:cloudpipe":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:console_output":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:consoles":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:createserverext":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:deferred_delete":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:disk_config":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:evacuate":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:extended_server_attributes":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:extended_status":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:flavorextradata":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:flavorextraspecs":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:flavormanage":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:floating_ip_dns":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:floating_ip_pools":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:floating_ips":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:hosts":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:keypairs":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:multinic":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:networks":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:quotas":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:rescue":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:security_groups":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:server_action_list":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:server_diagnostics":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:simple_tenant_usage:show":[
|
||||
[
|
||||
"rule:admin_or_owner"
|
||||
]
|
||||
],
|
||||
"compute_extension:simple_tenant_usage:list":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:users":[
|
||||
[
|
||||
"rule:admin_api"
|
||||
]
|
||||
],
|
||||
"compute_extension:virtual_interfaces":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:virtual_storage_arrays":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:volumes":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:volume_attachments:index":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:volume_attachments:show":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:volume_attachments:create":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:volume_attachments:delete":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"compute_extension:volumetypes":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"volume:create":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"volume:get_all":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"volume:get_volume_metadata":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"volume:get_snapshot":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"volume:get_all_snapshots":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_all_networks":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_network":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:delete_network":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:disassociate_network":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_vifs_by_instance":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:allocate_for_instance":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:deallocate_for_instance":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:validate_networks":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_instance_uuids_by_ip_filter":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_floating_ip":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_floating_ip_pools":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_floating_ip_by_address":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_floating_ips_by_project":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_floating_ips_by_fixed_address":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:allocate_floating_ip":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:deallocate_floating_ip":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:associate_floating_ip":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:disassociate_floating_ip":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_fixed_ip":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:add_fixed_ip_to_instance":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:remove_fixed_ip_from_instance":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:add_network_to_project":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_instance_nw_info":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_dns_domains":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:add_dns_entry":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:modify_dns_entry":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:delete_dns_entry":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_dns_entries_by_address":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:get_dns_entries_by_name":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:create_private_dns_domain":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:create_public_dns_domain":[
|
||||
"role:compute-user"
|
||||
],
|
||||
"network:delete_dns_domain":[
|
||||
"role:compute-user"
|
||||
]
|
||||
}
|
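Since the whole point of this change is that the sample policy files must parse, a quick way to sanity-check any of the doc/common/samples/*.json files is Python's standard json module. This is only an illustrative sketch; the default file name and the expectation that every rule maps to a list are assumptions, not part of the change:

    import json
    import sys

    # Hypothetical helper: load a sample policy file and confirm every rule
    # value is a list, as in the oslo-style policy samples added by this change.
    path = sys.argv[1] if len(sys.argv) > 1 else "doc/common/samples/restrict_roles2.json"
    with open(path) as handle:
        policy = json.load(handle)  # raises ValueError if the JSON is invalid
    odd = [rule for rule, value in policy.items() if not isinstance(value, list)]
    print("parsed %d rules, %d with unexpected value types" % (len(policy), len(odd)))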
13 doc/common/samples/server-scheduler-hints.json Normal file
@ -0,0 +1,13 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "different_host":[
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}
10 doc/common/samples/server-scheduler-hints2.json Normal file
@ -0,0 +1,10 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "query":"[>=,$free_ram_mb,1024]"
    }
}
13 doc/common/samples/server-scheduler-hints3.json Normal file
@ -0,0 +1,13 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "same_host":[
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}
11 doc/common/samples/server-scheduler-hints4.json Normal file
@ -0,0 +1,11 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "build_near_host_ip":"192.168.1.1",
        "cidr":"24"
    }
}
13 doc/common/samples/token.json Normal file
@ -0,0 +1,13 @@
{
    "token":{
        "expires":"2013-06-26T16:52:50Z",
        "id":"MIIKXAY...",
        "issued_at":"2013-06-25T16:52:50.622502",
        "tenant":{
            "description":null,
            "enabled":true,
            "id":"912426c8f4c04fb0a07d2547b0704185",
            "name":"demo"
        }
    }
}
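All of the sample files added above (the scheduler-hint bodies and token.json) are later pulled into the guides with xi:include parse="text", so each one has to stand on its own as valid JSON. A minimal check over the whole samples directory (the path is assumed from the file names in this change, purely illustrative):

    import glob
    import json

    # Illustrative only: bulk-parse the sample files and report any that fail.
    for path in sorted(glob.glob("doc/common/samples/*.json")):
        try:
            with open(path) as handle:
                json.load(handle)
            print("OK      %s" % path)
        except ValueError as err:
            print("INVALID %s: %s" % (path, err))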
@ -109,108 +109,5 @@
<programlisting language="json">"volume:create": ["role:compute-user"],</programlisting>
<para>To restrict all Compute service requests to require this
role, the resulting file would look like:</para>
<programlisting language="json"><?db-font-size 50%?>{
"admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],

"compute:create": ["role":"compute-user"],
"compute:create:attach_network": ["role":"compute-user"],
"compute:create:attach_volume": ["role":"compute-user"],
"compute:get_all": ["role":"compute-user"],
"compute:unlock_override": ["rule":"admin_api"],

"admin_api": [["role:admin"]],
"compute_extension:accounts": [["rule:admin_api"]],
"compute_extension:admin_actions": [["rule:admin_api"]],
"compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:lock": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:unlock": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
"compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
"compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
"compute_extension:admin_actions:migrate": [["rule:admin_api"]],
"compute_extension:aggregates": [["rule:admin_api"]],
"compute_extension:certificates": ["role":"compute-user"],
"compute_extension:cloudpipe": [["rule:admin_api"]],
"compute_extension:console_output": ["role":"compute-user"],
"compute_extension:consoles": ["role":"compute-user"],
"compute_extension:createserverext": ["role":"compute-user"],
"compute_extension:deferred_delete": ["role":"compute-user"],
"compute_extension:disk_config": ["role":"compute-user"],
"compute_extension:evacuate": [["rule:admin_api"]],
"compute_extension:extended_server_attributes": [["rule:admin_api"]],
"compute_extension:extended_status": ["role":"compute-user"],
"compute_extension:flavorextradata": ["role":"compute-user"],
"compute_extension:flavorextraspecs": ["role":"compute-user"],
"compute_extension:flavormanage": [["rule:admin_api"]],
"compute_extension:floating_ip_dns": ["role":"compute-user"],
"compute_extension:floating_ip_pools": ["role":"compute-user"],
"compute_extension:floating_ips": ["role":"compute-user"],
"compute_extension:hosts": [["rule:admin_api"]],
"compute_extension:keypairs": ["role":"compute-user"],
"compute_extension:multinic": ["role":"compute-user"],
"compute_extension:networks": [["rule:admin_api"]],
"compute_extension:quotas": ["role":"compute-user"],
"compute_extension:rescue": ["role":"compute-user"],
"compute_extension:security_groups": ["role":"compute-user"],
"compute_extension:server_action_list": [["rule:admin_api"]],
"compute_extension:server_diagnostics": [["rule:admin_api"]],
"compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]],
"compute_extension:simple_tenant_usage:list": [["rule:admin_api"]],
"compute_extension:users": [["rule:admin_api"]],
"compute_extension:virtual_interfaces": ["role":"compute-user"],
"compute_extension:virtual_storage_arrays": ["role":"compute-user"],
"compute_extension:volumes": ["role":"compute-user"],
"compute_extension:volume_attachments:index": ["role":"compute-user"],
"compute_extension:volume_attachments:show": ["role":"compute-user"],
"compute_extension:volume_attachments:create": ["role":"compute-user"],
"compute_extension:volume_attachments:delete": ["role":"compute-user"],
"compute_extension:volumetypes": ["role":"compute-user"],

"volume:create": ["role":"compute-user"],
"volume:get_all": ["role":"compute-user"],
"volume:get_volume_metadata": ["role":"compute-user"],
"volume:get_snapshot": ["role":"compute-user"],
"volume:get_all_snapshots": ["role":"compute-user"],

"network:get_all_networks": ["role":"compute-user"],
"network:get_network": ["role":"compute-user"],
"network:delete_network": ["role":"compute-user"],
"network:disassociate_network": ["role":"compute-user"],
"network:get_vifs_by_instance": ["role":"compute-user"],
"network:allocate_for_instance": ["role":"compute-user"],
"network:deallocate_for_instance": ["role":"compute-user"],
"network:validate_networks": ["role":"compute-user"],
"network:get_instance_uuids_by_ip_filter": ["role":"compute-user"],

"network:get_floating_ip": ["role":"compute-user"],
"network:get_floating_ip_pools": ["role":"compute-user"],
"network:get_floating_ip_by_address": ["role":"compute-user"],
"network:get_floating_ips_by_project": ["role":"compute-user"],
"network:get_floating_ips_by_fixed_address": ["role":"compute-user"],
"network:allocate_floating_ip": ["role":"compute-user"],
"network:deallocate_floating_ip": ["role":"compute-user"],
"network:associate_floating_ip": ["role":"compute-user"],
"network:disassociate_floating_ip": ["role":"compute-user"],

"network:get_fixed_ip": ["role":"compute-user"],
"network:add_fixed_ip_to_instance": ["role":"compute-user"],
"network:remove_fixed_ip_from_instance": ["role":"compute-user"],
"network:add_network_to_project": ["role":"compute-user"],
"network:get_instance_nw_info": ["role":"compute-user"],

"network:get_dns_domains": ["role":"compute-user"],
"network:add_dns_entry": ["role":"compute-user"],
"network:modify_dns_entry": ["role":"compute-user"],
"network:delete_dns_entry": ["role":"compute-user"],
"network:get_dns_entries_by_address": ["role":"compute-user"],
"network:get_dns_entries_by_name": ["role":"compute-user"],
"network:create_private_dns_domain": ["role":"compute-user"],
"network:create_public_dns_domain": ["role":"compute-user"],
"network:delete_dns_domain": ["role":"compute-user"]
}</programlisting>
<programlisting language="json"><xi:include href="../common/samples/restrict_roles2.json" parse="text"/></programlisting>
</section>
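For reference, the construct removed above is exactly what made the old listing fail validation: the form ["role":"compute-user"] puts a key/value pair inside a JSON array, which no JSON parser accepts, while the new samples use the single string "role:compute-user" instead. A short, purely illustrative comparison:

    import json

    old_form = '{"compute:create": ["role":"compute-user"]}'   # removed above; not valid JSON
    new_form = '{"compute:create": ["role:compute-user"]}'     # form used in the new samples
    try:
        json.loads(old_form)
    except ValueError as err:
        print("old form rejected:", err)
    print("new form parsed:", json.loads(new_form))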
@ -2,6 +2,7 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml stop-chunking?>
<title>EMC SMI-S iSCSI driver</title>
<para>The EMC SMI-S iSCSI driver, which is based on the iSCSI
driver, can create, delete, attach, and detach volumes. It can
@ -12,8 +13,8 @@
HTTP.</para>
<para>The EMC CIM Object Manager (ECOM) is packaged with the EMC
SMI-S provider. It is a CIM server that enables CIM clients to
perform CIM operations over HTTP by using SMI-S in the back-end for
EMC storage operations.</para>
perform CIM operations over HTTP by using SMI-S in the
back-end for EMC storage operations.</para>
<para>The EMC SMI-S Provider supports the SNIA Storage Management
Initiative (SMI), an ANSI standard for storage management. It
supports VMAX and VNX storage systems.</para>
@ -29,8 +30,7 @@
</section>
<section xml:id="emc-supported-ops">
<title>Supported operations</title>
<para>VMAX and
VNX arrays support these operations:</para>
<para>VMAX and VNX arrays support these operations:</para>
<itemizedlist>
<listitem>
<para>Create volume</para>
@ -73,9 +73,9 @@
<procedure>
<title>To set up the EMC SMI-S iSCSI driver</title>
<step>
<para>Install the <package>python-pywbem</package> package for your
distribution. See <xref linkend="install-pywbem"
/>.</para>
<para>Install the <package>python-pywbem</package>
package for your distribution. See <xref
linkend="install-pywbem"/>.</para>
</step>
<step>
<para>Download SMI-S from PowerLink and install it.
@ -93,11 +93,12 @@
</step>
</procedure>
<section xml:id="install-pywbem">
<title>Install the <package>python-pywbem</package> package</title>
<title>Install the <package>python-pywbem</package>
package</title>
<procedure>
<step>
<para>Install the <package>python-pywbem</package> package for your
distribution:</para>
<para>Install the <package>python-pywbem</package>
package for your distribution:</para>
<itemizedlist>
<listitem>
<para>On Ubuntu:</para>
@ -119,14 +120,16 @@
<title>Set up SMI-S</title>
<para>You can install SMI-S on a non-OpenStack host.
Supported platforms include different flavors of
Windows, Red Hat, and SUSE Linux. The host can be either a
physical server or VM hosted by an ESX server. See
the EMC SMI-S Provider release notes for supported
platforms and installation instructions.</para>
Windows, Red Hat, and SUSE Linux. The host can be
either a physical server or VM hosted by an ESX
server. See the EMC SMI-S Provider release notes for
supported platforms and installation
instructions.</para>
<note>
<para>You must discover storage arrays on the SMI-S
server before you can use the Cinder driver. Follow
instructions in the SMI-S release notes.</para>
server before you can use the Cinder driver.
Follow instructions in the SMI-S release
notes.</para>
</note>
<para>SMI-S is usually installed at
<filename>/opt/emc/ECIM/ECOM/bin</filename> on
@ -146,29 +149,33 @@
<title>Register with VNX</title>
<para>To export a VNX volume to a Compute node, you must
register the node with VNX.</para>
<para>On the Compute node <literal>1.1.1.1</literal>, run these commands (assume <literal>10.10.61.35</literal>
<para>On the Compute node <literal>1.1.1.1</literal>, run
these commands (assume <literal>10.10.61.35</literal>
is the iscsi target):</para>
<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m discovery -t st -p <literal>10.10.61.35</literal></userinput></screen>
<screen><prompt>$</prompt> <userinput>cd /etc/iscsi</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput></screen>
<screen><prompt>$</prompt> <userinput>iscsiadm -m node</userinput></screen>
<para>Log in to VNX from the Compute node by using the target
corresponding to the SPA port:</para>
<para>Log in to VNX from the Compute node by using the
target corresponding to the SPA port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T <literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal> -p <literal>10.10.61.35</literal> -l</userinput></screen>
<para>Assume
<para>Assume that
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
is the initiator name of the Compute node. Log in to
Unisphere, go to
<literal>VNX00000</literal>->Hosts->Initiators,
refresh and wait until initiator
refresh, and wait until initiator
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
with SP Port <literal>A-8v0</literal> appears.</para>
<para>Click <guibutton>Register</guibutton>, select <guilabel>CLARiiON/VNX</guilabel>,
and enter the <literal>myhost1</literal> host name and <literal>myhost1</literal>
IP address. Click <guibutton>Register</guibutton>.
Now the <literal>1.1.1.1</literal> host appears under
<guimenu>Hosts</guimenu> <guimenuitem>Host List</guimenuitem> as well.</para>
<para>Click <guibutton>Register</guibutton>, select
<guilabel>CLARiiON/VNX</guilabel>, and enter the
<literal>myhost1</literal> host name and
<literal>myhost1</literal> IP address. Click
<guibutton>Register</guibutton>. Now the
<literal>1.1.1.1</literal> host appears under
<guimenu>Hosts</guimenu>
<guimenuitem>Host List</guimenuitem> as well.</para>
<para>Log out of VNX on the Compute node:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen>
<para>Log in to VNX from the Compute node using the target
@ -184,7 +191,7 @@
<para>For VMAX, you must set up the Unisphere for VMAX
server. On the Unisphere for VMAX server, create
initiator group, storage group, and port group and put
them in a masking view. Initiator group contains the
them in a masking view. initiator group contains the
initiator names of the OpenStack hosts. Storage group
must have at least six gatekeepers.</para>
</section>
@ -219,37 +226,23 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
change.</para>
<para>For VMAX, add the following lines to the XML
file:</para>
<programlisting language="xml"><?xml version='1.0' encoding='UTF-8'?>
<EMC>
<StorageType>xxxx</StorageType>
<MaskingView>xxxx</MaskingView>
<EcomServerIp>x.x.x.x</EcomServerIp>
<EcomServerPort>xxxx</EcomServerPort>
<EcomUserName>xxxxxxxx</EcomUserName>
<EcomPassword>xxxxxxxx</EcomPassword>
</EMC<</programlisting>
<programlisting language="xml"><xi:include href="samples/emc-vmax.xml" parse="text"/></programlisting>
<para>For VNX, add the following lines to the XML
file:</para>
<programlisting language="xml"><?xml version='1.0' encoding='UTF-8'?>
<EMC>
<StorageType>xxxx</StorageType>
<EcomServerIp>x.x.x.x</EcomServerIp>
<EcomServerPort>xxxx</EcomServerPort>
<EcomUserName>xxxxxxxx</EcomUserName>
<EcomPassword>xxxxxxxx</EcomPassword>
</EMC<</programlisting>
<programlisting language="xml"><xi:include href="samples/emc-vnx.xml" parse="text"/></programlisting>
<para>To attach VMAX volumes to an OpenStack VM, you must
create a Masking View by using Unisphere for VMAX. The
Masking View must have an Initiator Group that
create a masking view by using Unisphere for VMAX. The
masking view must have an initiator group that
contains the initiator of the OpenStack compute node
that hosts the VM.</para>
<para>StorageType is the thin pool where the user wants to
create the volume from. Only thin LUNs are supported
by the plug-in. Thin pools can be created using
Unisphere for VMAX and VNX.</para>
<para>EcomServerIp and EcomServerPort are the IP address
and port number of the ECOM server which is packaged
with SMI-S. EcomUserName and EcomPassword are
<para><parameter>StorageType</parameter> is the thin pool
where the user wants to create the volume from. Only
thin LUNs are supported by the plug-in. Thin pools can
be created using Unisphere for VMAX and VNX.</para>
<para><parameter>EcomServerIp</parameter> and
<parameter>EcomServerPort</parameter> are the IP
address and port number of the ECOM server which is
packaged with SMI-S. EcomUserName and EcomPassword are
credentials for the ECOM server.</para>
</section>
</section>
@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
<StorageType>xxxx</StorageType>
<MaskingView>xxxx</MaskingView>
<EcomServerIp>x.x.x.x</EcomServerIp>
<EcomServerPort>xxxx</EcomServerPort>
<EcomUserName>xxxxxxxx</EcomUserName>
<EcomPassword>xxxxxxxx</EcomPassword>
</EMC>
@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
<StorageType>xxxx</StorageType>
<EcomServerIp>x.x.x.x</EcomServerIp>
<EcomServerPort>xxxx</EcomServerPort>
<EcomUserName>xxxxxxxx</EcomUserName>
<EcomPassword>xxxxxxxx</EcomPassword>
</EMC>
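The two hunks above are the new stand-alone samples that the driver section now includes as samples/emc-vmax.xml and samples/emc-vnx.xml; unlike the removed inline listings, which ended with the malformed </EMC<, they are well-formed XML. A small illustrative check with Python's standard library (the path is an assumption about where the file sits in a checkout):

    import xml.etree.ElementTree as ET

    root = ET.parse("doc/common/samples/emc-vmax.xml").getroot()  # assumed path
    # The driver section documents these elements; report which are present.
    for tag in ("StorageType", "MaskingView", "EcomServerIp",
                "EcomServerPort", "EcomUserName", "EcomPassword"):
        node = root.find(tag)
        print(tag, "->", node.text if node is not None else "missing")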
@ -5,17 +5,22 @@
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<?dbhtml stop-chunking?>
<title>Scheduling</title>
<para>Compute uses the <systemitem class="service">nova-scheduler</systemitem> service to
determine how to dispatch compute requests. For example, the <systemitem
class="service">nova-scheduler</systemitem> service determines which host a VM should
launch on. The term <firstterm>host</firstterm> in the context of filters means a physical node that has a
<systemitem class="service">nova-compute</systemitem> service running on it.
You can configure the scheduler through a variety of options.</para>
<para>Compute is configured with the following default scheduler options:</para>
<programlisting language="ini">scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
<para>Compute uses the <systemitem class="service"
>nova-scheduler</systemitem> service to determine how to
dispatch compute and volume requests. For example, the
<systemitem class="service">nova-scheduler</systemitem>
service determines which host a VM should launch on. The term
<firstterm>host</firstterm> in the context of filters
means a physical node that has a <systemitem class="service"
>nova-compute</systemitem> service running on it. You can
configure the scheduler through a variety of options.</para>
<para>Compute is configured with the following default scheduler
options:</para>
<programlisting language="ini">scheduler_driver=nova.scheduler.multi.MultiScheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter</programlisting>
<para>By default, the scheduler_driver is configured as a filter
@ -40,16 +45,22 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi
(<literal>ComputeFilter</literal>).</para>
</listitem>
<listitem>
<para>Satisfy the extra specs associated with the instance type
(<literal>ComputeCapabilitiesFilter</literal>).</para>
<para>Satisfy the extra specs associated with the instance
type
(<literal>ComputeCapabilitiesFilter</literal>).</para>
</listitem>
<listitem>
<para>Satisfy any architecture, hypervisor type, or virtual
machine mode properties specified on the instance's image
properties.
<para>Satisfy any architecture, hypervisor type, or
virtual machine mode properties specified on the
instance's image properties.
(<literal>ImagePropertiesFilter</literal>).</para>
</listitem>
</itemizedlist>
<para>For information on the volume scheduler, refer the Block
Storage section of <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/managing-volumes.html">
<citetitle>OpenStack Cloud Administrator
Guide</citetitle></link> for information.</para>
<section xml:id="filter-scheduler">
<title>Filter scheduler</title>
<para>The Filter Scheduler
@ -80,7 +91,7 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi
</imageobject>
</mediaobject>
</figure>
</para>
</para>
<para>The <literal>scheduler_available_filters</literal>
configuration option in <filename>nova.conf</filename>
provides the Compute service with the list of the filters
@ -100,9 +111,10 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
<para>The <literal>scheduler_default_filters</literal>
configuration option in <filename>nova.conf</filename>
defines the list of filters that are applied by the
<systemitem class="service">nova-scheduler</systemitem> service. As
mentioned, the default filters are:</para>
<programlisting language="ini">scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter</programlisting>
<systemitem class="service"
>nova-scheduler</systemitem> service. As mentioned,
the default filters are:</para>
<programlisting language="ini">scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter</programlisting>
<para>The following sections describe the available
filters.</para>
<section xml:id="aggregatecorefilter">
@ -117,11 +129,12 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
<title>AggregateInstanceExtraSpecsFilter</title>
<para>Matches properties defined in an instance type's
extra specs against admin-defined properties on a host
aggregate. Works with specifications that are unscoped,
or are scoped with <literal>aggregate_instance_extra_specs</literal>.
See the <link linkend="host-aggregates"
>host aggregates</link> section for documentation
on how to use this filter.</para>
aggregate. Works with specifications that are
unscoped, or are scoped with
<literal>aggregate_instance_extra_specs</literal>.
See the <link linkend="host-aggregates">host
aggregates</link> section for documentation on how
to use this filter.</para>
</section>
<section xml:id="aggregate-multi-tenancy-isolation">
<title>AggregateMultiTenancyIsolation</title>
@ -175,7 +188,7 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
<para>Passes all hosts that are operational and
enabled.</para>
<para>In general, this filter should always be enabled.
</para>
</para>
</section>
<section xml:id="corefilter">
<title>CoreFilter</title>
@ -214,18 +227,7 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
<para>With the API, use the
<literal>os:scheduler_hints</literal> key. For
example:</para>
<programlisting language="json">
{
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
'flavorRef': '1'
},
'os:scheduler_hints': {
'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
'8c19174f-4220-44f0-824a-cd1eeef10287'],
}
</programlisting>
<programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints.json" parse="text"/></programlisting>
</section>
<section xml:id="diskfilter">
<title>DiskFilter</title>
@ -238,8 +240,9 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
Configuration option in
<filename>nova.conf</filename>. The default setting
is:</para>
<programlisting language="ini">disk_allocation_ratio=1.0</programlisting>
<para>Adjusting this value to greater than 1.0 enables scheduling instances while over committing disk
<programlisting language="ini">disk_allocation_ratio=1.0</programlisting>
<para>Adjusting this value to greater than 1.0 enables
scheduling instances while over committing disk
resources on the node. This might be desirable if you
use an image format that is sparse or copy on write
such that each virtual instance does not require a 1:1
@ -248,11 +251,11 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
<section xml:id="groupaffinityfilter">
<title>GroupAffinityFilter</title>
<para>The GroupAffinityFilter ensures that an instance is
scheduled on to a host from a set of group hosts.
To take advantage of this filter, the requester must pass a
scheduler hint, using <literal>group</literal> as the
key and an arbitrary name as the value. Using
the <command>nova</command> command-line tool, use the
scheduled on to a host from a set of group hosts. To
take advantage of this filter, the requester must pass
a scheduler hint, using <literal>group</literal> as
the key and an arbitrary name as the value. Using the
<command>nova</command> command-line tool, use the
<literal>--hint</literal> flag. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
@ -264,8 +267,8 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
instance in a group is on a different host. To take
advantage of this filter, the requester must pass a
scheduler hint, using <literal>group</literal> as the
key and an arbitrary name as the value. Using
the <command>nova</command> command-line tool, use the
key and an arbitrary name as the value. Using the
<command>nova</command> command-line tool, use the
<literal>--hint</literal> flag. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
@ -314,8 +317,10 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
of images and a special (isolated) set of hosts, such
that the isolated images can only run on the isolated
hosts, and the isolated hosts can only run isolated
images. The flag <literal>restrict_isolated_hosts_to_isolated_images</literal>
can be used to force isolated hosts to only run isolated images.</para>
images. The flag
<literal>restrict_isolated_hosts_to_isolated_images</literal>
can be used to force isolated hosts to only run
isolated images.</para>
<para>The admin must specify the isolated set of images
and hosts in the <filename>nova.conf</filename> file
using the <literal>isolated_hosts</literal> and
@ -323,7 +328,7 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
options. For example:
<programlisting language="ini">isolated_hosts=server1,server2
isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09</programlisting>
</para>
</para>
</section>
<section xml:id="jsonfilter">
<title>JsonFilter</title>
@ -380,18 +385,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
--flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1</userinput></screen>
<para>With the API, use the
<literal>os:scheduler_hints</literal> key:</para>
<programlisting language="json">
{
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
'flavorRef': '1'
},
'os:scheduler_hints': {
'query': '[">=","$free_ram_mb",1024]',
}
}
</programlisting>
<programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints2.json" parse="text"/></programlisting>
</section>
<section xml:id="ramfilter">
<title>RamFilter</title>
@ -416,8 +410,9 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
<para>Filter out hosts that have already been attempted
for scheduling purposes. If the scheduler selects a
host to respond to a service request, and the host
fails to respond to the request, this filter prevents the scheduler from retrying that host for the
service request.</para>
fails to respond to the request, this filter prevents
the scheduler from retrying that host for the service
request.</para>
<para>This filter is only useful if the
<literal>scheduler_max_attempts</literal>
configuration option is set to a value greater than
@ -439,19 +434,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
--hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1</userinput></screen>
<para>With the API, use the
<literal>os:scheduler_hints</literal> key:</para>
<programlisting language="json">
{
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
'flavorRef': '1'
},
'os:scheduler_hints': {
'same_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
'8c19174f-4220-44f0-824a-cd1eeef10287'],
}
}
</programlisting>
<programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints3.json" parse="text"/></programlisting>
</section>
<section xml:id="simplecidraffinityfilter">
<title>SimpleCIDRAffinityFilter</title>
@ -485,18 +468,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
--hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1</userinput></screen>
<para>With the API, use the
<literal>os:scheduler_hints</literal> key:</para>
<programlisting language="json">{
{
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
'flavorRef': '1'
},
'os:scheduler_hints': {
'build_near_host_ip': '192.168.1.1',
'cidr': '24'
}
}</programlisting>
<programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints4.json" parse="text"/></programlisting>
</section>
</section>
<section xml:id="weights">
@ -519,13 +491,15 @@ ram_weight_multiplier=1.0</programlisting>
<section xml:id="chance-scheduler">
<title>Chance scheduler</title>
<?dbhtml stop-chunking?>
<para>As an administrator, you work with the
Filter Scheduler. However, the Compute service also uses
the Chance Scheduler,
<para>As an administrator, you work with the Filter Scheduler.
However, the Compute service also uses the Chance
Scheduler,
<literal>nova.scheduler.chance.ChanceScheduler</literal>,
which randomly selects from lists of filtered hosts.</para>
which randomly selects from lists of filtered
hosts.</para>
</section>
<xi:include href="../../common/section_cli_nova_host_aggregates.xml"/>
<xi:include
href="../../common/section_cli_nova_host_aggregates.xml"/>
<section xml:id="compute-scheduler-config-ref">
<title>Configuration reference</title>
<xi:include href="../../common/tables/nova-scheduling.xml"/>
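One detail worth noting from the JsonFilter hunks above: the query hint passed on the nova command line is itself a small JSON document (an operator followed by its operands) carried inside a string. A short, purely illustrative round trip with the value used in that example:

    import json

    # The query hint from the JsonFilter example: operator first, then operands.
    query = '[">=", "$free_ram_mb", 1024]'
    op, variable, threshold = json.loads(query)
    print("schedule only on hosts where %s %s %d" % (variable, op, threshold))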
@ -1,162 +1,306 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://docbook.org/ns/docbook" xmlns:db="http://docbook.org/ns/docbook" version="5.0" xml:id="ch024_authentication"><?dbhtml stop-chunking?>
|
||||
<title>Identity</title>
|
||||
<para>The OpenStack Identity Service (Keystone) supports multiple methods of authentication, including username & password, LDAP, and external authentication methods. Upon successful authentication, The Identity Service provides the user with an authorization token used for subsequent service requests.</para>
|
||||
<para>Transport Layer Security TLS/SSL provides authentication between services and persons using X.509 certificates. Although the default mode for SSL is server-side only authentication, certificates may also be used for client authentication.</para>
|
||||
<section xml:id="ch024_authentication-idp195568">
|
||||
<title>Authentication</title>
|
||||
<section xml:id="ch024_authentication-idp196256">
|
||||
<title>Invalid Login Attempts</title>
|
||||
<para>The Identity Service does not provide a method to limit access to accounts after repeated unsuccessful login attempts. Repeated failed login attempts are likely brute-force attacks (Refer figure Attack-types). This is a more significant issue in Public clouds.</para>
|
||||
<para>Prevention is possible by using an external authentication system that blocks out an account after some configured number of failed login attempts. The account then may only be unlocked with further side-channel intervention.</para>
|
||||
<para>If prevention is not an option, detection can be used to mitigate damage.Detection involves frequent review of access control logs to identify unauthorized attempts to access accounts. Possible remediation would include reviewing the strength of the user password, or blocking the network source of the attack via firewall rules. Firewall rules on the keystone server that restrict the number of connections could be used to reduce the attack effectiveness, and thus dissuade the attacker.</para>
|
||||
<para>In addition, it is useful to examine account activity for unusual login times and suspicious actions, with possibly disable the account. Often times this approach is taken by credit card providers for fraud detection and alert.</para>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp241008">
|
||||
<title>Multi-factor Authentication</title>
|
||||
<para>Employ multi-factor authentication for network access to privileged user accounts. The Identity Service supports external authentication services through the Apache web server that can provide this functionality. Servers may also enforce client-side authentication using certificates.</para>
|
||||
<para>This recommendation provides insulation from brute force, social engineering, and both spear and mass phishing attacks that may compromise administrator passwords.</para>
|
||||
</section>
|
||||
<chapter xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:db="http://docbook.org/ns/docbook" version="5.0"
|
||||
xml:id="ch024_authentication">
|
||||
<?dbhtml stop-chunking?>
|
||||
<title>Identity</title>
|
||||
<para>The OpenStack Identity Service (Keystone) supports multiple
|
||||
methods of authentication, including username & password,
|
||||
LDAP, and external authentication methods. Upon successful
|
||||
authentication, The Identity Service provides the user with an
|
||||
authorization token used for subsequent service requests.</para>
|
||||
<para>Transport Layer Security TLS/SSL provides authentication
|
||||
between services and persons using X.509 certificates. Although
|
||||
the default mode for SSL is server-side only authentication,
|
||||
certificates may also be used for client authentication.</para>
|
||||
<section xml:id="ch024_authentication-idp195568">
|
||||
<title>Authentication</title>
|
||||
<section xml:id="ch024_authentication-idp196256">
|
||||
<title>Invalid Login Attempts</title>
|
||||
<para>The Identity Service does not provide a method to limit
|
||||
access to accounts after repeated unsuccessful login attempts.
|
||||
Repeated failed login attempts are likely brute-force attacks
|
||||
(Refer figure Attack-types). This is a more significant issue
|
||||
in Public clouds.</para>
|
||||
<para>Prevention is possible by using an external authentication
|
||||
system that blocks out an account after some configured number
|
||||
of failed login attempts. The account then may only be
|
||||
unlocked with further side-channel intervention.</para>
|
||||
<para>If prevention is not an option, detection can be used to
|
||||
mitigate damage.Detection involves frequent review of access
|
||||
control logs to identify unauthorized attempts to access
|
||||
accounts. Possible remediation would include reviewing the
|
||||
strength of the user password, or blocking the network source
|
||||
of the attack via firewall rules. Firewall rules on the
|
||||
keystone server that restrict the number of connections could
|
||||
be used to reduce the attack effectiveness, and thus dissuade
|
||||
the attacker.</para>
|
||||
<para>In addition, it is useful to examine account activity for
|
||||
unusual login times and suspicious actions, with possibly
|
||||
disable the account. Often times this approach is taken by
|
||||
credit card providers for fraud detection and alert.</para>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp243184">
|
||||
<title>Authentication Methods</title>
|
||||
<section xml:id="ch024_authentication-idp243824">
|
||||
<title>Internally Implemented Authentication Methods</title>
|
||||
<para>The Identity Service can store user credentials in an SQL Database, or may use an LDAP-compliant directory server. The Identity database may be separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials.</para>
|
||||
<para>When authentication is provided via username and password, the Identity Service does not enforce policies on password strength, expiration, or failed authentication attempts as recommended by NIST Special Publication 800-118 (draft). Organizations that desire to enforce stronger password policies should consider using Keystone Identity Service Extensions or external authentication services.</para>
|
||||
<para>LDAP simplifies integration of Identity authentication into an organization's existing directory service and user account management processes.</para>
|
||||
<para>Authentication and authorization policy in OpenStack may be delegated to an external LDAP server. A typical use case is an organization that seeks to deploy a private cloud and already has a database of employees, the users. This may be in an LDAP system. Using LDAP as a source of authority authentication, requests to Identity Service are delegated to the LDAP service, which will authorize or deny requests based on locally set policies. A token is generated on successful authentication.</para>
|
||||
<para>Note that if the LDAP system has attributes defined for the user such as admin, finance, HR etc, these must be mapped into roles and groups within Identity for use by the various OpenStack services. The <emphasis>etc/keystone.conf</emphasis> file provides the mapping from the LDAP attributes to Identity attributes.</para>
|
||||
<para>The Identity Service <emphasis role="bold">MUST NOT</emphasis> be allowed to write to LDAP services used for authentication outside of the OpenStack deployment as this would allow a sufficiently privileged keystone user to make changes to the LDAP directory. This would allow privilege escalation within the wider organization or facilitate unauthorized access to other information and resources. In such a deployment, user provisioning would be out of the realm of the OpenStack deployment.</para>
|
||||
<note>
|
||||
<para>There is an <link xlink:href="https://bugs.launchpad.net/ossn/+bug/1168252">OpenStack Security Note (OSSN) regarding keystone.conf permissions</link>.</para>
|
||||
<para>There is an <link xlink:href="https://bugs.launchpad.net/ossn/+bug/1155566">OpenStack Security Note (OSSN) regarding potential DoS attacks</link>.</para>
|
||||
</note>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp251520">
|
||||
<title>External Authentication Methods</title>
|
||||
<para>Organizations may desire to implement external authentication for compatibility with existing authentication services or to enforce stronger authentication policy requirements. Although passwords are the most common form of authentication, they can be compromised through numerous methods, including keystroke logging and password compromise. External authentication services can provide alternative forms of authentication that minimize the risk from weak passwords.</para>
|
||||
<para>These include:</para>
|
||||
<itemizedlist><listitem>
|
||||
<para>Password Policy Enforcement: Requires user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Multi-factor authentication: The authentication
|
||||
<section xml:id="ch024_authentication-idp241008">
|
||||
<title>Multi-factor Authentication</title>
|
||||
<para>Employ multi-factor authentication for network access to
|
||||
privileged user accounts. The Identity Service supports
|
||||
external authentication services through the Apache web server
|
||||
that can provide this functionality. Servers may also enforce
|
||||
client-side authentication using certificates.</para>
|
||||
<para>This recommendation provides insulation from brute force,
|
||||
social engineering, and both spear and mass phishing attacks
|
||||
that may compromise administrator passwords.</para>
|
||||
</section>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp243184">
|
||||
<title>Authentication Methods</title>
|
||||
<section xml:id="ch024_authentication-idp243824">
|
||||
<title>Internally Implemented Authentication Methods</title>
|
||||
<para>The Identity Service can store user credentials in an SQL
|
||||
Database, or may use an LDAP-compliant directory server. The
|
||||
Identity database may be separate from databases used by other
|
||||
OpenStack services to reduce the risk of a compromise of the
|
||||
stored credentials.</para>
|
||||
<para>When authentication is provided via username and password,
|
||||
the Identity Service does not enforce policies on password
|
||||
strength, expiration, or failed authentication attempts as
|
||||
recommended by NIST Special Publication 800-118 (draft).
|
||||
Organizations that desire to enforce stronger password
|
||||
policies should consider using Keystone Identity Service
|
||||
Extensions or external authentication services.</para>
|
||||
<para>LDAP simplifies integration of Identity authentication
|
||||
into an organization's existing directory service and user
|
||||
account management processes.</para>
|
||||
<para>Authentication and authorization policy in OpenStack may
|
||||
be delegated to an external LDAP server. A typical use case is
|
||||
an organization that seeks to deploy a private cloud and
|
||||
already has a database of employees, the users. This may be in
|
||||
an LDAP system. Using LDAP as a source of authority
|
||||
authentication, requests to Identity Service are delegated to
|
||||
the LDAP service, which will authorize or deny requests based
|
||||
on locally set policies. A token is generated on successful
|
||||
authentication.</para>
|
||||
<para>Note that if the LDAP system has attributes defined for
|
||||
the user such as admin, finance, HR etc, these must be mapped
|
||||
into roles and groups within Identity for use by the various
|
||||
OpenStack services. The <emphasis>etc/keystone.conf</emphasis>
|
||||
file provides the mapping from the LDAP attributes to Identity
|
||||
attributes.</para>
|
||||
<para>The Identity Service <emphasis role="bold">MUST
|
||||
NOT</emphasis> be allowed to write to LDAP services used for
|
||||
authentication outside of the OpenStack deployment as this
|
||||
would allow a sufficiently privileged keystone user to make
|
||||
changes to the LDAP directory. This would allow privilege
|
||||
escalation within the wider organization or facilitate
|
||||
unauthorized access to other information and resources. In
|
||||
such a deployment, user provisioning would be out of the realm
|
||||
of the OpenStack deployment.</para>
|
||||
<note>
|
||||
<para>There is an <link
|
||||
xlink:href="https://bugs.launchpad.net/ossn/+bug/1168252"
|
||||
>OpenStack Security Note (OSSN) regarding keystone.conf
|
||||
permissions</link>.</para>
|
||||
<para>There is an <link
|
||||
xlink:href="https://bugs.launchpad.net/ossn/+bug/1155566"
|
||||
>OpenStack Security Note (OSSN) regarding potential DoS
|
||||
attacks</link>.</para>
|
||||
</note>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp251520">
|
||||
<title>External Authentication Methods</title>
|
||||
<para>Organizations may desire to implement external
|
||||
authentication for compatibility with existing authentication
|
||||
services or to enforce stronger authentication policy
|
||||
requirements. Although passwords are the most common form of
|
||||
authentication, they can be compromised through numerous
|
||||
methods, including keystroke logging and password compromise.
|
||||
External authentication services can provide alternative forms
|
||||
of authentication that minimize the risk from weak
|
||||
passwords.</para>
|
||||
<para>These include:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Password Policy Enforcement: Requires user passwords
|
||||
to conform to minimum standards for length, diversity of
|
||||
characters, expiration, or failed login attempts.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Multi-factor authentication: The authentication
|
||||
service requires the user to provide information based on
|
||||
something they have, such as a one-time password token or
|
||||
X.509 certificate, and something they know, such as a
|
||||
password.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Kerberos</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Kerberos</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp256832">
|
||||
<title>Authorization</title>
|
||||
<para>The Identity Service supports the notion of groups and roles. Users belong to groups. A group has a list of roles. OpenStack services reference the roles of the user attempting to access the service. The OpenStack policy enforcer middleware takes into consideration the policy rule associated with each resource and the user's group/roles and tenant association to determine if he/she has access to the requested resource.</para>
|
||||
<para>The Policy enforcement middleware enables fine-grained access control to OpenStack resources. Only admin users can provision new users and have access to various management functionality. The cloud tenant would be able to only spin up instances, attach volumes, etc.</para>
|
||||
<section xml:id="ch024_authentication-idp259168">
|
||||
<title>Establish Formal Access Control Policies</title>
|
||||
<para>Prior to configuring roles, groups, and users, document your required access control policies for the OpenStack installation. The policies should be consistent with any regulatory or legal requirements for the organization. Future modifications to access control configuration should be done consistently with the formal policies. The policies should include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. Periodically review the policies and ensure that configuration is in compliance with approved policies.</para>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp261600">
|
||||
<title>Service Authorization</title>
|
||||
<para>As described in the <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/index.html"><citetitle>OpenStack Cloud Administrator Guide</citetitle></link>, cloud administrators must define a user for each service, with a role of Admin. This service user account provides the service with the authorization to authenticate users.</para>
|
||||
<para>The Compute and Object Storage services can be configured to use either the "tempAuth" file or Identity Service to store authentication information. The "tempAuth" solution MUST NOT be deployed in a production environment since it stores passwords in plain text.</para>
|
||||
<para>The Identity Service supports client authentication for
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp256832">
|
||||
<title>Authorization</title>
|
||||
<para>The Identity Service supports the notion of groups and
|
||||
roles. Users belong to groups. A group has a list of roles.
|
||||
OpenStack services reference the roles of the user attempting to
|
||||
access the service. The OpenStack policy enforcer middleware
|
||||
takes into consideration the policy rule associated with each
|
||||
resource and the user's group/roles and tenant association to
|
||||
determine if he/she has access to the requested resource.</para>
|
||||
<para>The Policy enforcement middleware enables fine-grained
|
||||
access control to OpenStack resources. Only admin users can
|
||||
provision new users and have access to various management
|
||||
functionality. The cloud tenant would be able to only spin up
|
||||
instances, attach volumes, etc.</para>
|
||||
<section xml:id="ch024_authentication-idp259168">
|
||||
<title>Establish Formal Access Control Policies</title>
|
||||
<para>Prior to configuring roles, groups, and users, document
|
||||
your required access control policies for the OpenStack
|
||||
installation. The policies should be consistent with any
|
||||
regulatory or legal requirements for the organization. Future
|
||||
modifications to access control configuration should be done
|
||||
consistently with the formal policies. The policies should
|
||||
include the conditions and processes for creating, deleting,
|
||||
disabling, and enabling accounts, and for assigning privileges
|
||||
to the accounts. Periodically review the policies and ensure
|
||||
that configuration is in compliance with approved
|
||||
policies.</para>
|
||||
</section>
|
||||
<section xml:id="ch024_authentication-idp261600">
|
||||
<title>Service Authorization</title>
|
||||
<para>As described in the <link
|
||||
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/index.html"
|
||||
><citetitle>OpenStack Cloud Administrator
|
||||
Guide</citetitle></link>, cloud administrators must define
|
||||
a user for each service, with a role of Admin. This service
|
||||
user account provides the service with the authorization to
|
||||
authenticate users.</para>
|
||||
<para>The Compute and Object Storage services can be configured
|
||||
to use either the "tempAuth" file or Identity Service to store
|
||||
authentication information. The "tempAuth" solution MUST NOT
|
||||
be deployed in a production environment since it stores
|
||||
passwords in plain text.</para>
    <para>The Identity Service supports client authentication for
      SSL, which may be enabled. SSL client authentication
      provides an additional authentication factor, in addition
      to the user name and password, and gives greater assurance
      of the user's identity. It reduces the risk of unauthorized
      access when user names and passwords may be compromised.
      However, there is additional administrative overhead and
      cost to issue certificates to users, which may not be
      feasible in every deployment.</para>
    <note>
      <para>We recommend that you use client authentication with
        SSL for the authentication of services to the Identity
        Service.</para>
    </note>
    <para>The cloud administrator should protect sensitive
      configuration files, including
      <literal>/etc/keystone.conf</literal> and X.509
      certificates, from unauthorized modification. This can be
      achieved with mandatory access control frameworks such as
      SELinux.</para>
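    <para>One minimal way to restrict access, in addition to any
      mandatory access control policy, is with conventional file
      ownership and permissions. The path and service account
      shown here are assumptions that may differ on your
      distribution:</para>
    <screen><prompt>#</prompt> <userinput>chown keystone:keystone /etc/keystone.conf</userinput>
<prompt>#</prompt> <userinput>chmod 600 /etc/keystone.conf</userinput></screen>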
    <para>For client authentication with SSL, you need to issue
      certificates. These certificates can be signed by an
      external authority or by the cloud administrator. By
      default, OpenStack services check the signatures of
      certificates, and connections fail if the signature cannot
      be verified. If the administrator uses self-signed
      certificates, the check might need to be disabled. To
      disable the check, set <code>insecure=true</code> in the
      <code>[filter:authtoken]</code> section of the
      <filename>/etc/nova/api.paste.ini</filename> file. This
      setting also disables certificate verification for other
      components.</para>
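    <para>The following <literal>[filter:authtoken]</literal>
      fragment is a sketch of where this option lives; the host
      name and credentials are placeholders, and only the
      <code>insecure</code> line relates to certificate
      checking:</para>
    <programlisting language="ini">[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = identity.example.com
auth_protocol = https
admin_tenant_name = service
admin_user = nova
admin_password = SERVICE_PASSWORD
insecure = true</programlisting>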
</section>
<section xml:id="ch024_authentication-idp267040">
|
||||
<title>Administrative Users</title>
|
||||
<para>We recommend that admin users authenticate using Identity
|
||||
Service and an external authentication service that supports
|
||||
2-factor authentication, such as a certificate. This reduces
|
||||
the risk from passwords that may be compromised. This
|
||||
recommendation is in compliance with NIST 800-53 IA-2(1)
|
||||
guidance in the use of multi factor authentication for network
|
||||
access to privileged accounts.</para>
</section>
  <section xml:id="ch024_authentication-idp268960">
    <title>End Users</title>
    <para>The Identity Service can directly provide end-user
      authentication, or can be configured to use external
      authentication methods to conform to an organization's
      security policies and requirements.</para>
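    <para>As a sketch of one external method, the Identity
      Service can be pointed at an LDAP identity back end in
      <filename>keystone.conf</filename>. The connection values
      below are placeholders for your directory:</para>
    <programlisting language="ini">[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
user = cn=Manager,dc=example,dc=com
password = LDAP_PASSWORD
suffix = dc=example,dc=com</programlisting>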
</section>
</section>
<section xml:id="ch024_authentication-idp270544">
|
||||
<title>Policies</title>
|
||||
<para>Each OpenStack service has a policy file in json format,
|
||||
called <emphasis role="bold">policy.json</emphasis>. The policy
|
||||
file specifies rules, and the rule that governs each resource. A
|
||||
resource could be API access, the ability to attach to a volume,
|
||||
or to fire up instances.</para>
|
||||
<para>The policies can be updated by the cloud administrator to
|
||||
further control access to the various resources. The middleware
|
||||
could also be further customized. Note that your users must be
|
||||
assigned to groups/roles that you refer to in your
|
||||
policies.</para>
|
||||
<para>Below is a snippet of the Block Storage service policy.json
|
||||
file.</para>
|
||||
<programlisting language="json"><xi:include href="../common/samples/authentication.json" parse="text"/></programlisting>
|
||||
<para>Note the <emphasis role="bold">default</emphasis> rule
|
||||
specifies that the user must be either an admin or the owner of
|
||||
the volume. It essentially says only the owner of a volume or
|
||||
the admin may create/delete/update volumes. Certain other
|
||||
operations such as managing volume types are accessible only to
|
||||
admin users.</para>
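  <para>For example, an administrator who wants volume creation
    to be governed in the same way could change the empty rule
    for that action to reference an existing rule. This is a
    hypothetical edit, not a recommended default:</para>
  <programlisting language="json">"volume:create": [["rule:admin_or_owner"]],</programlisting>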
</section>
<section xml:id="ch024_authentication-idp276176">
|
||||
<title>Tokens</title>
|
||||
<para>Once a user is authenticated, a token is generated and used
|
||||
internally in OpenStack for authorization and access. The
|
||||
default token <emphasis role="bold">lifespan</emphasis>
|
||||
is<emphasis role="bold"> 24 hours</emphasis>. It is
|
||||
recommended that this value be set lower but caution needs to be
|
||||
taken as some internal services will need sufficient time to
|
||||
complete their work. The cloud may not provide services if
|
||||
tokens expire too early. An example of this would be the time
|
||||
needed by the Compute Service to transfer a disk image onto the
|
||||
hypervisor for local caching.</para>
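  <para>To lower the token lifespan, adjust the
    <code>expiration</code> option (in seconds) in the
    <literal>[token]</literal> section of
    <filename>keystone.conf</filename>. The value below is an
    example only; choose one that still allows long-running
    operations to complete:</para>
  <programlisting language="ini">[token]
expiration = 14400</programlisting>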
  <para>The following example shows a PKI token. Note that, in
    practice, the token id value is about 3500 bytes. We shorten
    it in this example.</para>
<programlisting language="json"><xi:include href="../common/samples/token.json" parse="text"/></programlisting>
  <para>Note that the token is often passed within the structure
    of a larger Identity Service response. These responses also
    provide a catalog of the various OpenStack services. Each
    service is listed with its name and its access endpoints for
    internal, admin, and public access.</para>
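  <para>A catalog entry in a version 2.0 response looks roughly
    like the following; the URLs and region are placeholder
    values:</para>
  <programlisting language="json">{
    "name": "keystone",
    "type": "identity",
    "endpoints": [
        {
            "publicURL": "https://identity.example.com:5000/v2.0",
            "internalURL": "https://identity.example.com:5000/v2.0",
            "adminURL": "https://identity.example.com:35357/v2.0",
            "region": "RegionOne"
        }
    ]
}</programlisting>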
  <para>The Identity Service supports token revocation. It
    provides an API to revoke a token and to list revoked tokens.
    Individual OpenStack services that cache tokens can query
    this API for revoked tokens, remove them from their cache,
    and add them to their own list of cached revoked
    tokens.</para>
</section>
<section xml:id="ch024_authentication-idp287584">
|
||||
<title>Future</title>
|
||||
<para>Domains are high-level containers for projects, users and
|
||||
groups. As such, they can be used to centrally manage all
|
||||
Keystone-based identity components. With the introduction of
|
||||
account Domains, server, storage and other resources can now be
|
||||
logically grouped into multiple Projects (previously called
|
||||
Tenants) which can themselves be grouped under a master
|
||||
account-like container. In addition, multiple users can be
|
||||
managed within an account Domain and assigned roles that vary
|
||||
for each Project.</para>
|
||||
<para>Keystone's V3 API supports multiple domains. Users of
|
||||
different domains may be represented in different authentication
|
||||
backends and even have different attributes that must be mapped
|
||||
to a single set of roles and privileges, that are used in the
|
||||
policy definitions to access the various service
|
||||
resources.</para>
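  <para>As a sketch of how a client might request a
    domain-scoped token from the V3 API, the request body looks
    roughly as follows; the user, domain, and password values are
    placeholders:</para>
  <programlisting language="json">{
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"name": "acme"},
                    "password": "PASSWORD"
                }
            }
        },
        "scope": {
            "domain": {"name": "acme"}
        }
    }
}</programlisting>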
  <para>Where a rule may specify access to only admin users and
    users belonging to the tenant, the mapping may be trivial. In
    other scenarios, the cloud administrator may need to approve
    the mapping routines per tenant.</para>
</section>
</chapter>