diff --git a/doc-test.conf b/doc-test.conf index ba72d7fff9..4af407edeb 100644 --- a/doc-test.conf +++ b/doc-test.conf @@ -1,8 +1,13 @@ [DEFAULT] repo_name = openstack-manuals +# Not in DocBook format +file_exception = emc-vmax.xml +file_exception = emc-vnx.xml + # Not whitelisted via bk-*.xml file_exception = st-training-guides.xml # Not in xml format file_exception = ha-guide-docinfo.xml + diff --git a/doc/admin-guide-cloud/ch_compute.xml b/doc/admin-guide-cloud/ch_compute.xml index 64af3415a0..95e51ffa62 100644 --- a/doc/admin-guide-cloud/ch_compute.xml +++ b/doc/admin-guide-cloud/ch_compute.xml @@ -167,8 +167,9 @@ Compute services manages instances. For more information about creating and troubleshooting images, see the OpenStack Virtual Machine Image Guide. + xlink:href="http://docs.openstack.org/image-guide/content/" + >OpenStack Virtual Machine Image + Guide. For more information about image configuration options, see the pip python package - installer: - sudo pip install python-novaclient - + installer: + $ sudo pip install python-novaclient For more information about python-novaclient and other available command-line tools, see the /etc/openstack-dashboard/local_settings.py and on openSUSE and SUSE Linux Enterprise Server: /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py) - OPENSTACK_HYPERVISOR_FEATURE = { + OPENSTACK_HYPERVISOR_FEATURE = { ... 'can_set_password': False, } @@ -688,7 +688,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT default. To enable it, set the following option in /etc/nova/nova.conf: - [libvirt] + [libvirt] inject_password=true When enabled, Compute will modify the password of @@ -874,7 +874,8 @@ inject_password=true IP addresses to VM instances from the specified subnet in addition to manually configuring the networking bridge. IP addresses for VM instances are grabbed from - a subnet specified by the network administrator. + a subnet specified by the network + administrator. Like Flat Mode, all instances are attached to a single bridge on the compute node. In addition a DHCP server is running to configure instances (depending on @@ -885,27 +886,28 @@ inject_password=true (flat_interface, eth0 by default). For every instance, nova allocates a fixed IP address and configure dnsmasq with the MAC/IP pair - for the VM. Dnsmasq doesn't take part in - the IP address allocation process, it only hands out - IPs according to the mapping done by nova. Instances - receive their fixed IPs by doing a dhcpdiscover. - These IPs are not - assigned to any of the host's network interfaces, - only to the VM's guest-side interface. - In any setup with flat networking, the hosts providing - the nova-network - service are responsible for forwarding - traffic from the private network. They also run and - configure dnsmasq as a DHCP server listening on - this bridge, usually on IP address 10.0.0.1 (see - DHCP server: dnsmasq - ). Compute can determine the NAT entries for - each network, though sometimes NAT is not used, such - as when configured with all public IPs or a hardware - router is used (one of the HA options). Such hosts - need to have br100 configured and - physically connected to any other nodes that are hosting - VMs. You must set the flat_network_bridge + for the VM. Dnsmasq doesn't take part in the IP + address allocation process, it only hands out IPs + according to the mapping done by nova. Instances + receive their fixed IPs by doing a dhcpdiscover. 
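In this mode, dnsmasq acts purely as a delivery mechanism for allocations nova has already made. A minimal Python sketch of the kind of MAC-to-IP mapping handed to dnsmasq follows; the file path and entry layout are assumptions for illustration only, not the actual nova-network implementation.

# Sketch of the MAC/IP mapping nova hands to dnsmasq via a
# --dhcp-hostsfile style file. Path and layout are assumptions,
# not the real nova-network code.
def dhcp_hosts_entry(mac, hostname, ip):
    # dnsmasq maps a known MAC to its fixed IP, so it only hands
    # out addresses that nova has already allocated.
    return "%s,%s,%s" % (mac, hostname, ip)

with open("/tmp/nova-br100.conf", "w") as hosts_file:  # hypothetical path
    hosts_file.write(dhcp_hosts_entry("02:16:3e:33:44:55",
                                      "instance-1", "10.0.0.2") + "\n")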
These + IPs are not + assigned to any of the host's network interfaces, only + to the VM's guest-side interface. + In any setup with flat networking, the hosts + providing the nova-network service are responsible + for forwarding traffic from the private network. They + also run and configure dnsmasq as a DHCP server + listening on this bridge, usually on IP address + 10.0.0.1 (see DHCP + server: dnsmasq ). Compute can determine + the NAT entries for each network, though sometimes NAT + is not used, such as when configured with all public + IPs or a hardware router is used (one of the HA + options). Such hosts need to have + br100 configured and physically + connected to any other nodes that are hosting VMs. You + must set the flat_network_bridge option or create networks with the bridge parameter in order to avoid raising an error. Compute nodes have iptables/ebtables entries created for each project and @@ -959,11 +961,11 @@ inject_password=true creating a dnsmasq configuration file. Specify the config file using the dnsmasq_config_file - configuration option. For example: - dnsmasq_config_file=/etc/dnsmasq-nova.conf - See the + dnsmasq_config_file=/etc/dnsmasq-nova.conf + See the OpenStack Configuration + >OpenStack Configuration Reference for an example of how to change the behavior of dnsmasq using a dnsmasq configuration file. The dnsmasq documentation has a @@ -976,8 +978,8 @@ inject_password=true dns_server configuration option in /etc/nova/nova.conf. The following example would configure dnsmasq to use - Google's public DNS server: - dns_server=8.8.8.8 + Google's public DNS server: + dns_server=8.8.8.8 Dnsmasq logging output goes to the syslog (typically /var/log/syslog or /var/log/messages, depending @@ -1009,14 +1011,14 @@ inject_password=true Each of the APIs is versioned by date. To retrieve a list of supported versions for the OpenStack metadata API, make a GET request to - http://169.254.169.254/openstack + http://169.254.169.254/openstack For example: - $ curl http://169.254.169.254/openstack + $ curl http://169.254.169.254/openstack 2012-08-10 latest - To retrieve a list of supported versions for the + To list supported versions for the EC2-compatible metadata API, make a GET request to - http://169.254.169.254 + http://169.254.169.254. For example: $ curl http://169.254.169.254 1.0 @@ -1039,39 +1041,22 @@ latest OpenStack metadata API Metadata from the OpenStack API is distributed in JSON format. To retrieve the metadata, make a - GET request to: - http://169.254.169.254/openstack/2012-08-10/meta_data.json + GET request to + http://169.254.169.254/openstack/2012-08-10/meta_data.json. 
For example: - $ curl http://169.254.169.254/openstack/2012-08-10/meta_data.json - {"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", "availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "meta": {"priority": "low", "role": "webserver"}, "public_keys": {"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"}, "name": "test"} - Here is the same content after having run - through a JSON pretty-printer: - { - "availability_zone": "nova", - "hostname": "test.novalocal", - "launch_index": 0, - "meta": { - "priority": "low", - "role": "webserver" - }, - "name": "test", - "public_keys": { - "mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n" - }, - "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38" -} + $ curl http://169.254.169.254/openstack/2012-08-10/meta_data.json + Instances also retrieve user data (passed as the user_data parameter in the API call or by the --user_data flag in the nova boot command) through the metadata service, by making a GET - request to: - http://169.254.169.254/openstack/2012-08-10/user_data - For example: - - $ curl http://169.254.169.254/openstack/2012-08-10/user_data#!/bin/bash + request to + http://169.254.169.254/openstack/2012-08-10/user_data. + For example: + $ curl http://169.254.169.254/openstack/2012-08-10/user_data + #!/bin/bash echo 'Extra user data here' - EC2 metadata API @@ -1083,8 +1068,8 @@ echo 'Extra user data here' properly with OpenStack. The EC2 API exposes a separate URL for each metadata. You can retrieve a listing of these - elements by making a GET query to: - http://169.254.169.254/2009-04-04/meta-data/ + elements by making a GET query to + http://169.254.169.254/2009-04-04/meta-data/ For example: $ curl http://169.254.169.254/2009-04-04/meta-data/ami-id ami-launch-index @@ -1111,14 +1096,14 @@ security-groups 0=mykey Instances can retrieve the public SSH key (identified by keypair name when a user requests a - new instance) by making a GET request to: - http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key + new instance) by making a GET request to + http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key. For example: $ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova Instances can retrieve user data by making a GET - request to: - http://169.254.169.254/2009-04-04/user-data + request to + http://169.254.169.254/2009-04-04/user-data. For example: $ curl http://169.254.169.254/2009-04-04/user-data #!/bin/bash @@ -1239,9 +1224,9 @@ echo 'Extra user data here' Every virtual instance is automatically assigned a private IP address. You can optionally assign public IP addresses to instances. The term - floating - IP refers to - an IP address, typically public, that you can + floating IP refers to an IP + address, typically public, that you can dynamically add to a running virtual instance. 
OpenStack Compute uses Network Address Translation (NAT) to assign floating IPs to virtual @@ -1252,7 +1237,7 @@ echo 'Extra user data here' class="service">nova-network service binds public IP addresses, as follows: - public_interface=vlan100 + public_interface=vlan100 If you make changes to the /etc/nova/nova.conf file while the and so this is the recommended path. To ensure that traffic does not get SNATed to the floating range, explicitly set - dmz_cidr=x.x.x.x/y. + dmz_cidr=x.x.x.x/y. The x.x.x.x/y value specifies the range of floating IPs for each pool of floating IPs that you define. If the @@ -1310,7 +1295,7 @@ echo 'Extra user data here' To make the changes permanent, edit the /etc/sysctl.conf file and update the IP forwarding setting: - net.ipv4.ip_forward = 1 + net.ipv4.ip_forward = 1 Save the file and run this command to apply the changes: $ sysctl -p @@ -1373,7 +1358,7 @@ echo 'Extra user data here' /etc/nova/nova.conf file and restart the nova-network service: - auto_assign_floating_ip=True + auto_assign_floating_ip=True If you enable this option and all floating IP addresses have already been allocated, the @@ -1470,7 +1455,7 @@ echo 'Extra user data here' the instance (this is the configuration that needs to be applied inside the image): /etc/network/interfaces - # The loopback network interface + # The loopback network interface auto lo iface lo inet loopback @@ -2062,22 +2047,22 @@ syslog_log_facility = LOG_LOCAL0 /etc/rsyslog.conf on the log server host, which receives the log files: - # provides TCP syslog reception + # provides TCP syslog reception $ModLoad imtcp $InputTCPServerRun 1024 Add to /etc/rsyslog.conf a filter rule on which looks for a host name. The example below use compute-01 as an - example of a compute host - name::hostname, isequal, "compute-01" /mnt/rsyslog/logs/compute-01.log + example of a compute host name: + :hostname, isequal, "compute-01" /mnt/rsyslog/logs/compute-01.log On the compute hosts, create a file named /etc/rsyslog.d/60-nova.conf, - with this - content.# prevent debug from dnsmasq with the daemon.none parameter + with this content: + # prevent debug from dnsmasq with the daemon.none parameter *.*;auth,authpriv.none,daemon.none,local0.none -/var/log/syslog # Specify a log level of ERROR -local0.error @@172.20.1.43:1024 +local0.error @@172.20.1.43:1024 Once you have created this file, restart your rsyslog daemon. Error-level log messages on the compute hosts should now be sent to your log @@ -2248,7 +2233,7 @@ HostC p2 5 10240 150 Here's an example using the EC2 API - instance i-000015b9 that is running on node np-rcc54: - i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60 + i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60 You can review the status of the host by @@ -2261,7 +2246,7 @@ HostC p2 5 10240 150 can find the credentials for your database in /etc/nova.conf. - SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G; + SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G; *************************** 1. row *************************** created_at: 2012-06-19 00:48:11 updated_at: 2012-07-03 00:35:11 @@ -2289,7 +2274,7 @@ HostC p2 5 10240 150 host the affected VMs should move. 
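Before moving anything, note what the query above is doing: CONV('15b9', 16, 10) converts the hexadecimal suffix of the EC2-style ID i-000015b9 into the decimal primary key used by the instances table. The equivalent conversion in Python:

# Convert the hex suffix of an EC2-style instance ID into the
# decimal primary key stored in the nova database.
ec2_id = "i-000015b9"
db_id = int(ec2_id.split("-", 1)[1], 16)
print(db_id)  # 5561, the same value as CONV('15b9', 16, 10)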
Run the following database command to move the VM to np-rcc46: - UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; + UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; Next, if using a hypervisor that relies diff --git a/doc/admin-guide-cloud/section_networking_auth.xml b/doc/admin-guide-cloud/section_networking_auth.xml index 890f33ffb4..7d23ac4a6c 100644 --- a/doc/admin-guide-cloud/section_networking_auth.xml +++ b/doc/admin-guide-cloud/section_networking_auth.xml @@ -3,187 +3,189 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="section_networking_auth"> - Authentication and authorization - Networking uses the Identity Service as the default - authentication service. When the Identity Service is - enabled, users who submit requests to the Networking - service must provide an authentication token in - X-Auth-Token request header. Users - obtain this token by authenticating with the Identity - Service endpoint. For more information about - authentication with the Identity Service, see OpenStack Identity Service API v2.0 - Reference. When the Identity - Service is enabled, it is not mandatory to specify the - tenant ID for resources in create requests because the - tenant ID is derived from the authentication token. - - The default authorization settings only allow - administrative users to create resources on behalf of - a different tenant. Networking uses information - received from Identity to authorize user requests. - Networking handles two kind of authorization - policies: - - - - Operation-based - policies specify access criteria for specific - operations, possibly with fine-grained control - over specific attributes; - - - Resource-based - policies specify whether access to specific - resource is granted or not according to the - permissions configured for the resource (currently - available only for the network resource). The - actual authorization policies enforced in - Networking might vary from deployment to - deployment. - - - The policy engine reads entries from the - policy.json file. The actual - location of this file might vary from distribution to - distribution. Entries can be updated while the system is - running, and no service restart is required. Every time - the policy file is updated, the policies are automatically - reloaded. Currently the only way of updating such policies - is to edit the policy file. In this section, the terms - policy and - rule refer to - objects that are specified in the same way in the policy - file. There are no syntax differences between a rule and a - policy. A policy is something that is matched directly - from the Networking policy engine. A rule is an element in - a policy, which is evaluated. For instance in - create_subnet: - [["admin_or_network_owner"]], create_subnet is a policy, - and admin_or_network_owner is a rule. - Policies are triggered by the Networking policy engine - whenever one of them matches an Networking API operation - or a specific attribute being used in a given operation. - For instance the create_subnet policy is - triggered every time a POST /v2.0/subnets - request is sent to the Networking server; on the other - hand create_network:shared is triggered every - time the shared - attribute is explicitly specified (and set to a value - different from its default) in a POST - /v2.0/networks request. 
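As a concrete illustration, a request such as the following sketch would trigger both create_network and, because shared is set explicitly, create_network:shared; it also carries the X-Auth-Token header described earlier. The endpoint and token are placeholders, not real values:

import requests

# Placeholder endpoint and token -- assumptions for illustration only.
url = "http://neutron.example.com:9696/v2.0/networks"
headers = {"X-Auth-Token": "ADMIN_TOKEN",
           "Content-Type": "application/json"}
# Setting "shared" explicitly triggers create_network:shared in
# addition to the base create_network policy.
body = {"network": {"name": "net1", "shared": True}}
response = requests.post(url, headers=headers, json=body)
print(response.status_code)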
It is also worth - mentioning that policies can be also related to specific - API extensions; for instance - extension:provider_network:set is be - triggered if the attributes defined by the Provider - Network extensions are specified in an API request. - An authorization policy can be composed by one or more - rules. If more rules are specified, evaluation policy - succeeds if any of the rules evaluates successfully; if an - API operation matches multiple policies, then all the - policies must evaluate successfully. Also, authorization - rules are recursive. Once a rule is matched, the rule(s) - can be resolved to another rule, until a terminal rule is - reached. - The Networking policy engine currently defines the - following kinds of terminal rules: - - - Role-based - rules evaluate successfully if the - user who submits the request has the specified - role. For instance "role:admin" is - successful if the user who submits the request is - an administrator. - - - Field-based rules - evaluate successfully if a field of the - resource specified in the current request matches - a specific value. For instance - "field:networks:shared=True" is - successful if the shared - attribute of the network - resource is set to true. - - - Generic rules - compare an attribute in the resource with an - attribute extracted from the user's security - credentials and evaluates successfully if the - comparison is successful. For instance - "tenant_id:%(tenant_id)s" is - successful if the tenant identifier in the - resource is equal to the tenant identifier of the - user submitting the request. - - - This extract is from the default - policy.json file: - { -[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], - "admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]], - "admin_only": [["role:admin"]], "regular_user": [], - "shared": [["field:networks:shared=True"]], -[2] "default": [["rule:admin_or_owner"]], - "create_subnet": [["rule:admin_or_network_owner"]], - "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]], - "update_subnet": [["rule:admin_or_network_owner"]], - "delete_subnet": [["rule:admin_or_network_owner"]], - "create_network": [], -[3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]], -[4] "create_network:shared": [["rule:admin_only"]], - "update_network": [["rule:admin_or_owner"]], - "delete_network": [["rule:admin_or_owner"]], - "create_port": [], -[5] "create_port:mac_address": [["rule:admin_or_network_owner"]], - "create_port:fixed_ips": [["rule:admin_or_network_owner"]], - "get_port": [["rule:admin_or_owner"]], - "update_port": [["rule:admin_or_owner"]], - "delete_port": [["rule:admin_or_owner"]] -} - [1] is a rule which evaluates successfully if the - current user is an administrator or the owner of the - resource specified in the request (tenant identifier is - equal). - [2] is the default policy which is always evaluated if - an API operation does not match any of the policies in - policy.json. - [3] This policy evaluates successfully if either - admin_or_owner, or - shared evaluates - successfully. - [4] This policy restricts the ability to manipulate the - shared attribute - for a network to administrators only. - [5] This policy restricts the ability to manipulate the - mac_address - attribute for a port only to administrators and the owner - of the network where the port is attached. - In some cases, some operations are restricted to - administrators only. 
This example shows you how to modify - a policy file to permit tenants to define networks and see - their resources and permit administrative users to perform - all other operations: - { - "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], - "admin_only": [["role:admin"]], "regular_user": [], - "default": [["rule:admin_only"]], - "create_subnet": [["rule:admin_only"]], - "get_subnet": [["rule:admin_or_owner"]], - "update_subnet": [["rule:admin_only"]], - "delete_subnet": [["rule:admin_only"]], - "create_network": [], - "get_network": [["rule:admin_or_owner"]], - "create_network:shared": [["rule:admin_only"]], - "update_network": [["rule:admin_or_owner"]], - "delete_network": [["rule:admin_or_owner"]], - "create_port": [["rule:admin_only"]], - "get_port": [["rule:admin_or_owner"]], - "update_port": [["rule:admin_only"]], - "delete_port": [["rule:admin_only"]] -} - + Authentication and authorization + Networking uses the Identity Service as the default + authentication service. When the Identity Service is enabled, + users who submit requests to the Networking service must + provide an authentication token in + X-Auth-Token request header. Users + obtain this token by authenticating with the Identity Service + endpoint. For more information about authentication with the + Identity Service, see OpenStack Identity Service API v2.0 + Reference. When the Identity + Service is enabled, it is not mandatory to specify the tenant + ID for resources in create requests because the tenant ID is + derived from the authentication token. + + The default authorization settings only allow + administrative users to create resources on behalf of a + different tenant. Networking uses information received + from Identity to authorize user requests. Networking + handles two kind of authorization policies: + + + + Operation-based + policies specify access criteria for specific + operations, possibly with fine-grained control over + specific attributes; + + + Resource-based + policies specify whether access to specific resource + is granted or not according to the permissions + configured for the resource (currently available only + for the network resource). The actual authorization + policies enforced in Networking might vary from + deployment to deployment. + + + The policy engine reads entries from the + policy.json file. The actual location + of this file might vary from distribution to distribution. + Entries can be updated while the system is running, and no + service restart is required. Every time the policy file is + updated, the policies are automatically reloaded. Currently + the only way of updating such policies is to edit the policy + file. In this section, the terms policy and rule refer to objects that are specified in + the same way in the policy file. There are no syntax + differences between a rule and a policy. A policy is something + that is matched directly from the Networking policy engine. A + rule is an element in a policy, which is evaluated. For + instance in create_subnet: + [["admin_or_network_owner"]], create_subnet is a policy, and + admin_or_network_owner + is a rule. + Policies are triggered by the Networking policy engine + whenever one of them matches an Networking API operation or a + specific attribute being used in a given operation. 
For + instance the create_subnet policy is triggered + every time a POST /v2.0/subnets request is sent + to the Networking server; on the other hand + create_network:shared is triggered every time + the shared attribute is + explicitly specified (and set to a value different from its + default) in a POST /v2.0/networks request. It is + also worth mentioning that policies can be also related to + specific API extensions; for instance + extension:provider_network:set is be + triggered if the attributes defined by the Provider Network + extensions are specified in an API request. + An authorization policy can be composed by one or more + rules. If more rules are specified, evaluation policy succeeds + if any of the rules evaluates successfully; if an API + operation matches multiple policies, then all the policies + must evaluate successfully. Also, authorization rules are + recursive. Once a rule is matched, the rule(s) can be resolved + to another rule, until a terminal rule is reached. + The Networking policy engine currently defines the following + kinds of terminal rules: + + + Role-based rules + evaluate successfully if the user who submits the + request has the specified role. For instance + "role:admin" is successful if the + user who submits the request is an + administrator. + + + Field-based rules + evaluate successfully if a field of the + resource specified in the current request matches a + specific value. For instance + "field:networks:shared=True" is + successful if the shared attribute + of the network resource is set to + true. + + + Generic rules + compare an attribute in the resource with an attribute + extracted from the user's security credentials and + evaluates successfully if the comparison is + successful. For instance + "tenant_id:%(tenant_id)s" is + successful if the tenant identifier in the resource is + equal to the tenant identifier of the user submitting + the request. + + + This extract is from the default + policy.json file: + + + + + + + + + + + + + + A rule that evaluates successfully if the current + user is an administrator or the owner of the resource + specified in the request (tenant identifier is + equal). + + + The default policy that is always evaluated if an + API operation does not match any of the policies in + policy.json. + + + This policy evaluates successfully if either + admin_or_owner, + or shared evaluates + successfully. + + + This policy restricts the ability to manipulate the + shared + attribute for a network to administrators only. + + + This policy restricts the ability to manipulate the + mac_address + attribute for a port only to administrators and the + owner of the network where the port is + attached. + + + In some cases, some operations are restricted to + administrators only. 
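To make the three kinds of terminal rules concrete, the following is a simplified, self-contained model of how they evaluate. It is illustrative only, not the actual Networking policy engine:

# Toy evaluator for the three terminal rule kinds described above
# (role-based, field-based, generic). Not the real policy engine.
def check(rule, target, creds):
    kind, _, value = rule.partition(":")
    if kind == "role":                        # role-based rule
        return value in creds.get("roles", [])
    if kind == "field":                       # field-based rule
        _, attr_eq = value.split(":", 1)      # e.g. "networks:shared=True"
        attr, expected = attr_eq.split("=", 1)
        return str(target.get(attr)) == expected
    # Generic rule, e.g. "tenant_id:%(tenant_id)s": compare a
    # credential attribute with one taken from the resource.
    return creds.get(kind) == value % target

creds = {"roles": ["admin"], "tenant_id": "t1"}
print(check("role:admin", {}, creds))                                # True
print(check("field:networks:shared=True", {"shared": True}, creds))  # True
print(check("tenant_id:%(tenant_id)s", {"tenant_id": "t1"}, creds))  # True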
This example shows you how to modify a + policy file to permit tenants to define networks and see their + resources and permit administrative users to perform all other + operations: + { + "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], + "admin_only": [["role:admin"]], "regular_user": [], + "default": [["rule:admin_only"]], + "create_subnet": [["rule:admin_only"]], + "get_subnet": [["rule:admin_or_owner"]], + "update_subnet": [["rule:admin_only"]], + "delete_subnet": [["rule:admin_only"]], + "create_network": [], + "get_network": [["rule:admin_or_owner"]], + "create_network:shared": [["rule:admin_only"]], + "update_network": [["rule:admin_or_owner"]], + "delete_network": [["rule:admin_or_owner"]], + "create_port": [["rule:admin_only"]], + "get_port": [["rule:admin_or_owner"]], + "update_port": [["rule:admin_only"]], + "delete_port": [["rule:admin_only"]] + } + diff --git a/doc/common/samples/authentication.json b/doc/common/samples/authentication.json new file mode 100644 index 0000000000..ca5befb4d9 --- /dev/null +++ b/doc/common/samples/authentication.json @@ -0,0 +1,55 @@ +{ + "context_is_admin":[ + [ + "role:admin" + ] + ], + "admin_or_owner":[ + [ + "is_admin:True" + ], + [ + "project_id:%(project_id)s" + ] + ], + "default":[ + [ + "rule:admin_or_owner" + ] + ], + "admin_api":[ + [ + "is_admin:True" + ] + ], + "volume:create":[ + + ], + "volume:get_all":[ + + ], + "volume:get_volume_metadata":[ + + ], + "volume:get_snapshot":[ + + ], + "volume:get_all_snapshots":[ + + ], + "volume_extension:types_manage":[ + [ + "rule:admin_api" + ] + ], + "volume_extension:types_extra_specs":[ + [ + "rule:admin_api" + ] + ], + "...":[ + [ + "...:..." + ] + ] +} \ No newline at end of file diff --git a/doc/common/samples/dashboard-nova_policy.json b/doc/common/samples/dashboard-nova_policy.json index 9fa96104d2..f191484d71 100644 --- a/doc/common/samples/dashboard-nova_policy.json +++ b/doc/common/samples/dashboard-nova_policy.json @@ -245,5 +245,4 @@ "network:create_private_dns_domain":"", "network:create_public_dns_domain":"", "network:delete_dns_domain":"" -} - +} \ No newline at end of file diff --git a/doc/common/samples/list_metadata.json b/doc/common/samples/list_metadata.json new file mode 100644 index 0000000000..447402bcdd --- /dev/null +++ b/doc/common/samples/list_metadata.json @@ -0,0 +1,14 @@ +{ + "uuid":"d8e02d56-2648-49a3-bf97-6be8f1204f38", + "availability_zone":"nova", + "hostname":"test.novalocal", + "launch_index":0, + "meta":{ + "priority":"low", + "role":"webserver" + }, + "public_keys":{ + "mykey":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n" + }, + "name":"test" +} \ No newline at end of file diff --git a/doc/common/samples/networking_auth.json b/doc/common/samples/networking_auth.json new file mode 100644 index 0000000000..fcab0f534b --- /dev/null +++ b/doc/common/samples/networking_auth.json @@ -0,0 +1,113 @@ +{ + "admin_or_owner":[ + [ + "role:admin" + ], + [ + "tenant_id:%(tenant_id)s" + ] + ], + "admin_or_network_owner":[ + [ + "role:admin" + ], + [ + "tenant_id:%(network_tenant_id)s" + ] + ], + "admin_only":[ + [ + "role:admin" + ] + ], + "regular_user":[ + + ], + "shared":[ + [ + "field:networks:shared=True" + ] + ], + "default":[ + [ + "rule:admin_or_owner" + ] + ], + "create_subnet":[ + [ + "rule:admin_or_network_owner" + ] + ], + "get_subnet":[ + [ + 
"rule:admin_or_owner" + ], + [ + "rule:shared" + ] + ], + "update_subnet":[ + [ + "rule:admin_or_network_owner" + ] + ], + "delete_subnet":[ + [ + "rule:admin_or_network_owner" + ] + ], + "create_network":[ + + ], + "get_network":[ + [ + "rule:admin_or_owner" + ], + [ + "rule:shared" + ] + ], + "create_network:shared":[ + [ + "rule:admin_only" + ] + ], + "update_network":[ + [ + "rule:admin_or_owner" + ] + ], + "delete_network":[ + [ + "rule:admin_or_owner" + ] + ], + "create_port":[ + + ], + "create_port:mac_address":[ + [ + "rule:admin_or_network_owner" + ] + ], + "create_port:fixed_ips":[ + [ + "rule:admin_or_network_owner" + ] + ], + "get_port":[ + [ + "rule:admin_or_owner" + ] + ], + "update_port":[ + [ + "rule:admin_or_owner" + ] + ], + "delete_port":[ + [ + "rule:admin_or_owner" + ] + ] +} \ No newline at end of file diff --git a/doc/common/samples/restrict_roles.json b/doc/common/samples/restrict_roles.json new file mode 100644 index 0000000000..e6fcb36c42 --- /dev/null +++ b/doc/common/samples/restrict_roles.json @@ -0,0 +1,346 @@ +{ + "admin_or_owner":[ + [ + "role:admin" + ], + [ + "project_id:%(project_id)s" + ] + ], + "default":[ + [ + "rule:admin_or_owner" + ] + ], + "compute:create":[ + "role:compute-user" + ], + "compute:create:attach_network":[ + "role:compute-user" + ], + "compute:create:attach_volume":[ + "role:compute-user" + ], + "compute:get_all":[ + "role:compute-user" + ], + "compute:unlock_override":[ + "rule:admin_api" + ], + "admin_api":[ + [ + "role:admin" + ] + ], + "compute_extension:accounts":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:pause":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:unpause":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:suspend":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:resume":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:lock":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:unlock":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:resetNetwork":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:injectNetworkInfo":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:createBackup":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:migrateLive":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:migrate":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:aggregates":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:certificates":[ + "role:compute-user" + ], + "compute_extension:cloudpipe":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:console_output":[ + "role:compute-user" + ], + "compute_extension:consoles":[ + "role:compute-user" + ], + "compute_extension:createserverext":[ + "role:compute-user" + ], + "compute_extension:deferred_delete":[ + "role:compute-user" + ], + "compute_extension:disk_config":[ + "role:compute-user" + ], + "compute_extension:evacuate":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:extended_server_attributes":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:extended_status":[ + "role:compute-user" + ], + "compute_extension:flavorextradata":[ + "role:compute-user" + ], + "compute_extension:flavorextraspecs":[ + "role:compute-user" + ], + "compute_extension:flavormanage":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:floating_ip_dns":[ + 
"role:compute-user" + ], + "compute_extension:floating_ip_pools":[ + "role:compute-user" + ], + "compute_extension:floating_ips":[ + "role:compute-user" + ], + "compute_extension:hosts":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:keypairs":[ + "role:compute-user" + ], + "compute_extension:multinic":[ + "role:compute-user" + ], + "compute_extension:networks":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:quotas":[ + "role:compute-user" + ], + "compute_extension:rescue":[ + "role:compute-user" + ], + "compute_extension:security_groups":[ + "role:compute-user" + ], + "compute_extension:server_action_list":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:server_diagnostics":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:simple_tenant_usage:show":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:simple_tenant_usage:list":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:users":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:virtual_interfaces":[ + "role:compute-user" + ], + "compute_extension:virtual_storage_arrays":[ + "role:compute-user" + ], + "compute_extension:volumes":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:index":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:show":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:create":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:delete":[ + "role:compute-user" + ], + "compute_extension:volumetypes":[ + "role:compute-user" + ], + "volume:create":[ + "role:compute-user" + ], + "volume:get_all":[ + "role:compute-user" + ], + "volume:get_volume_metadata":[ + "role:compute-user" + ], + "volume:get_snapshot":[ + "role:compute-user" + ], + "volume:get_all_snapshots":[ + "role:compute-user" + ], + "network:get_all_networks":[ + "role:compute-user" + ], + "network:get_network":[ + "role:compute-user" + ], + "network:delete_network":[ + "role:compute-user" + ], + "network:disassociate_network":[ + "role:compute-user" + ], + "network:get_vifs_by_instance":[ + "role:compute-user" + ], + "network:allocate_for_instance":[ + "role:compute-user" + ], + "network:deallocate_for_instance":[ + "role:compute-user" + ], + "network:validate_networks":[ + "role:compute-user" + ], + "network:get_instance_uuids_by_ip_filter":[ + "role:compute-user" + ], + "network:get_floating_ip":[ + "role:compute-user" + ], + "network:get_floating_ip_pools":[ + "role:compute-user" + ], + "network:get_floating_ip_by_address":[ + "role:compute-user" + ], + "network:get_floating_ips_by_project":[ + "role:compute-user" + ], + "network:get_floating_ips_by_fixed_address":[ + "role:compute-user" + ], + "network:allocate_floating_ip":[ + "role:compute-user" + ], + "network:deallocate_floating_ip":[ + "role:compute-user" + ], + "network:associate_floating_ip":[ + "role:compute-user" + ], + "network:disassociate_floating_ip":[ + "role:compute-user" + ], + "network:get_fixed_ip":[ + "role:compute-user" + ], + "network:add_fixed_ip_to_instance":[ + "role:compute-user" + ], + "network:remove_fixed_ip_from_instance":[ + "role:compute-user" + ], + "network:add_network_to_project":[ + "role:compute-user" + ], + "network:get_instance_nw_info":[ + "role:compute-user" + ], + "network:get_dns_domains":[ + "role:compute-user" + ], + "network:add_dns_entry":[ + "role:compute-user" + ], + "network:modify_dns_entry":[ + "role:compute-user" + ], + "network:delete_dns_entry":[ + "role:compute-user" + ], + "network:get_dns_entries_by_address":[ + "role:compute-user" + ], 
+ "network:get_dns_entries_by_name":[ + "role:compute-user" + ], + "network:create_private_dns_domain":[ + "role:compute-user" + ], + "network:create_public_dns_domain":[ + "role:compute-user" + ], + "network:delete_dns_domain":[ + "role:compute-user" + ] +} \ No newline at end of file diff --git a/doc/common/samples/restrict_roles2.json b/doc/common/samples/restrict_roles2.json new file mode 100644 index 0000000000..e6fcb36c42 --- /dev/null +++ b/doc/common/samples/restrict_roles2.json @@ -0,0 +1,346 @@ +{ + "admin_or_owner":[ + [ + "role:admin" + ], + [ + "project_id:%(project_id)s" + ] + ], + "default":[ + [ + "rule:admin_or_owner" + ] + ], + "compute:create":[ + "role:compute-user" + ], + "compute:create:attach_network":[ + "role:compute-user" + ], + "compute:create:attach_volume":[ + "role:compute-user" + ], + "compute:get_all":[ + "role:compute-user" + ], + "compute:unlock_override":[ + "rule:admin_api" + ], + "admin_api":[ + [ + "role:admin" + ] + ], + "compute_extension:accounts":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:pause":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:unpause":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:suspend":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:resume":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:lock":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:unlock":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:resetNetwork":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:injectNetworkInfo":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:createBackup":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:migrateLive":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:migrate":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:aggregates":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:certificates":[ + "role:compute-user" + ], + "compute_extension:cloudpipe":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:console_output":[ + "role:compute-user" + ], + "compute_extension:consoles":[ + "role:compute-user" + ], + "compute_extension:createserverext":[ + "role:compute-user" + ], + "compute_extension:deferred_delete":[ + "role:compute-user" + ], + "compute_extension:disk_config":[ + "role:compute-user" + ], + "compute_extension:evacuate":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:extended_server_attributes":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:extended_status":[ + "role:compute-user" + ], + "compute_extension:flavorextradata":[ + "role:compute-user" + ], + "compute_extension:flavorextraspecs":[ + "role:compute-user" + ], + "compute_extension:flavormanage":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:floating_ip_dns":[ + "role:compute-user" + ], + "compute_extension:floating_ip_pools":[ + "role:compute-user" + ], + "compute_extension:floating_ips":[ + "role:compute-user" + ], + "compute_extension:hosts":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:keypairs":[ + "role:compute-user" + ], + "compute_extension:multinic":[ + "role:compute-user" + ], + "compute_extension:networks":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:quotas":[ + "role:compute-user" + ], + "compute_extension:rescue":[ + "role:compute-user" + ], + 
"compute_extension:security_groups":[ + "role:compute-user" + ], + "compute_extension:server_action_list":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:server_diagnostics":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:simple_tenant_usage:show":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:simple_tenant_usage:list":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:users":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:virtual_interfaces":[ + "role:compute-user" + ], + "compute_extension:virtual_storage_arrays":[ + "role:compute-user" + ], + "compute_extension:volumes":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:index":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:show":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:create":[ + "role:compute-user" + ], + "compute_extension:volume_attachments:delete":[ + "role:compute-user" + ], + "compute_extension:volumetypes":[ + "role:compute-user" + ], + "volume:create":[ + "role:compute-user" + ], + "volume:get_all":[ + "role:compute-user" + ], + "volume:get_volume_metadata":[ + "role:compute-user" + ], + "volume:get_snapshot":[ + "role:compute-user" + ], + "volume:get_all_snapshots":[ + "role:compute-user" + ], + "network:get_all_networks":[ + "role:compute-user" + ], + "network:get_network":[ + "role:compute-user" + ], + "network:delete_network":[ + "role:compute-user" + ], + "network:disassociate_network":[ + "role:compute-user" + ], + "network:get_vifs_by_instance":[ + "role:compute-user" + ], + "network:allocate_for_instance":[ + "role:compute-user" + ], + "network:deallocate_for_instance":[ + "role:compute-user" + ], + "network:validate_networks":[ + "role:compute-user" + ], + "network:get_instance_uuids_by_ip_filter":[ + "role:compute-user" + ], + "network:get_floating_ip":[ + "role:compute-user" + ], + "network:get_floating_ip_pools":[ + "role:compute-user" + ], + "network:get_floating_ip_by_address":[ + "role:compute-user" + ], + "network:get_floating_ips_by_project":[ + "role:compute-user" + ], + "network:get_floating_ips_by_fixed_address":[ + "role:compute-user" + ], + "network:allocate_floating_ip":[ + "role:compute-user" + ], + "network:deallocate_floating_ip":[ + "role:compute-user" + ], + "network:associate_floating_ip":[ + "role:compute-user" + ], + "network:disassociate_floating_ip":[ + "role:compute-user" + ], + "network:get_fixed_ip":[ + "role:compute-user" + ], + "network:add_fixed_ip_to_instance":[ + "role:compute-user" + ], + "network:remove_fixed_ip_from_instance":[ + "role:compute-user" + ], + "network:add_network_to_project":[ + "role:compute-user" + ], + "network:get_instance_nw_info":[ + "role:compute-user" + ], + "network:get_dns_domains":[ + "role:compute-user" + ], + "network:add_dns_entry":[ + "role:compute-user" + ], + "network:modify_dns_entry":[ + "role:compute-user" + ], + "network:delete_dns_entry":[ + "role:compute-user" + ], + "network:get_dns_entries_by_address":[ + "role:compute-user" + ], + "network:get_dns_entries_by_name":[ + "role:compute-user" + ], + "network:create_private_dns_domain":[ + "role:compute-user" + ], + "network:create_public_dns_domain":[ + "role:compute-user" + ], + "network:delete_dns_domain":[ + "role:compute-user" + ] +} \ No newline at end of file diff --git a/doc/common/samples/server-scheduler-hints.json b/doc/common/samples/server-scheduler-hints.json new file mode 100644 index 0000000000..f0506ad9dd --- /dev/null +++ b/doc/common/samples/server-scheduler-hints.json @@ -0,0 
+1,13 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "different_host":[ + "a0cf03a5-d921-4877-bb5c-86d26cf818e1", + "8c19174f-4220-44f0-824a-cd1eeef10287" + ] + } +} \ No newline at end of file diff --git a/doc/common/samples/server-scheduler-hints2.json b/doc/common/samples/server-scheduler-hints2.json new file mode 100644 index 0000000000..b991f4350a --- /dev/null +++ b/doc/common/samples/server-scheduler-hints2.json @@ -0,0 +1,10 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "query":"[>=,$free_ram_mb,1024]" + } +} \ No newline at end of file diff --git a/doc/common/samples/server-scheduler-hints3.json b/doc/common/samples/server-scheduler-hints3.json new file mode 100644 index 0000000000..5c65981936 --- /dev/null +++ b/doc/common/samples/server-scheduler-hints3.json @@ -0,0 +1,13 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "same_host":[ + "a0cf03a5-d921-4877-bb5c-86d26cf818e1", + "8c19174f-4220-44f0-824a-cd1eeef10287" + ] + } +} \ No newline at end of file diff --git a/doc/common/samples/server-scheduler-hints4.json b/doc/common/samples/server-scheduler-hints4.json new file mode 100644 index 0000000000..28359f5c0d --- /dev/null +++ b/doc/common/samples/server-scheduler-hints4.json @@ -0,0 +1,11 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "build_near_host_ip":"192.168.1.1", + "cidr":"24" + } +} \ No newline at end of file diff --git a/doc/common/samples/token.json b/doc/common/samples/token.json new file mode 100644 index 0000000000..9d633f3031 --- /dev/null +++ b/doc/common/samples/token.json @@ -0,0 +1,13 @@ +{ + "token":{ + "expires":"2013-06-26T16:52:50Z", + "id":"MIIKXAY...", + "issued_at":"2013-06-25T16:52:50.622502", + "tenant":{ + "description":null, + "enabled":true, + "id":"912426c8f4c04fb0a07d2547b0704185", + "name":"demo" + } + } +} \ No newline at end of file diff --git a/doc/common/section_keystone-concepts-user-management.xml b/doc/common/section_keystone-concepts-user-management.xml index eafa68e739..ef44406861 100644 --- a/doc/common/section_keystone-concepts-user-management.xml +++ b/doc/common/section_keystone-concepts-user-management.xml @@ -109,108 +109,5 @@ "volume:create": ["role:compute-user"], To restrict all Compute service requests to require this role, the resulting file would look like: - { - "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]], - "default": [["rule:admin_or_owner"]], - - "compute:create": ["role":"compute-user"], - "compute:create:attach_network": ["role":"compute-user"], - "compute:create:attach_volume": ["role":"compute-user"], - "compute:get_all": ["role":"compute-user"], - "compute:unlock_override": ["rule":"admin_api"], - - "admin_api": [["role:admin"]], - "compute_extension:accounts": [["rule:admin_api"]], - "compute_extension:admin_actions": [["rule:admin_api"]], - "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:lock": [["rule:admin_or_owner"]], - 
"compute_extension:admin_actions:unlock": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]], - "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]], - "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]], - "compute_extension:admin_actions:migrate": [["rule:admin_api"]], - "compute_extension:aggregates": [["rule:admin_api"]], - "compute_extension:certificates": ["role":"compute-user"], - "compute_extension:cloudpipe": [["rule:admin_api"]], - "compute_extension:console_output": ["role":"compute-user"], - "compute_extension:consoles": ["role":"compute-user"], - "compute_extension:createserverext": ["role":"compute-user"], - "compute_extension:deferred_delete": ["role":"compute-user"], - "compute_extension:disk_config": ["role":"compute-user"], - "compute_extension:evacuate": [["rule:admin_api"]], - "compute_extension:extended_server_attributes": [["rule:admin_api"]], - "compute_extension:extended_status": ["role":"compute-user"], - "compute_extension:flavorextradata": ["role":"compute-user"], - "compute_extension:flavorextraspecs": ["role":"compute-user"], - "compute_extension:flavormanage": [["rule:admin_api"]], - "compute_extension:floating_ip_dns": ["role":"compute-user"], - "compute_extension:floating_ip_pools": ["role":"compute-user"], - "compute_extension:floating_ips": ["role":"compute-user"], - "compute_extension:hosts": [["rule:admin_api"]], - "compute_extension:keypairs": ["role":"compute-user"], - "compute_extension:multinic": ["role":"compute-user"], - "compute_extension:networks": [["rule:admin_api"]], - "compute_extension:quotas": ["role":"compute-user"], - "compute_extension:rescue": ["role":"compute-user"], - "compute_extension:security_groups": ["role":"compute-user"], - "compute_extension:server_action_list": [["rule:admin_api"]], - "compute_extension:server_diagnostics": [["rule:admin_api"]], - "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]], - "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]], - "compute_extension:users": [["rule:admin_api"]], - "compute_extension:virtual_interfaces": ["role":"compute-user"], - "compute_extension:virtual_storage_arrays": ["role":"compute-user"], - "compute_extension:volumes": ["role":"compute-user"], - "compute_extension:volume_attachments:index": ["role":"compute-user"], - "compute_extension:volume_attachments:show": ["role":"compute-user"], - "compute_extension:volume_attachments:create": ["role":"compute-user"], - "compute_extension:volume_attachments:delete": ["role":"compute-user"], - "compute_extension:volumetypes": ["role":"compute-user"], - - "volume:create": ["role":"compute-user"], - "volume:get_all": ["role":"compute-user"], - "volume:get_volume_metadata": ["role":"compute-user"], - "volume:get_snapshot": ["role":"compute-user"], - "volume:get_all_snapshots": ["role":"compute-user"], - - "network:get_all_networks": ["role":"compute-user"], - "network:get_network": ["role":"compute-user"], - "network:delete_network": ["role":"compute-user"], - "network:disassociate_network": ["role":"compute-user"], - "network:get_vifs_by_instance": ["role":"compute-user"], - "network:allocate_for_instance": ["role":"compute-user"], - "network:deallocate_for_instance": ["role":"compute-user"], - "network:validate_networks": ["role":"compute-user"], - "network:get_instance_uuids_by_ip_filter": ["role":"compute-user"], - - "network:get_floating_ip": 
["role":"compute-user"], - "network:get_floating_ip_pools": ["role":"compute-user"], - "network:get_floating_ip_by_address": ["role":"compute-user"], - "network:get_floating_ips_by_project": ["role":"compute-user"], - "network:get_floating_ips_by_fixed_address": ["role":"compute-user"], - "network:allocate_floating_ip": ["role":"compute-user"], - "network:deallocate_floating_ip": ["role":"compute-user"], - "network:associate_floating_ip": ["role":"compute-user"], - "network:disassociate_floating_ip": ["role":"compute-user"], - - "network:get_fixed_ip": ["role":"compute-user"], - "network:add_fixed_ip_to_instance": ["role":"compute-user"], - "network:remove_fixed_ip_from_instance": ["role":"compute-user"], - "network:add_network_to_project": ["role":"compute-user"], - "network:get_instance_nw_info": ["role":"compute-user"], - - "network:get_dns_domains": ["role":"compute-user"], - "network:add_dns_entry": ["role":"compute-user"], - "network:modify_dns_entry": ["role":"compute-user"], - "network:delete_dns_entry": ["role":"compute-user"], - "network:get_dns_entries_by_address": ["role":"compute-user"], - "network:get_dns_entries_by_name": ["role":"compute-user"], - "network:create_private_dns_domain": ["role":"compute-user"], - "network:create_public_dns_domain": ["role":"compute-user"], - "network:delete_dns_domain": ["role":"compute-user"] -} + diff --git a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml index 1d5658f052..e0be6d45cb 100644 --- a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml +++ b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml @@ -2,6 +2,7 @@ xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> + EMC SMI-S iSCSI driver The EMC SMI-S iSCSI driver, which is based on the iSCSI driver, can create, delete, attach, and detach volumes. It can @@ -12,8 +13,8 @@ HTTP. The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to - perform CIM operations over HTTP by using SMI-S in the back-end for - EMC storage operations. + perform CIM operations over HTTP by using SMI-S in the + back-end for EMC storage operations. The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports VMAX and VNX storage systems. @@ -29,8 +30,7 @@
Supported operations - VMAX and - VNX arrays support these operations: + VMAX and VNX arrays support these operations: Create volume @@ -73,9 +73,9 @@ To set up the EMC SMI-S iSCSI driver - Install the python-pywbem package for your - distribution. See . + Install the python-pywbem + package for your distribution. See . Download SMI-S from PowerLink and install it. @@ -93,11 +93,12 @@
- Install the <package>python-pywbem</package> package + Install the <package>python-pywbem</package> + package - Install the python-pywbem package for your - distribution: + Install the python-pywbem + package for your distribution: On Ubuntu: @@ -119,14 +120,16 @@ Set up SMI-S You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of - Windows, Red Hat, and SUSE Linux. The host can be either a - physical server or VM hosted by an ESX server. See - the EMC SMI-S Provider release notes for supported - platforms and installation instructions. + Windows, Red Hat, and SUSE Linux. The host can be + either a physical server or VM hosted by an ESX + server. See the EMC SMI-S Provider release notes for + supported platforms and installation + instructions. You must discover storage arrays on the SMI-S - server before you can use the Cinder driver. Follow - instructions in the SMI-S release notes. + server before you can use the Cinder driver. + Follow instructions in the SMI-S release + notes. SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on @@ -146,29 +149,33 @@ Register with VNX To export a VNX volume to a Compute node, you must register the node with VNX. - On the Compute node 1.1.1.1, run these commands (assume 10.10.61.35 + On the Compute node 1.1.1.1, run + these commands (assume 10.10.61.35 is the iscsi target): $ sudo /etc/init.d/open-iscsi start $ sudo iscsiadm -m discovery -t st -p 10.10.61.35 $ cd /etc/iscsi $ sudo more initiatorname.iscsi $ iscsiadm -m node - Log in to VNX from the Compute node by using the target - corresponding to the SPA port: + Log in to VNX from the Compute node by using the + target corresponding to the SPA port: $ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l - Assume + Assume that iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the Compute node. Log in to Unisphere, go to VNX00000->Hosts->Initiators, - refresh and wait until initiator + refresh, and wait until initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears. - Click Register, select CLARiiON/VNX, - and enter the myhost1 host name and myhost1 - IP address. Click Register. - Now the 1.1.1.1 host appears under - Hosts Host List as well. + Click Register, select + CLARiiON/VNX, and enter the + myhost1 host name and + myhost1 IP address. Click + Register. Now the + 1.1.1.1 host appears under + Hosts + Host List as well. Log out of VNX on the Compute node: $ sudo iscsiadm -m node -u Log in to VNX from the Compute node using the target @@ -184,7 +191,7 @@ For VMAX, you must set up the Unisphere for VMAX server. On the Unisphere for VMAX server, create initiator group, storage group, and port group and put - them in a masking view. Initiator group contains the + them in a masking view. initiator group contains the initiator names of the OpenStack hosts. Storage group must have at least six gatekeepers.
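Both the VNX registration steps and the VMAX initiator group rely on the node's iSCSI initiator name, which the steps above read from /etc/iscsi/initiatorname.iscsi. A small helper to extract it; the InitiatorName= line format is the common convention and is assumed here:

# Extract the node's iSCSI initiator name, used both when
# registering with VNX and when populating the VMAX initiator group.
# Assumes the conventional "InitiatorName=iqn..." line format.
def initiator_name(path="/etc/iscsi/initiatorname.iscsi"):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("InitiatorName="):
                return line.split("=", 1)[1]
    return None

print(initiator_name())  # e.g. iqn.1993-08.org.debian:01:1a2b3c4d5f6g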
@@ -219,37 +226,23 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml change. For VMAX, add the following lines to the XML file: - <?xml version='1.0' encoding='UTF-8'?> -<EMC> -<StorageType>xxxx</StorageType> -<MaskingView>xxxx</MaskingView> -<EcomServerIp>x.x.x.x</EcomServerIp> -<EcomServerPort>xxxx</EcomServerPort> -<EcomUserName>xxxxxxxx</EcomUserName> -<EcomPassword>xxxxxxxx</EcomPassword> -</EMC< + For VNX, add the following lines to the XML file: - <?xml version='1.0' encoding='UTF-8'?> -<EMC> -<StorageType>xxxx</StorageType> -<EcomServerIp>x.x.x.x</EcomServerIp> -<EcomServerPort>xxxx</EcomServerPort> -<EcomUserName>xxxxxxxx</EcomUserName> -<EcomPassword>xxxxxxxx</EcomPassword> -</EMC< + To attach VMAX volumes to an OpenStack VM, you must - create a Masking View by using Unisphere for VMAX. The - Masking View must have an Initiator Group that + create a masking view by using Unisphere for VMAX. The + masking view must have an initiator group that contains the initiator of the OpenStack compute node that hosts the VM. - StorageType is the thin pool where the user wants to - create the volume from. Only thin LUNs are supported - by the plug-in. Thin pools can be created using - Unisphere for VMAX and VNX. - EcomServerIp and EcomServerPort are the IP address - and port number of the ECOM server which is packaged - with SMI-S. EcomUserName and EcomPassword are + StorageType is the thin pool + where the user wants to create the volume from. Only + thin LUNs are supported by the plug-in. Thin pools can + be created using Unisphere for VMAX and VNX. + EcomServerIp and + EcomServerPort are the IP + address and port number of the ECOM server which is + packaged with SMI-S. EcomUserName and EcomPassword are credentials for the ECOM server.
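Because the driver reads these settings at startup, it helps to validate the XML before restarting cinder-volume. A hedged sketch using the element names described above; the required set shown is for VNX, with MaskingView additionally required for VMAX:

import xml.etree.ElementTree as ET

# Sanity-check cinder_emc_config.xml against the element names
# described above. The required set here is an assumption.
REQUIRED = ["StorageType", "EcomServerIp", "EcomServerPort",
            "EcomUserName", "EcomPassword"]

root = ET.parse("/etc/cinder/cinder_emc_config.xml").getroot()
missing = [tag for tag in REQUIRED if root.find(tag) is None]
if missing:
    raise SystemExit("Missing elements: %s" % ", ".join(missing))
print("ECOM server: %s:%s" % (root.findtext("EcomServerIp"),
                              root.findtext("EcomServerPort")))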
diff --git a/doc/config-reference/block-storage/drivers/samples/emc-vmax.xml b/doc/config-reference/block-storage/drivers/samples/emc-vmax.xml new file mode 100644 index 0000000000..064bd70742 --- /dev/null +++ b/doc/config-reference/block-storage/drivers/samples/emc-vmax.xml @@ -0,0 +1,9 @@
+<?xml version='1.0' encoding='UTF-8'?>
+<EMC>
+<StorageType>xxxx</StorageType>
+<MaskingView>xxxx</MaskingView>
+<EcomServerIp>x.x.x.x</EcomServerIp>
+<EcomServerPort>xxxx</EcomServerPort>
+<EcomUserName>xxxxxxxx</EcomUserName>
+<EcomPassword>xxxxxxxx</EcomPassword>
+</EMC>
diff --git a/doc/config-reference/block-storage/drivers/samples/emc-vnx.xml b/doc/config-reference/block-storage/drivers/samples/emc-vnx.xml new file mode 100644 index 0000000000..04be95ba9e --- /dev/null +++ b/doc/config-reference/block-storage/drivers/samples/emc-vnx.xml @@ -0,0 +1,8 @@
+<?xml version='1.0' encoding='UTF-8'?>
+<EMC>
+<StorageType>xxxx</StorageType>
+<EcomServerIp>x.x.x.x</EcomServerIp>
+<EcomServerPort>xxxx</EcomServerPort>
+<EcomUserName>xxxxxxxx</EcomUserName>
+<EcomPassword>xxxxxxxx</EcomPassword>
+</EMC>
diff --git a/doc/config-reference/compute/section_compute-scheduler.xml b/doc/config-reference/compute/section_compute-scheduler.xml index 1caeb6be39..79f4eaa2c7 100644 --- a/doc/config-reference/compute/section_compute-scheduler.xml +++ b/doc/config-reference/compute/section_compute-scheduler.xml @@ -5,17 +5,22 @@ xmlns:ns5="http://www.w3.org/1999/xhtml" xmlns:ns4="http://www.w3.org/2000/svg" xmlns:ns3="http://www.w3.org/1998/Math/MathML" - xmlns:ns="http://docbook.org/ns/docbook" - version="5.0"> + xmlns:ns="http://docbook.org/ns/docbook" version="5.0"> + Scheduling - Compute uses the nova-scheduler service to - determine how to dispatch compute requests. For example, the nova-scheduler service determines which host a VM should - launch on. The term host in the context of filters means a physical node that has a - nova-compute service running on it. - You can configure the scheduler through a variety of options. - Compute is configured with the following default scheduler options: - scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler + Compute uses the nova-scheduler service to determine how to + dispatch compute and volume requests. For example, the + nova-scheduler + service determines which host a VM should launch on. The term + host in the context of filters + means a physical node that has a nova-compute service running on it. You can + configure the scheduler through a variety of options. + Compute is configured with the following default scheduler + options: + scheduler_driver=nova.scheduler.multi.MultiScheduler +compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler scheduler_available_filters=nova.scheduler.filters.all_filters scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter By default, the scheduler_driver is configured as a filter @@ -40,16 +45,22 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi (ComputeFilter). - Satisfy the extra specs associated with the instance type - (ComputeCapabilitiesFilter). + Satisfy the extra specs associated with the instance + type + (ComputeCapabilitiesFilter). - Satisfy any architecture, hypervisor type, or virtual - machine mode properties specified on the instance's image - properties. + Satisfy any architecture, hypervisor type, or + virtual machine mode properties specified on the + instance's image properties + (ImagePropertiesFilter). + For information about the volume scheduler, see the Block + Storage section of the + OpenStack Cloud Administrator + Guide.
Filter scheduler The Filter Scheduler @@ -80,7 +91,7 @@ scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFi - + The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters @@ -100,9 +111,10 @@ scheduler_available_filters=myfilter.MyFilter The scheduler_default_filters configuration option in nova.conf defines the list of filters that are applied by the - nova-scheduler service. As - mentioned, the default filters are: - scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter + nova-scheduler service. As mentioned, + the default filters are: + scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter The following sections describe the available filters.
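To make the myfilter.MyFilter example above concrete, here is a minimal sketch of a custom filter. The module and class names are hypothetical; it assumes the nova.scheduler.filters.BaseHostFilter interface used by the filters this section describes, so treat it as a sketch to adapt rather than a definitive implementation:
# myfilter.py -- a hypothetical custom filter; load it with
#   scheduler_available_filters=myfilter.MyFilter
# and append MyFilter to scheduler_default_filters.
from nova.scheduler import filters


class MyFilter(filters.BaseHostFilter):
    """Pass only hosts that report some free RAM."""

    def host_passes(self, host_state, filter_properties):
        # host_state is the scheduler's view of one candidate host;
        # returning False removes that host from consideration.
        return host_state.free_ram_mb > 0
Restart the nova-scheduler service after changing these options so that the new filter is loaded.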
@@ -117,11 +129,12 @@ scheduler_available_filters=myfilter.MyFilter AggregateInstanceExtraSpecsFilter Matches properties defined in an instance type's extra specs against admin-defined properties on a host - aggregate. Works with specifications that are unscoped, - or are scoped with aggregate_instance_extra_specs. - See the host aggregates section for documentation - on how to use this filter. + aggregate. Works with specifications that are + unscoped, or are scoped with + aggregate_instance_extra_specs. + See the host + aggregates section for documentation on how + to use this filter.
AggregateMultiTenancyIsolation @@ -175,7 +188,7 @@ scheduler_available_filters=myfilter.MyFilter Passes all hosts that are operational and enabled. In general, this filter should always be enabled. - +
CoreFilter @@ -214,18 +227,7 @@ scheduler_available_filters=myfilter.MyFilter With the API, use the os:scheduler_hints key. For example: - - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1', - '8c19174f-4220-44f0-824a-cd1eeef10287'], - } - +
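The same hint can be sent from Python. The following sketch uses python-novaclient; the credentials, endpoint, and UUIDs are placeholders, and the scheduler_hints argument is what the client serializes into the os:scheduler_hints key:
# Hypothetical credentials and endpoint; substitute your own.
from novaclient.v1_1 import client

nova = client.Client('demo', 'secret', 'demo-tenant',
                     'http://keystone.example.com:5000/v2.0/')

# Keep the new instance off the hosts of the two listed instances.
nova.servers.create(
    name='server-1',
    image='cedef40a-ed67-4d10-800e-17455edce175',
    flavor='1',
    scheduler_hints={'different_host':
                     ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
                      '8c19174f-4220-44f0-824a-cd1eeef10287']})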
DiskFilter @@ -238,8 +240,9 @@ scheduler_available_filters=myfilter.MyFilter Configuration option in nova.conf. The default setting is: - disk_allocation_ratio=1.0 - Adjusting this value to greater than 1.0 enables scheduling instances while over committing disk + disk_allocation_ratio=1.0 + Adjusting this value to greater than 1.0 enables + scheduling instances while overcommitting disk resources on the node. This might be desirable if you use an image format that is sparse or copy on write such that each virtual instance does not require a 1:1 @@ -248,11 +251,11 @@ scheduler_available_filters=myfilter.MyFilter
GroupAffinityFilter The GroupAffinityFilter ensures that an instance is - scheduled on to a host from a set of group hosts. - To take advantage of this filter, the requester must pass a - scheduler hint, using group as the - key and an arbitrary name as the value. Using - the nova command-line tool, use the + scheduled onto a host from a set of group hosts. To + take advantage of this filter, the requester must pass + a scheduler hint, using group as + the key and an arbitrary name as the value. Using the + nova command-line tool, use the --hint flag. For example: $ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \ @@ -264,8 +267,8 @@ instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using group as the - key and an arbitrary name as the value. Using - the nova command-line tool, use the + key and an arbitrary name as the value. Using the + nova command-line tool, use the --hint flag. For example: $ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \ @@ -314,8 +317,10 @@ of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated - images. The flag restrict_isolated_hosts_to_isolated_images - can be used to force isolated hosts to only run isolated images. + images. The flag + restrict_isolated_hosts_to_isolated_images + can be used to force isolated hosts to only run + isolated images. The admin must specify the isolated set of images and hosts in the nova.conf file using the isolated_hosts and @@ -323,7 +328,7 @@ options. For example: isolated_hosts=server1,server2 isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09 - +
JsonFilter @@ -380,18 +385,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1 --flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1 With the API, use the os:scheduler_hints key: - - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'query': '[">=","$free_ram_mb",1024]', - } -} - +
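Because the query hint is itself a JSON document, it is easiest to build the expression as a list and serialize it when calling the API from Python. A sketch, reusing the hypothetical nova client object from the DifferentHostFilter example above:
import json

# '[">=", "$free_ram_mb", 1024]': hosts with at least 1 GB of free RAM.
query = json.dumps(['>=', '$free_ram_mb', 1024])
nova.servers.create(name='server-1',
                    image='cedef40a-ed67-4d10-800e-17455edce175',
                    flavor='1',
                    scheduler_hints={'query': query})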
RamFilter @@ -416,8 +410,9 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1 Filter out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host - fails to respond to the request, this filter prevents the scheduler from retrying that host for the - service request. + fails to respond to the request, this filter prevents + the scheduler from retrying that host for the service + request. This filter is only useful if the scheduler_max_attempts configuration option is set to a value greater than @@ -439,19 +434,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 With the API, use the os:scheduler_hints key: - - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'same_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1', - '8c19174f-4220-44f0-824a-cd1eeef10287'], - } -} - +
SimpleCIDRAffinityFilter @@ -485,18 +468,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1 With the API, use the os:scheduler_hints key: - { - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'build_near_host_ip': '192.168.1.1', - 'cidr': '24' - } -} +
@@ -519,13 +491,15 @@ ram_weight_multiplier=1.0
Chance scheduler - As an administrator, you work with the - Filter Scheduler. However, the Compute service also uses - the Chance Scheduler, + As an administrator, you work with the Filter Scheduler. + However, the Compute service also uses the Chance + Scheduler, nova.scheduler.chance.ChanceScheduler, - which randomly selects from lists of filtered hosts. + which randomly selects from lists of filtered + hosts.
- +
Configuration reference diff --git a/doc/security-guide/ch024_authentication.xml b/doc/security-guide/ch024_authentication.xml index b227d74641..7310bdf76b 100644 --- a/doc/security-guide/ch024_authentication.xml +++ b/doc/security-guide/ch024_authentication.xml @@ -1,162 +1,306 @@ - - Identity - The OpenStack Identity Service (Keystone) supports multiple methods of authentication, including username & password, LDAP, and external authentication methods.  Upon successful authentication, The Identity Service provides the user with an authorization token used for subsequent service requests. - Transport Layer Security TLS/SSL provides authentication between services and persons using X.509 certificates.  Although the default mode for SSL is server-side only authentication, certificates may also be used for client authentication. -
- Authentication -
- Invalid Login Attempts - The Identity Service does not provide a method to limit access to accounts after repeated unsuccessful login attempts. Repeated failed login attempts are likely brute-force attacks (Refer figure Attack-types). This is a more significant issue in Public clouds. - Prevention is possible by using an external authentication system that blocks out an account after some configured number of failed login attempts. The account then may only be unlocked with further side-channel intervention. - If prevention is not an option, detection can be used to mitigate damage.Detection involves frequent review of access control logs to identify unauthorized attempts to access accounts. Possible remediation would include reviewing the strength of the user password, or blocking the network source of the attack via firewall rules. Firewall rules on the keystone server that restrict the number of connections could be used to reduce the attack effectiveness, and thus dissuade the attacker. - In addition, it is useful to examine account activity for unusual login times and suspicious actions, with possibly disable the account. Often times this approach is taken by credit card providers for fraud detection and alert. -
-
- Multi-factor Authentication - Employ multi-factor authentication for network access to privileged user accounts. The Identity Service supports external authentication services through the Apache web server that can provide this functionality. Servers may also enforce client-side authentication using certificates. - This recommendation provides insulation from brute force, social engineering, and both spear and mass phishing attacks that may compromise administrator passwords. -
+ + + Identity + The OpenStack Identity Service (Keystone) supports multiple + methods of authentication, including username & password, + LDAP, and external authentication methods. Upon successful + authentication, the Identity Service provides the user with an + authorization token used for subsequent service requests. + Transport Layer Security (TLS/SSL) provides authentication + between services and persons using X.509 certificates. Although + the default mode for SSL is server-side-only authentication, + certificates may also be used for client authentication. +
+ Authentication +
+ Invalid Login Attempts + The Identity Service does not provide a method to limit + access to accounts after repeated unsuccessful login attempts. + Repeated failed login attempts are likely brute-force attacks + (refer to the Attack-types figure). This is a more significant issue + in public clouds. + Prevention is possible by using an external authentication + system that blocks out an account after some configured number + of failed login attempts. The account then may only be + unlocked with further side-channel intervention. + If prevention is not an option, detection can be used to + mitigate damage. Detection involves frequent review of access + control logs to identify unauthorized attempts to access + accounts. Possible remediation would include reviewing the + strength of the user password, or blocking the network source + of the attack via firewall rules. Firewall rules on the + keystone server that restrict the number of connections could + be used to reduce the attack effectiveness, and thus dissuade + the attacker. + In addition, it is useful to examine account activity for + unusual login times and suspicious actions, and, if warranted, + disable the account. Credit card providers often take this + approach for fraud detection and alerting.
-
- Authentication Methods -
- Internally Implemented Authentication Methods - The Identity Service can store user credentials in an SQL Database, or may use an LDAP-compliant directory server. The Identity database may be separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials. - When authentication is provided via username and password, the Identity Service does not enforce policies on password strength, expiration, or failed authentication attempts as recommended by NIST Special Publication 800-118 (draft). Organizations that desire to enforce stronger password policies should consider using Keystone Identity Service Extensions or external authentication services. - LDAP simplifies integration of Identity authentication into an organization's existing directory service and user account management processes. - Authentication and authorization policy in OpenStack may be delegated to an external LDAP server. A typical use case is an organization that seeks to deploy a private cloud and already has a database of employees, the users. This may be in an LDAP system. Using LDAP as a source of authority authentication, requests to Identity Service are delegated to the LDAP service, which will authorize or deny requests based on locally set policies. A token is generated on successful authentication. - Note that if the LDAP system has attributes defined for the user such as admin, finance, HR etc, these must be mapped into roles and groups within Identity for use by the various OpenStack services. The etc/keystone.conf file provides the mapping from the LDAP attributes to Identity attributes. - The Identity Service MUST NOT be allowed to write to LDAP services used for authentication outside of the OpenStack deployment as this would allow a sufficiently privileged keystone user to make changes to the LDAP directory. This would allow privilege escalation within the wider organization or facilitate unauthorized access to other information and resources. In such a deployment, user provisioning would be out of the realm of the OpenStack deployment. - - There is an OpenStack Security Note (OSSN) regarding keystone.conf permissions. - There is an OpenStack Security Note (OSSN) regarding potential DoS attacks. - -
-
- External Authentication Methods - Organizations may desire to implement external authentication for compatibility with existing authentication services or to enforce stronger authentication policy requirements. Although passwords are the most common form of authentication, they can be compromised through numerous methods, including keystroke logging and password compromise. External authentication services can provide alternative forms of authentication that minimize the risk from weak passwords. - These include: - - Password Policy Enforcement: Requires user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts. - - - Multi-factor authentication: The authentication +
+ Multi-factor Authentication + Employ multi-factor authentication for network access to + privileged user accounts. The Identity Service supports + external authentication services through the Apache web server + that can provide this functionality. Servers may also enforce + client-side authentication using certificates. + This recommendation provides insulation from brute force, + social engineering, and both spear and mass phishing attacks + that may compromise administrator passwords. +
+
+
+ Authentication Methods +
+ Internally Implemented Authentication Methods + The Identity Service can store user credentials in an SQL + database, or may use an LDAP-compliant directory server. The + Identity database may be separate from databases used by other + OpenStack services to reduce the risk of a compromise of the + stored credentials. + When authentication is provided via username and password, + the Identity Service does not enforce policies on password + strength, expiration, or failed authentication attempts as + recommended by NIST Special Publication 800-118 (draft). + Organizations that desire to enforce stronger password + policies should consider using Keystone Identity Service + Extensions or external authentication services. + LDAP simplifies integration of Identity authentication + into an organization's existing directory service and user + account management processes. + Authentication and authorization policy in OpenStack may + be delegated to an external LDAP server. A typical use case is + an organization that seeks to deploy a private cloud and + already has a database of employees, who become the users; this + database may reside in an LDAP system. When LDAP is used as the + source of authority, authentication requests to the Identity + Service are delegated to the LDAP service, which authorizes or + denies requests based on locally set policies. A token is + generated on successful authentication. + Note that if the LDAP system has attributes defined for + the user, such as admin, finance, and HR, these must be mapped + into roles and groups within Identity for use by the various + OpenStack services. The etc/keystone.conf + file provides the mapping from the LDAP attributes to Identity + attributes. + The Identity Service MUST + NOT be allowed to write to LDAP services used for + authentication outside of the OpenStack deployment, as this + would allow a sufficiently privileged keystone user to make + changes to the LDAP directory. This would allow privilege + escalation within the wider organization or facilitate + unauthorized access to other information and resources. In + such a deployment, user provisioning would be out of the realm + of the OpenStack deployment (see the configuration sketch + after this section). + + There is an OpenStack Security Note (OSSN) regarding keystone.conf + permissions. + There is an OpenStack Security Note (OSSN) regarding potential DoS + attacks. +
+
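To make the read-only LDAP point concrete, the following keystone.conf sketch delegates user lookups to a directory while refusing writes. The option names come from keystone's [ldap] section; the URL, DNs, and password are illustrative, and the allow_* flags should be verified against your release:
[ldap]
url = ldap://ldap.example.com
user = cn=keystone,ou=service,dc=example,dc=com
password = secret
suffix = dc=example,dc=com
user_tree_dn = ou=people,dc=example,dc=com
# Keep the Identity Service read-only against the corporate directory:
user_allow_create = False
user_allow_update = False
user_allow_delete = False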
+ External Authentication Methods + Organizations may desire to implement external + authentication for compatibility with existing authentication + services or to enforce stronger authentication policy + requirements. Although passwords are the most common form of + authentication, they can be compromised through numerous + methods, including keystroke logging and password compromise. + External authentication services can provide alternative forms + of authentication that minimize the risk from weak + passwords. + These include: + + + Password Policy Enforcement: Requires user passwords + to conform to minimum standards for length, diversity of + characters, expiration, or failed login attempts. + + + Multi-factor authentication: The authentication service requires the user to provide information based on something they have, such as a one-time password token or X.509 certificate, and something they know, such as a password. - - - Kerberos - - -
+ + + Kerberos + +
-
- Authorization - The Identity Service supports the notion of groups and roles. Users belong to groups. A group has a list of roles. OpenStack services reference the roles of the user attempting to access the service. The OpenStack policy enforcer middleware takes into consideration the policy rule associated with each resource and the user's group/roles and tenant association to determine if he/she has access to the requested resource. - The Policy enforcement middleware enables fine-grained access control to OpenStack resources. Only admin users can provision new users and have access to various management functionality. The cloud tenant would be able to only spin up instances, attach volumes, etc. -
- Establish Formal Access Control Policies - Prior to configuring roles, groups, and users, document your required access control policies for the OpenStack installation. The policies should be consistent with any regulatory or legal requirements for the organization. Future modifications to access control configuration should be done consistently with the formal policies. The policies should include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. Periodically review the policies and ensure that configuration is in compliance with approved policies. -
-
- Service Authorization - As described in the OpenStack Cloud Administrator Guide, cloud administrators must define a user for each service, with a role of Admin. This service user account provides the service with the authorization to authenticate users. - The Compute and Object Storage services can be configured to use either the "tempAuth" file or Identity Service to store authentication information. The "tempAuth" solution MUST NOT be deployed in a production environment since it stores passwords in plain text. - The Identity Service supports client authentication for +
+
+ Authorization + The Identity Service supports the notion of groups and + roles. Users belong to groups. A group has a list of roles. + OpenStack services reference the roles of the user attempting to + access the service. The OpenStack policy enforcer middleware + takes into consideration the policy rule associated with each + resource and the user's group/roles and tenant association to + determine whether the user has access to the requested resource. + The policy enforcement middleware enables fine-grained + access control to OpenStack resources. Only admin users can + provision new users and have access to various management + functionality. A cloud tenant can perform only tenant-scoped + operations, such as launching instances and attaching + volumes.
+ Establish Formal Access Control Policies + Prior to configuring roles, groups, and users, document + your required access control policies for the OpenStack + installation. The policies should be consistent with any + regulatory or legal requirements for the organization. Future + modifications to the access control configuration should be + made in line with the formal policies. The policies should + include the conditions and processes for creating, deleting, + disabling, and enabling accounts, and for assigning privileges + to the accounts. Periodically review the policies and ensure + that the configuration is in compliance with the approved + policies.
+
+ Service Authorization + As described in the OpenStack Cloud Administrator + Guide, cloud administrators must define + a user for each service, with a role of Admin. This service + user account provides the service with the authorization to + authenticate users. + The Compute and Object Storage services can be configured + to use either the "tempAuth" file or Identity Service to store + authentication information. The "tempAuth" solution MUST NOT + be deployed in a production environment since it stores + passwords in plain text. + The Identity Service supports client authentication for SSL, which may be enabled. SSL client authentication provides an additional authentication factor, in addition to the username / password, that provides greater reliability on user identification. It reduces the risk of unauthorized access - when user names and passwords may be compromised.  However, + when user names and passwords may be compromised. However, there is additional administrative overhead and cost to issue certificates to users that may not be feasible in every deployment. - - We recommend that you use client authentication with SSL for the authentication of services to the Identity Service. - - The cloud administrator should protect sensitive configuration files for unauthorized modification. This can be achieved with mandatory access control frameworks such as SELinux, including /etc/keystone.conf and X.509 certificates. + + We recommend that you use client authentication with SSL + for the authentication of services to the Identity + Service. + + The cloud administrator should protect sensitive + configuration files, including /etc/keystone.conf and + X.509 certificates, from unauthorized modification. This can + be achieved with mandatory access control frameworks such as + SELinux. - For client authentication with SSL, you need to issue + For client authentication with SSL, you need to issue certificates. These certificates can be signed by an external authority or by the cloud administrator. OpenStack services by default check the signatures of certificates and connections fail if the signature cannot be checked. If the administrator uses self-signed certificates, the check might need to be - disabled. To disable these certificates, set - insecure=False in the - [filter:authtoken] section in the - /etc/nova/api.paste.ini file. This + disabled. To disable this check, set + insecure=True in the + [filter:authtoken] section of the + /etc/nova/api.paste.ini file. This setting also disables certificate checking for other components.
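For orientation, a hedged api.paste.ini fragment follows. The filter factory path matches the keystoneclient auth_token middleware of this era, and the host name and credentials are placeholders; verify the option names against your installed packages before relying on them:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = keystone.example.com
auth_port = 35357
auth_protocol = https
admin_tenant_name = service
admin_user = nova
admin_password = secret
# Only for self-signed certificates; this skips certificate verification:
insecure = True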
-
- Administrative Users - We recommend that admin users authenticate using - Identity Service and an external authentication service that - supports 2-factor authentication, such as a certificate.  This - reduces the risk from passwords that may be compromised. This +
+
+ Administrative Users + We recommend that admin users authenticate using Identity + Service and an external authentication service that supports + 2-factor authentication, such as a certificate. This reduces + the risk from passwords that may be compromised. This + recommendation is in compliance with NIST 800-53 IA-2(1) + guidance on the use of multi-factor authentication for network + access to privileged accounts.
-
- End Users - The Identity Service can directly provide end-user authentication, or can be configured to use external authentication methods to conform to an organization's security policies and requirements. -
-
- Policies - Each OpenStack service has a policy file in json format, called policy.json. The policy file specifies rules, and the rule that governs each resource. A resource could be API access, the ability to attach to a volume, or to fire up instances. - The policies can be updated by the cloud administrator to further control access to the various resources. The middleware could also be further customized. Note that your users must be assigned to groups/roles that you refer to in your policies. - Below is a snippet of the Block Storage service policy.json file. - -{ - "context_is_admin": [["role:admin"]], - "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]], - "default": [["rule:admin_or_owner"]], - - "admin_api": [["is_admin:True"]], - - "volume:create": [], - "volume:get_all": [], - "volume:get_volume_metadata": [], - "volume:get_snapshot": [], - "volume:get_all_snapshots": [], - - "volume_extension:types_manage": [["rule:admin_api"]], - "volume_extension:types_extra_specs": [["rule:admin_api"]], - ... -} - Note the default rule specifies that the user must be either an admin or the owner of the volume. It essentially says only the owner of a volume or the admin may create/delete/update volumes. Certain other operations such as managing volume types are accessible only to admin users. +
+ End Users + The Identity Service can directly provide end-user + authentication, or can be configured to use external + authentication methods to conform to an organization's + security policies and requirements.
-
- Tokens - Once a user is authenticated, a token is generated and used internally in OpenStack for authorization and access. The default token lifespan is 24 hours. It is recommended that this value be set lower but caution needs to be taken as some internal services will need sufficient time to complete their work. The cloud may not provide services if tokens expire too early. An example of this would be the time needed by the Compute Service to transfer a disk image onto the hypervisor for local caching. - The following example shows a PKI token. Note that, in +
+
+ Policies + Each OpenStack service has a policy file in JSON format, + called policy.json. The policy + file specifies rules and the rule that governs each resource. A + resource could be API access, the ability to attach to a volume, + or the ability to launch instances. + The policies can be updated by the cloud administrator to + further control access to the various resources. The middleware + could also be further customized. Note that your users must be + assigned to the groups/roles that you refer to in your + policies. + Below is a snippet of the Block Storage service policy.json + file. + + Note that the default rule + specifies that the user must be either an admin or the owner of + the volume. It essentially says that only the owner of a volume or + the admin may create, delete, or update volumes. Certain other + operations, such as managing volume types, are accessible only to + admin users.
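For reference, an abridged sketch of the Block Storage policy.json that the text describes (rule names follow the service defaults):
{
    "context_is_admin": [["role:admin"]],
    "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
    "default": [["rule:admin_or_owner"]],
    "admin_api": [["is_admin:True"]],
    "volume:create": [],
    "volume_extension:types_manage": [["rule:admin_api"]]
}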
+
+ Tokens + Once a user is authenticated, a token is generated and used + internally in OpenStack for authorization and access. The + default token lifespan + is 24 hours. It is + recommended that this value be set lower, but with caution, + because some internal services need sufficient time to + complete their work. The cloud may not provide services if + tokens expire too early. An example of this would be the time + needed by the Compute Service to transfer a disk image onto the + hypervisor for local caching. + The following example shows a PKI token. Note that, in practice, the token id value is about 3500 bytes. We shorten it in this example. + + Note that the token is often passed within the structure of + a larger context of an Identity Service response. These + responses also provide a catalog of the various OpenStack + services. Each service is listed with its name, access endpoints + for internal, admin, and public access. + The Identity Service supports token revocation. This + manifests as an API to revoke a token and to list revoked tokens; + individual OpenStack services that cache tokens can query for + the revoked tokens, remove them from their cache, and append + them to their list of cached revoked tokens.
-
- Future - Domains are high-level containers for projects, users and groups. As such, they can be used to centrally manage all Keystone-based identity components. With the introduction of account Domains, server, storage and other resources can now be logically grouped into multiple Projects (previously called Tenants) which can themselves be grouped under a master account-like container. In addition, multiple users can be managed within an account Domain and assigned roles that vary for each Project. - Keystone's V3 API supports multiple domains. Users of different domains may be represented in different authentication backends and even have different attributes that must be mapped to a single set of roles and privileges, that are used in the policy definitions to access the various service resources. - Where a rule may specify access to only admin users and users belonging to the tenant, the mapping may be trivial. In other scenarios the cloud administrator may need to approve the mapping routines per tenant. -
- + + Note that the token is often passed within the structure of + a larger context of an Identity Service response. These + responses also provide a catalog of the various OpenStack + services. Each service is listed with its name, access endpoints + for internal, admin, and public access. + The Identity Service supports token revocation. This + manifests as an API to revoke a token, to list revoked tokens + and individual OpenStack services that cache tokens to query for + the revoked tokens and remove them from their cache and append + the same to their list of cached revoked tokens. +
+
+ Future + Domains are high-level containers for projects, users, and + groups. As such, they can be used to centrally manage all + Keystone-based identity components. With the introduction of + account Domains, server, storage, and other resources can now be + logically grouped into multiple Projects (previously called + Tenants), which can themselves be grouped under a master + account-like container. In addition, multiple users can be + managed within an account Domain and assigned roles that vary + for each Project. + Keystone's V3 API supports multiple domains. Users of + different domains may be represented in different authentication + backends and may even have different attributes that must be mapped + to a single set of roles and privileges that are used in the + policy definitions to access the various service + resources. + Where a rule may specify access to only admin users and + users belonging to the tenant, the mapping may be trivial. In + other scenarios, the cloud administrator may need to approve the + mapping routines per tenant.
+