<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="configuring-object-storage-features">
<title>Configure Object Storage features</title>
<section xml:id="swift-zones">
<title>Object Storage zones</title>
<para>In OpenStack Object Storage, data is placed across
different tiers of failure domains. First, data is spread
across regions, then zones, then servers, and finally
across drives. Data is placed to get the highest failure
domain isolation. If you deploy multiple regions, the
Object Storage service places the data across the regions.
Within a region, each replica of the data should be stored
in unique zones, if possible. If there is only one zone,
data should be placed on different servers. And if there
is only one server, data should be placed on different
drives.</para>
<para>Regions are widely separated installations with a
high-latency or otherwise constrained network link between
them. Zones are arbitrarily assigned, and it is up to the
administrator of the Object Storage cluster to choose an
isolation level and attempt to maintain the isolation
level through appropriate zone assignment. For example, a
zone may be defined as a rack with a single power source.
Or a zone may be a DC room with a common utility provider.
Servers are identified by a unique IP/port. Drives are
locally attached storage volumes identified by mount
point.</para>
<para>In small clusters (five nodes or fewer), everything is
normally in a single zone. Larger Object Storage
deployments may assign zone designations differently; for
example, an entire cabinet or rack of servers may be
designated as a single zone to maintain replica
availability if the cabinet becomes unavailable (for
example, due to failure of the top of rack switches or a
dedicated circuit). In very large deployments, such as
service provider level deployments, each zone might have
an entirely autonomous switching and power infrastructure,
so that even the loss of an electrical circuit or
switching aggregator would result in the loss of a single
replica at most.</para>
<section xml:id="swift-zones-rackspacerecs">
<title>Rackspace zone recommendations</title>
<para>For ease of maintenance on OpenStack Object Storage,
Rackspace recommends that you set up at least five
nodes. Each node is assigned its own zone (for a total
of five zones), which gives you host level redundancy.
This enables you to take down a single zone for
maintenance and still guarantee object availability in
the event that another zone fails during your
maintenance.</para>
<para>You could keep each server in its own cabinet to achieve cabinet level isolation,
but you may wish to wait until your Object Storage service is better established
before developing cabinet-level isolation. OpenStack Object Storage is flexible; if
you later decide to change the isolation level, you can take down one zone at a time
and move them to appropriate new homes.</para>
</section>
</section>
<section xml:id="swift-raid-controller">
<title>RAID controller configuration</title>
<para>OpenStack Object Storage does not require RAID. In fact,
most RAID configurations cause significant performance
degradation. The main reason for using a RAID controller
        is the battery-backed cache. For data integrity, it is very
        important that when the operating system confirms that a
        write has been committed, the write has actually reached a
        persistent location. Most disks lie
about hardware commits by default, instead writing to a
faster write cache for performance reasons. In most cases,
that write cache exists only in non-persistent memory. In
the case of a loss of power, this data may never actually
get committed to disk, resulting in discrepancies that the
underlying file system must handle.</para>
<para>
OpenStack Object Storage works best on the XFS file system, and
this document assumes that the hardware being used is configured
        appropriately to be mounted with the <option>nobarrier</option>
option. For more information, see the <link
xlink:href="http://xfs.org/index.php/XFS_FAQ">XFS FAQ</link>.
</para>
<para>To get the most out of your hardware, it is essential
that every disk used in OpenStack Object Storage is
configured as a standalone, individual RAID 0 disk; in the
        case of six disks, you would have six RAID 0s or one JBOD.
Some RAID controllers do not support JBOD or do not
support battery backed cache with JBOD. To ensure the
integrity of your data, you must ensure that the
individual drive caches are disabled and the battery
backed cache in your RAID card is configured and used.
Failure to configure the controller properly in this case
puts data at risk in the case of sudden loss of
power.</para>
<para>You can also use hybrid drives or similar options for
battery backed up cache configurations without a RAID
controller.</para>
</section>
<section xml:id="object-storage-rate-limits">
<?dbhtml stop-chunking?>
<title>Throttle resources through rate limits</title>
<para>Rate limiting in OpenStack Object Storage is implemented
as a pluggable middleware that you configure on the proxy
server. Rate limiting is performed on requests that result
in database writes to the account and container SQLite
databases. It uses memcached and is dependent on the proxy
        servers having highly synchronized time. The accuracy of the
        rate limiting is therefore bounded by the accuracy of the
        proxy server clocks.</para>
<section xml:id="configuration-for-rate-limiting">
<title>Configure rate limiting</title>
<para>All configuration is optional. If no account or
container limits are provided, no rate limiting
occurs. Available configuration options
include:</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-ratelimit.xml"/>
<para>The container rate limits are linearly interpolated
from the values given. A sample container rate
limiting could be:</para>
            <programlisting language="ini">container_ratelimit_100 = 100
container_ratelimit_200 = 50
container_ratelimit_500 = 20</programlisting>
<para>This would result in:</para>
<table rules="all">
<caption>Values for Rate Limiting with Sample
Configuration Settings</caption>
<tbody>
<tr>
<td>Container Size</td>
<td>Rate Limit</td>
</tr>
<tr>
<td>0-99</td>
<td>No limiting</td>
</tr>
<tr>
<td>100</td>
<td>100</td>
</tr>
<tr>
<td>150</td>
<td>75</td>
</tr>
<tr>
<td>500</td>
<td>20</td>
</tr>
<tr>
<td>1000</td>
<td>20</td>
</tr>
</tbody>
</table>
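            <para>The interpolation above can be sketched in Python. This is
                an illustrative helper, not the actual middleware code; the
                function name and the dictionary form of the limits are
                hypothetical:</para>
            <programlisting language="python">def container_ratelimit(size, limits):
    # limits maps container sizes to requests per second,
    # e.g. {100: 100, 200: 50, 500: 20}
    points = sorted(limits.items())
    if points[0][0] > size:
        return None  # below the first threshold: no limiting
    if size >= points[-1][0]:
        return points[-1][1]  # at or past the last threshold
    prev_size, prev_rate = points[0]
    for cur_size, cur_rate in points[1:]:
        if cur_size > size:
            # linear interpolation between the surrounding points
            frac = (size - prev_size) / float(cur_size - prev_size)
            return prev_rate + frac * (cur_rate - prev_rate)
        prev_size, prev_rate = cur_size, cur_rate

container_ratelimit(150, {100: 100, 200: 50, 500: 20})  # 75.0, as in the table</programlisting>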
</section>
</section>
<section xml:id="object-storage-healthcheck">
<title>Health check</title>
        <para>The health check middleware provides an easy way to monitor whether the
            Object Storage proxy server is alive. If you access the proxy with the path
            <filename>/healthcheck</filename>, it responds with
            <literal>OK</literal> in the response body, which monitoring tools can use.</para>
<xi:include
href="../../common/tables/swift-account-server-filter-healthcheck.xml"
/>
</section>
<section xml:id="object-storage-domain-remap">
<title>Domain remap</title>
<para>Middleware that translates container and account parts
of a domain to path parameters that the proxy server
understands.</para>
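        <para>For example, a minimal sketch of the translation, assuming a
            storage domain of <literal>example.com</literal>. The function is
            hypothetical and ignores the reseller-prefix and path-root
            handling of the real middleware:</para>
        <programlisting language="python">def remap(host, path, storage_domain='.example.com'):
    # strip the storage domain, then read 'container.account' or
    # 'account' from what remains of the Host header
    prefix = host[:-len(storage_domain)]
    parts = prefix.split('.')
    if len(parts) == 2:
        container, account = parts
        return '/v1/%s/%s%s' % (account, container, path)
    return '/v1/%s%s' % (parts[0], path)

remap('container.AUTH_account.example.com', '/object')
# '/v1/AUTH_account/container/object'</programlisting>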
<xi:include
href="../../common/tables/swift-proxy-server-filter-domain_remap.xml"
/>
</section>
<section xml:id="object-storage-cname-lookup">
<title>CNAME lookup</title>
<para>Middleware that translates an unknown domain in the host
header to something that ends with the configured
<code>storage_domain</code> by looking up the given domain's CNAME
record in DNS.</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-cname_lookup.xml"
/>
</section>
<section xml:id="object-storage-tempurl">
<?dbhtml stop-chunking?>
<title>Temporary URL</title>
<para>Allows the creation of URLs to provide temporary access to objects. For example, a
website may wish to provide a link to download a large object in OpenStack Object
Storage, but the Object Storage account has no public access. The website can generate a
URL that provides GET access for a limited time to the resource. When the web browser
user clicks on the link, the browser downloads the object directly from Object Storage,
            eliminating the need for the website to act as a proxy for the request. Even if the
            user shares the link widely or accidentally posts it on a forum, the direct
            access is limited to the expiration time set when the website created the link.</para>
<para>A temporary URL is the typical URL associated with an
object, with two additional query parameters:<variablelist>
<varlistentry>
<term><literal>temp_url_sig</literal></term>
<listitem>
<para>A cryptographic signature</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>temp_url_expires</literal></term>
<listitem>
<para>An expiration date, in Unix time</para>
</listitem>
</varlistentry>
</variablelist></para>
        <para>An example of a temporary
            URL:<programlisting>https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&amp;
temp_url_expires=1323479485</programlisting></para>
<para>
To create temporary URLs, first set the
<literal>X-Account-Meta-Temp-URL-Key</literal> header on your
Object Storage account to an arbitrary string. This string serves
as a secret key. For example, to set a key of
<literal>b3968d0207b54ece87cccc06515a89d4</literal> by using the
<command>swift</command> command-line tool:
</para>
<screen><prompt>$</prompt> <userinput>swift post -m "Temp-URL-Key:<replaceable>b3968d0207b54ece87cccc06515a89d4</replaceable>"</userinput></screen>
<para>Next, generate an HMAC-SHA1 (RFC 2104) signature to
specify:</para>
<itemizedlist>
<listitem>
<para>Which HTTP method to allow (typically
<literal>GET</literal> or
<literal>PUT</literal>)</para>
</listitem>
<listitem>
<para>The expiry date as a Unix timestamp</para>
</listitem>
<listitem>
<para>The full path to the object</para>
</listitem>
<listitem>
<para>The secret key set as the
<literal>X-Account-Meta-Temp-URL-Key</literal></para>
</listitem>
</itemizedlist>
<para>Here is code generating the signature for a GET for 24
hours on
<code>/v1/AUTH_account/container/object</code>:</para>
        <programlisting language="python">import hmac
from hashlib import sha1
from time import time

method = 'GET'
duration_in_seconds = 60*60*24
expires = int(time() + duration_in_seconds)
path = '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object'
key = 'mykey'
# the signed message is the method, expiry, and path joined by newlines
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
# path already begins with a slash, so do not add another after the host
s = 'https://{host}{path}?temp_url_sig={sig}&amp;temp_url_expires={expires}'
url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires)</programlisting>
<para>
Any alteration of the resource path or query arguments results in
a <errorcode>401</errorcode> <errortext>Unauthorized</errortext>
error. Similarly, a PUT where GET was the allowed method returns a
<errorcode>401</errorcode> error. HEAD is allowed if GET or PUT is
allowed. Using this in combination with browser form post
translation middleware could also allow direct-from-browser
uploads to specific locations in Object Storage.
</para>
<note>
<para>
Changing the <literal>X-Account-Meta-Temp-URL-Key</literal>
invalidates any previously generated
temporary URLs within 60 seconds, which is the memcache time for
the key. Object Storage supports up to two keys,
specified by <literal>X-Account-Meta-Temp-URL-Key</literal>
and <literal>X-Account-Meta-Temp-URL-Key-2</literal>.
Signatures are checked against both keys,
if present. This process enables key rotation without
invalidating all existing temporary URLs.
</para>
</note>
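        <para>The two-key check described above can be sketched as follows.
            <literal>is_valid</literal> is a hypothetical helper, not the
            middleware's actual function, but it mirrors the documented
            behavior of accepting a signature under either key:</para>
        <programlisting language="python">import hmac
from hashlib import sha1

def is_valid(sig, hmac_body, keys):
    # accept the signature if it matches under any configured key;
    # compare_digest avoids leaking information through timing
    for key in keys:
        expected = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
        if hmac.compare_digest(expected, sig):
            return True
    return False</programlisting>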
<para>
Object Storage includes the <command>swift-temp-url</command>
script that generates the query parameters automatically:
</para>
<screen><prompt>$</prompt> <userinput>bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey</userinput>
<computeroutput>/v1/AUTH_account/container/object?
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&amp;
temp_url_expires=1374497657</computeroutput></screen>
<para>Because this command only returns the path, you must
prefix the Object Storage host name (for example,
<literal>https://swift-cluster.example.com</literal>).</para>
<para>With GET Temporary URLs, a
<literal>Content-Disposition</literal> header is set
on the response so that browsers interpret this as a file
attachment to be saved. The file name chosen is based on
the object name, but you can override this with a
<literal>filename</literal> query parameter. The
following example specifies a filename of <filename>My
Test File.pdf</filename>:</para>
<programlisting>https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&amp;
temp_url_expires=1323479485&amp;
filename=My+Test+File.pdf</programlisting>
<para>If you do not want the object to be downloaded, you can cause
<literal>Content-Disposition: inline</literal>
to be set on the response by adding
the <literal>inline</literal> parameter to the query string, as follows:</para>
<programlisting>https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&amp;
temp_url_expires=1323479485&amp;inline</programlisting>
<para>To enable Temporary URL functionality, edit
<filename>/etc/swift/proxy-server.conf</filename> to
add <literal>tempurl</literal> to the
<literal>pipeline</literal> variable defined in the
<literal>[pipeline:main]</literal> section. The
<literal>tempurl</literal> entry should appear
immediately before the authentication filters in the
pipeline, such as <literal>authtoken</literal>,
<literal>tempauth</literal> or
<literal>keystoneauth</literal>. For
example:<programlisting>[pipeline:main]
pipeline = healthcheck cache <emphasis role="bold">tempurl</emphasis> authtoken keystoneauth proxy-server</programlisting></para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-tempurl.xml"
/>
</section>
<section xml:id="object-storage-name-check">
<title>Name check filter</title>
<para>Name Check is a filter that disallows any paths that
contain defined forbidden characters or that exceed a
defined length.</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-name_check.xml"
/>
</section>
<section xml:id="object-storage-constraints">
<title>Constraints</title>
<para>To change the OpenStack Object Storage internal limits,
update the values in the
<literal>swift-constraints</literal> section in the
<filename>swift.conf</filename> file. Use caution when
you update these values because they affect the
performance in the entire cluster.</para>
<xi:include
href="../../common/tables/swift-swift-swift-constraints.xml"
/>
</section>
<section xml:id="object-storage-dispersion">
<title>Cluster health</title>
<para>
Use the <command>swift-dispersion-report</command> tool to measure
overall cluster health. This tool checks if a set of deliberately
distributed containers and objects are currently in their proper
places within the cluster. For instance, a common deployment has
three replicas of each object. The health of that object can be
        measured by checking if each replica is in its proper place. If
        only two of the three replicas are in place, the object's health
        is 66.66%, where 100% would be perfect. The health of a single
        object, especially an older one, usually reflects the health of
        the entire partition that the object is in. If you create enough
        objects on a distinct percentage of the partitions in the
        cluster, you get a good estimate of the overall cluster health.
</para>
<para>
In practice, about 1% partition coverage seems to balance well
between accuracy and the amount of time it takes to gather
results. To provide this health value, you must create an account
solely for this usage. Next, you must place the containers and
objects throughout the system so that they are on distinct
partitions. Use the <command>swift-dispersion-populate</command>
tool to create random container and object names until they fall
on distinct partitions.
</para>
<para>
Last, and repeatedly for the life of the cluster, you must run the
<command>swift-dispersion-report</command> tool to check the
health of each container and object.
</para>
<para>
These tools must have direct access to the entire cluster and ring
files. Installing them on a proxy server suffices.
</para>
<para>
The <command>swift-dispersion-populate</command> and
<command>swift-dispersion-report</command> commands both use the
same <filename>/etc/swift/dispersion.conf</filename> configuration
file. Example <filename>dispersion.conf</filename> file:
</para>
        <programlisting language="ini">[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing</programlisting>
<para>
You can use configuration options to specify the dispersion
coverage, which defaults to 1%, retries, concurrency, and so on.
However, the defaults are usually fine. After the configuration is
in place, run the <command>swift-dispersion-populate</command>
tool to populate the containers and objects throughout the
cluster. Now that those containers and objects are in place, you
can run the <command>swift-dispersion-report</command> tool to get
a dispersion report or view the overall health of the cluster.
Here is an example of a cluster in perfect health:
</para>
<screen><prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
<computeroutput>Queried 2621 containers for dispersion reporting, 19s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
</computeroutput></screen>
<para>Now, deliberately double the weight of a device in the
object ring (with replication turned off) and re-run the
dispersion report to show what impact that has:</para>
<screen><prompt>$</prompt> <userinput>swift-ring-builder object.builder set_weight d0 200</userinput>
<prompt>$</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput>
...
<prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
<computeroutput>Queried 2621 containers for dispersion reporting, 8s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
There were 1763 partitions missing one copy.
77.56% of object copies found (6094 of 7857)
Sample represents 1.00% of the object partition space
</computeroutput></screen>
<para>
        You can see that the health of the objects in the cluster has
        gone down significantly. Of course, this test environment has
        just four devices; in a production environment with many
        devices, the impact of one device change is much less. Next, run the replicators to
get everything put back into place and then rerun the dispersion
report:
</para>
        <screen>... start object replicators and monitor logs until they're caught up ...
<prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
<computeroutput>Queried 2621 containers for dispersion reporting, 17s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
</computeroutput></screen>
        <para>Alternatively, the dispersion report can be output in JSON
            format, which third-party utilities can consume more
            easily:</para>
<screen><prompt>$</prompt> <userinput>swift-dispersion-report -j</userinput>
<computeroutput>{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container":
{"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected":
12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}</computeroutput></screen>
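        <para>Because the JSON keys mirror the human-readable report, a
            monitoring script can recover the same percentage directly. A
            minimal sketch, where the raw string stands in for the
            command's captured output:</para>
        <programlisting language="python">import json

# stands in for the captured output of swift-dispersion-report -j
raw = '{"object": {"copies_found": 7857, "copies_expected": 7857, "missing_one": 0}}'
report = json.loads(raw)
obj = report['object']
pct_found = 100.0 * obj['copies_found'] / obj['copies_expected']  # 100.0</programlisting>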
<xi:include
href="../../common/tables/swift-dispersion-dispersion.xml"
/>
</section>
<section xml:id="object-storage-slo">
<title>Static Large Object (SLO) support</title>
<para>This feature is very similar to Dynamic Large Object
(DLO) support in that it enables the user to upload many
objects concurrently and afterwards download them as a
single object. It is different in that it does not rely on
eventually consistent container listings to do so.
Instead, a user-defined manifest of the object segments is
used.</para>
<para>For more information regarding SLO usage and support, please
see: <link xlink:href="http://docs.openstack.org/developer/swift/middleware.html#slo-doc">
Static Large Objects</link>.</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-slo.xml"
/>
</section>
<section xml:id="object-storage-container-quotas">
<title>Container quotas</title>
<para>
The <code>container_quotas</code> middleware implements simple
quotas that can be imposed on Object Storage containers by a user
with the ability to set container metadata, most likely the
account administrator. This can be useful for limiting the scope
of containers that are delegated to non-admin users, exposed to
form &POST; uploads, or just as a self-imposed sanity check.
</para>
<para>Any object &PUT; operations that exceed these quotas
return a <literal>Forbidden (403)</literal> status code.</para>
        <para>Quotas are subject to several limitations: eventual
            consistency, the timeliness of the cached container_info
            (60 second TTL by default), and the inability to reject
            chunked transfer uploads that exceed the quota (though
            once the quota is exceeded, new chunked transfers are
            refused).</para>
<para>Set quotas by adding meta values to the container. These
values are validated when you set them:</para>
        <itemizedlist>
            <listitem>
                <para><literal>X-Container-Meta-Quota-Bytes</literal>: Maximum size of
                    the container, in bytes.</para>
            </listitem>
            <listitem>
                <para><literal>X-Container-Meta-Quota-Count</literal>: Maximum object
                    count of the container.</para>
            </listitem>
        </itemizedlist>
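        <para>The check applied to an object &PUT; can be sketched as
            follows; <literal>check_put</literal> and its arguments are
            hypothetical names for illustration:</para>
        <programlisting language="python">def check_put(quota_bytes, quota_count, used_bytes, used_count, put_bytes):
    # a PUT is rejected with 403 if it would push the container past
    # either quota; a quota of None means that limit is not set
    if quota_bytes is not None and used_bytes + put_bytes > quota_bytes:
        return 403
    if quota_count is not None and used_count + 1 > quota_count:
        return 403
    return 201

check_put(10000, None, 9000, 5, 2000)  # 403: would exceed the byte quota</programlisting>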
<xi:include
href="../../common/tables/swift-proxy-server-filter-container-quotas.xml"
/>
</section>
<section xml:id="object-storage-account-quotas">
<title>Account quotas</title>
        <para>The account quotas middleware blocks write requests
            (PUT, POST) if a given account quota (in bytes) is
            exceeded, while DELETE requests are still
            allowed.</para>
<para>The <literal>x-account-meta-quota-bytes</literal>
metadata entry must be
set to store and enable the quota. Write requests to this
metadata entry are only permitted for resellers. There is
no account quota limitation on a reseller account even if
<literal>x-account-meta-quota-bytes</literal> is set.
</para>
<para>Any object PUT operations that exceed the quota return a
413 response (request entity too large) with a descriptive
body.</para>
        <para>The following command uses an admin account that owns the
            Reseller role to set a quota on the test account:</para>
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000</userinput></screen>
<para>Here is the stat listing of an account where quota has
been set:</para>
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat</userinput>
<computeroutput>Account: AUTH_test
Containers: 0
Objects: 0
Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
<para>This command removes the account quota:</para>
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin --os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:</userinput></screen>
</section>
<section xml:id="object-storage-bulk-delete">
<title>Bulk delete</title>
        <para>Use <code>bulk-delete</code> to delete multiple files
            from an account with a single request. The middleware
            responds to DELETE requests that include the header
            <literal>X-Bulk-Delete: true_value</literal>. The body of
            the DELETE request is a newline-separated list of files to
            delete. The files listed must be URL encoded and in the
            form:</para>
        <programlisting>/container_name/obj_name</programlisting>
<para>If all files are successfully deleted (or did not
exist), the operation returns <code>HTTPOk</code>. If any
files failed to delete, the operation returns
<code>HTTPBadGateway</code>. In both cases, the response body
is a JSON dictionary that shows the number of files that were
successfully deleted or not found. The files that failed are
listed.</para>
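        <para>A request body in the format above can be built like this
            (Python 3; the object names are made up for
            illustration):</para>
        <programlisting language="python">from urllib.parse import quote

names = ['container_name/obj_name', 'container_name/another obj']
# each line is a URL-encoded /container_name/obj_name entry
body = '\n'.join('/' + quote(name) for name in names)
# '/container_name/obj_name\n/container_name/another%20obj'</programlisting>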
<xi:include
href="../../common/tables/swift-proxy-server-filter-bulk.xml"
/>
</section>
<xi:include href="section_configure_s3.xml"/>
<section xml:id="object-storage-drive-audit">
<title>Drive audit</title>
<para>
The <option>swift-drive-audit</option> configuration items
reference a script that can be run by using
<command>cron</command> to watch for bad drives. If errors are
detected, it unmounts the bad drive so that OpenStack Object
Storage can work around it. It takes the following options:
</para>
<xi:include
href="../../common/tables/swift-drive-audit-drive-audit.xml"
/>
</section>
<section xml:id="object-storage-form-post">
<title>Form post</title>
<para>
Middleware that enables you to upload objects to a cluster by
using an HTML form &POST;.
</para>
<para>
The format of the form is:
</para>
        <programlisting>&lt;form action="&lt;swift-url&gt;" method="POST"
      enctype="multipart/form-data"&gt;
  &lt;input type="hidden" name="redirect" value="&lt;redirect-url&gt;" /&gt;
  &lt;input type="hidden" name="max_file_size" value="&lt;bytes&gt;" /&gt;
  &lt;input type="hidden" name="max_file_count" value="&lt;count&gt;" /&gt;
  &lt;input type="hidden" name="expires" value="&lt;unix-timestamp&gt;" /&gt;
  &lt;input type="hidden" name="signature" value="&lt;hmac&gt;" /&gt;
  &lt;input type="hidden" name="x_delete_at" value="&lt;unix-timestamp&gt;" /&gt;
  &lt;input type="hidden" name="x_delete_after" value="&lt;seconds&gt;" /&gt;
  &lt;input type="file" name="file1" /&gt;&lt;br /&gt;
  &lt;input type="submit" /&gt;
&lt;/form&gt;</programlisting>
<para>
In the form:
</para>
<itemizedlist>
<listitem>
<para>
<emphasis role="bold"><literal>action="&lt;swift-url&gt;"</literal></emphasis>
</para>
<para>
The URL to the Object Storage destination, such as
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>.
</para>
<para>
The name of each uploaded file is appended to the specified
          <literal>swift-url</literal>. So, you can upload directly to the root of
          the container with a URL like
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>.
</para>
<para>
Optionally, you can include an object prefix to
separate different users' uploads, such as
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>.
</para>
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>method="POST"</literal></emphasis></para>
<para>
The form <literal>method</literal> must be &POST;.
</para>
</listitem>
<listitem>
<para>
<emphasis
            role="bold"><literal>enctype="multipart/form-data"</literal></emphasis></para>
<para>
The <literal>enctype</literal> must be set to <literal>multipart/form-data</literal>.
</para>
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>name="redirect"</literal></emphasis></para>
<para>
The URL to which to redirect the browser after the upload
completes. The URL has status and message query parameters
added to it that indicate the HTTP status code for the upload
and, optionally, additional error information. The 2<emphasis
role="italic">nn</emphasis> status code indicates success. If
an error occurs, the URL might include error information, such
as <literal>"max_file_size exceeded"</literal>.
</para>
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>name="max_file_size"</literal></emphasis></para>
<para>
Required. The maximum number of bytes that can be uploaded in
a single file upload.
</para>
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>name="max_file_count"</literal></emphasis></para>
<para>
Required. The maximum number of files that can be uploaded
with the form.
</para>
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>name="expires"</literal></emphasis>
</para>
<para>
The expiration date and time for the form in <link
xlink:href="https://en.wikipedia.org/wiki/Unix_time">UNIX
Epoch time stamp format</link>. After this date and time, the
form is no longer valid.
</para>
<para>
For example, <code>1440619048</code> is equivalent to
          <code>Wed, 26 Aug 2015 19:57:28 GMT</code>.
</para>
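        <para>You can check such a timestamp with Python:</para>
        <programlisting language="python">from datetime import datetime, timezone

datetime.fromtimestamp(1440619048, timezone.utc).strftime(
    '%a, %d %b %Y %H:%M:%S GMT')  # 'Wed, 26 Aug 2015 19:57:28 GMT'</programlisting>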
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>name="signature"</literal></emphasis>
</para>
<para>
The HMAC-SHA1 signature of the form. This sample Python code
shows how to compute the signature:
</para>
        <programlisting language="python">import hmac
from hashlib import sha1
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://myserver.com/some-page'
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
# the signed message is the five form values joined by newlines
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count, expires)
signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()</programlisting>
<para>The key is the value of the
<literal>X-Account-Meta-Temp-URL-Key</literal> header
on the account.</para>
<para>
Use the full path from the <literal>/v1/</literal> value and
onward.
</para>
<para>
During testing, you can use the
<command>swift-form-signature</command> command-line tool to compute the
<literal>expires</literal> and <literal>signature</literal>
values.
</para>
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>name="x_delete_at"</literal></emphasis>
</para>
<para>
The date and time in <link
xlink:href="https://en.wikipedia.org/wiki/Unix_time">UNIX Epoch
time stamp format</link> when the object will be removed.
</para>
<para>
For example, <code>1440619048</code> is equivalent to
          <code>Wed, 26 Aug 2015 19:57:28 GMT</code>.
</para>
<para>
          This attribute enables you to specify the
          <literal>X-Delete-At</literal> header value in the form &POST;.
</para>
</listitem>
<listitem>
<para>
<emphasis
role="bold"><literal>name="x_delete_after"</literal></emphasis>
</para>
<para>
The number of seconds after which the object is removed.
Internally, the Object Storage system stores this value in the
<literal>X-Delete-At</literal> metadata item. This attribute
enables you to specify the <literal>X-Delete-After</literal>
header value in the form &POST;.
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold"><literal>type="file" name="filexx"</literal></emphasis>
</para>
<para>
          Optional. One or more files to upload. This attribute must
          appear after the other attributes to be processed correctly.
          Any attributes that come after the <literal>file</literal>
          attribute are ignored and are not sent with the subrequest,
          because parsing them on the server side would require reading
          the whole file into memory, and the server does not have
          enough memory to service such requests.
</para>
</listitem>
</itemizedlist>
<xi:include href="../../common/tables/swift-proxy-server-filter-formpost.xml"/>
</section>
<section xml:id="object-storage-static-web">
<title>Static web sites</title>
<para>When configured, this middleware serves container data
as a static web site with index file and error file
resolution and optional file listings. This mode is
normally only active for anonymous requests.</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-staticweb.xml"
/>
</section>
<xi:include href="section_object-storage-cors.xml"/>
<xi:include href="section_object-storage-listendpoints.xml"/>
</section>