robots.txt: keep web crawlers out of /trunk
We don't want users to accidentally refer to documentation for the upcoming release. Search engines should not lead them to work in progress. With this patch, robots.txt instructs web search engines to ignore docs.openstack.org/trunk/.

Change-Id: I9484d7d527d025c5ff3631bccb9321df583920b3
commit 3558d2961d
parent b38b6e6ae5
openstack-manuals/www/robots.txt

@@ -1,2 +1,7 @@
 User-agent: Googlebot
 Disallow: /*.pdf$
+
+# Keep OpenStack users from accidentally using docs for the upcoming release
+User-agent: *
+Disallow: /trunk/
+
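As a quick sanity check (not part of the patch), the new catch-all group can be exercised with Python's standard-library urllib.robotparser. The bot name and URL paths below are made up for illustration, and the pre-existing Googlebot wildcard rule is left out because the standard-library parser only does plain prefix matching on Disallow paths.

# Minimal sketch: feed the new "User-agent: *" group to the standard-library
# robots.txt parser. The bot name and URLs are illustrative only.
import urllib.robotparser

rules = """\
# Keep OpenStack users from accidentally using docs for the upcoming release
User-agent: *
Disallow: /trunk/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Work-in-progress docs under /trunk/ are off limits to any crawler
# that obeys the catch-all group.
print(parser.can_fetch("ExampleBot", "http://docs.openstack.org/trunk/"))
# -> False

# Pages outside /trunk/ stay crawlable (path chosen for illustration).
print(parser.can_fetch("ExampleBot", "http://docs.openstack.org/index.html"))
# -> True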