The Nova EC2 API is disabled by default: the default value of the
enabled_apis parameter in nova.conf is "osapi_compute,metadata".
The EC2 API is marked as deprecated and will be removed from Nova in
the future.
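For reference, a minimal sketch of the relevant nova.conf setting (the
standard [DEFAULT] section; the rendered file path depends on the
deployment):

  [DEFAULT]
  # the EC2 API is left out, so only the compute and metadata APIs run
  enabled_apis = osapi_compute,metadata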
Change-Id: I6b9d66017e066cde5749be45b367194d2192ead3
Closes-bug: #1586605
It is impossible to drop root for the HAProxy container, but HAProxy
provides the possibility of using a chroot jail.
When attaching to the HAProxy container, we can see that the root
directory has been changed:
$ sudo docker exec -ti haproxy bash
(haproxy)[root@operator /]# ls -di /
259 /
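The jail itself comes from a chroot directive in the global section of
the generated haproxy.cfg; a minimal sketch (the jail directory shown
here is an assumption, not necessarily the path Kolla renders):

  global
      # confine the haproxy process to a chroot jail after startup
      chroot /var/lib/haproxy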
Co-Authored-By: Vikram Hosakote <vhosakot@cisco.com>
Closes-Bug: #1552289
Change-Id: I9d55e9b741b8560cac53dc8b837a24a3029a4dc0
Use HAProxy to terminate a TLS connection on port 5601 for the
Kibana dashboard when TLS is enabled for Kolla. x-forwarded-for
and x-forwarded-proto headers are set to give Kibana the info it
needs to write returned URLs.
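A rough sketch of the kind of directives involved, as an illustration
rather than the exact rendered template (section name, addresses and
certificate path are placeholders):

  listen kibana_external
      bind <external_vip>:5601 ssl crt /path/to/haproxy.pem
      # pass the client address and scheme on to Kibana
      option forwardfor
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      server <kibana_host> <kibana_host_ip>:5601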
Change-Id: I03a2dd3a8e2513d38281b30bf4bae6449fec0316
Closes-bug: #1566117
HAProxy 1.6+ does not allow the formatting that was used for the stats
listener. We need to adjust it to the correct syntax.
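For context, the syntax HAProxy 1.6 accepts puts the address on a bind
directive inside the listen block rather than on the listen line
itself; a sketch with placeholder address and port:

  listen stats
      bind <vip>:<stats_port>
      mode http
      stats enable
      stats uri /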
TrivialFix
Change-Id: I5f0111c756d40a0cf7385e6963ebbb57adb36b35
Kibana is a tool for operators. It should not be accessible through
the external VIP.
Closes-Bug: #1554977
Change-Id: I1dc101de18e4e01ebde9d317ab7e3193e307a14e
When configured with a separate external VIP, glance registry
should listen on only the internal VIP.
TrivialFix
Change-Id: Ie186f2ea391b53b9ea0cb230c573c9e09efd44b2
This patch includes changes related to integrating Heka with
Elasticsearch and Kibana.
The main change is the addition of a Heka ElasticSearchOutput plugin
to make Heka send the logs it collects to Elasticsearch.
Since Logstash is not used, the enable_elk deploy variable is renamed
to enable_central_logging.
If enable_central_logging is false then Elasticsearch and Kibana are
not started, and Heka won't attempt to send logs to Elasticsearch.
By default enable_central_logging is set to false. If
enable_central_logging is set to true after deployment then the Heka
container needs to be recreated (for Heka to get the new
configuration).
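A sketch of turning the feature on in globals.yml (the quoted yes/no
style follows the usual Kolla convention; treat the exact value format
as an assumption):

  enable_central_logging: "yes"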
The Kibana configuration used property names that are deprecated in
Kibana 4.2. This is changed to use non-deprecated property names.
Previously, logs read from files and from Syslog had a different Type
in Heka. This is changed to always use "log" for the Type. In this
way, just one index instead of two is used in Elasticsearch, making
things easier for the user on the visualization side.
The HAProxy configuration is changed to add entries for Kibana.
The Kibana server is now accessible via the internal VIP, and also via
the external VIP if one is configured.
The HAProxy configuration is changed to add an entry for
Elasticsearch. So Elasticsearch is now accessible via the internal
VIP. Heka uses that channel for communicating with Elasticsearch.
Note that currently the Heka logs include "Plugin
elasticsearch_output" errors when Heka starts. This occurs when Heka
starts processing logs while Elasticsearch is not yet started. These
are transient errors that go away once Elasticsearch is ready. With
buffering enabled on the ElasticSearchOutput plugin, logs are buffered
and then retransmitted when Elasticsearch becomes ready.
Change-Id: I6ff7a4f0ad04c4c666e174693a35ff49914280bb
Implements: blueprint central-logging-service
TLS can be used to encrypt and authenticate the connection with
OpenStack endpoints. This patch provides the necessary
parameters and changes the resulting service configurations to
enable TLS for the Kolla deployed OpenStack cloud.
The new input parameters are:
kolla_enable_tls_external: "yes" or "no" (default is "no")
kolla_external_fqdn_cert: "/etc/kolla/certificates/haproxy.pem"
kolla_external_fqdn_cacert: "/etc/kolla/certificates/haproxy-ca.crt"
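Enabling TLS for the external VIP in globals.yml then looks roughly
like this (the certificate paths are the defaults listed above):

  kolla_enable_tls_external: "yes"
  kolla_external_fqdn_cert: "/etc/kolla/certificates/haproxy.pem"
  kolla_external_fqdn_cacert: "/etc/kolla/certificates/haproxy-ca.crt"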
Implements: blueprint kolla-ssl
Change-Id: I48ef8a781c3035d58817f9bf6f36d59a488bab41
Due to poor planning of our variable names, we have a situation where
"internal_address" must be a VIP, but "external_address" should be a
DNS name. Now that there are two VIPs, "external_vip_address" is a new
variable.
This corrects that issue by deprecating kolla_internal_address and
replacing it with 4 nicely named variables:
kolla_internal_vip_address
kolla_internal_fqdn
kolla_external_vip_address
kolla_external_fqdn
The default behaviour will remain the same, and the way the variable
inheritance is set up, the kolla_internal_address variable can still be
set in globals.yml and propagate out to these 4 new variables like it
normally would, but all references to kolla_internal_address have been
completely removed.
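For illustration, a globals.yml sketch using the new names (addresses
and domain names are placeholders, not defaults):

  kolla_internal_vip_address: "10.10.10.254"
  kolla_internal_fqdn: "internal.example.com"
  kolla_external_vip_address: "192.0.2.254"
  kolla_external_fqdn: "cloud.example.com"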
Change-Id: I4556dcdbf4d91a8d2751981ef9c64bad44a719e5
Partially-Implements: blueprint ssl-kolla
HAProxy: change to use option forwardfor to pass the origin IP address
to the backend via the X-Forwarded-For header.
Keystone: Apache does the audit logs for keystone. Change the
LogFormat to display the passed address instead of the connection
address, which is that of the load balancer.
Nova, Cinder, Glance: these services can make use of the address
passed in X-Forwarded-For. With this setting, the API logs for
these services include the client IP address.
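As an illustration of the two pieces (not the exact templates): on the
HAProxy side the stock option forwardfor directive adds the header,
and on the Keystone side Apache can log the forwarded address with the
%{X-Forwarded-For}i format specifier; the LogFormat nickname below is
a placeholder:

  # haproxy.cfg, per frontend/listen section
  option forwardfor

  # Apache LogFormat using the address passed by the load balancer
  LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxy_combined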
Change-Id: Ia861ecc11a7c7d463d0366586926d1a842853f69
Closes-Bug: #1548935
To improve security, operators have asked for two VIPs for
their cloud.
VIP 1 is the internal VIP that can reach internal and admin endpoints.
In addition, the internal VIP can also reach other internal services,
such as the database and message services.
VIP 2 is the external VIP that can only reach public endpoints.
With one VIP only, all services are reached at the same address.
To add a second VIP, this patch adds two new configuration parameters.
kolla_external_vip_address: the IPv4 address to use for the created VIP
kolla_external_vip_interface: the network interface to use for the VIP
In this scenario, the first VIP (the internal VIP) is defined by the
original parameters (kolla_internal_address and network_interface).
When using two VIPs, the existing kolla_external_address parameter
should point to, or resolve to, the kolla_external_vip_address.
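A globals.yml sketch of the two-VIP scenario (addresses, interfaces
and the FQDN are placeholders):

  # internal VIP, defined by the original parameters
  kolla_internal_address: "10.10.10.254"
  network_interface: "eth0"
  # external VIP, defined by the new parameters
  kolla_external_vip_address: "192.0.2.254"
  kolla_external_vip_interface: "eth1"
  # should point to / resolve to the external VIP
  kolla_external_address: "cloud.example.com"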
Closes-bug: 1535333
Change-Id: I5bfcefaf7899298455cdade8209c34324aebfecb
In a heterogeneous environment, the api_interface can differ from host
to host, so we should look it up per host via hostvars.
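In the templates this translates into a per-host lookup via hostvars
rather than a single group-wide value, roughly of this shape (a sketch
of the Ansible/Jinja2 idiom, not the exact template line):

  {{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}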
Implements: bp configure-network-interface
Change-Id: Id15d70bfb9ebb62a64a3847a6b77407efb171dbe
Due to bad rebases, a large section of the spice patch is
unfortunately missing from the implementation. This patch finishes the
rest of the work properly.
Change-Id: I693c6745e9594fd91eb6453f6de9dfcbd410e89c
Partially-Implements: blueprint nova-proxies
The bootstrap must occur on the nova-api node due to binding in the
nova-api directory (the same goes for all other services).
Closes-Bug: #1513439
Backport: Liberty
Change-Id: Iab88b49712828085e4d7e7f85e6d8f0b7999a9bf
Adjust all the configs to list all the rabbitmq hosts rather than
running rabbitmq through the VIP. This is made possible by clusterer,
which has already merged.
Change-Id: I5db48f5f10ec68f4c8863a29bc13984f6845a4f9
Partially-Implements: blueprint rabbitmq-clusterer
Configuration is based on the upstream documentation here:
http://docs.openstack.org/developer/ironic/deploy/install-guide.html
A few notes:
- ironic-api is not configured to use mod_wsgi
- in several places it's noted that discoverd is going away and needs
  to be replaced with ironic-inspector (the sqlite connection should
  be changed too)
- currently, enabling ironic reconfigures nova compute (driver and
  scheduler) as well as changing neutron network settings
- a nice enhancement would be to configure the web console
Required post-deployment configuration:
Create the flat network to launch the instances:
  neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
    --provider:network_type flat --provider:physical_network physnet1
  neutron subnet-create sharednet1 $NETWORK_CIDR --name $SUBNET_NAME \
    --ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \
    start=$START_IP,end=$END_IP --enable-dhcp
The ID of the created network is then used to set cleaning_network_uuid
in the [neutron] section of ironic.conf.
Change-Id: I572e7ff1f23c4e57a2c50817cafe9269fd9950dd
Implements: blueprint ironic-container
HAProxy is currently set up to listen for all services, even ones that
aren't being installed (e.g. cinder or swift). This patch places
conditionals around those groups.
Change-Id: Ia1ff873ce075768dfebf442aabf13604076ce637
Closes-Bug: #1500157
This changes bootstrapping of the Heat container to use a heat domain
user. This requires some work in bootstrap.yml to pass in several
environment variables needed by the heat domain setup script.
Co-Authored-By: Sam Yaple <sam@yaple.net>
Change-Id: Iab05983754fa514835cb5ff54d775faa18773110
Partially-implements: blueprint ansible-heat
Clean up all options in galera.cnf. Bind to all interfaces and ports
appropriately.
Change-Id: I516613d09673ba61aadda2c7bbb4abbbe4ea47ac
Partially-Implements: blueprint update-configs
Closes-Bug: #1478330
Clean up all options in the minimal nova.conf. Remove options where
the default value was specified explicitly. Update ports and bindings
to be configurable.
Partially-Implements: blueprint update-configs
Change-Id: I0bca7a8f9c4c6fa40145d66a95de7e98edc0edce
Clean up all options in the rabbitmq confs. Allow all ports to be
configurable.
Change-Id: I9b3b485a4f3a25d20c0f19d13638f717daa169dc
Partially-Implements: blueprint update-configs
Also included is removing the executable bit on haproxy.cfg.j2 as it
should not have those permissions in the repo. It has no effect on the
templating process.
Change-Id: I9c76e528896bdf1799b8eeb62ae77bc4ad0b4449
Closes-Bug: #1482832
This patch checks that haproxy is alive and running. It does this by
using socat to talk to the haproxy socket. That socket will only
respond successfully when haproxy is active and functional.
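A sketch of the kind of check involved (the socket path is an
assumption; it must match the stats socket directive in haproxy.cfg):

  # haproxy.cfg must expose an admin socket, e.g.:
  #   stats socket /var/lib/kolla/haproxy/haproxy.sock
  $ echo "show info" | socat unix-connect:/var/lib/kolla/haproxy/haproxy.sock stdio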
Change-Id: I528588d5742071103c28109a69842a6f935232c2
Closes-Bug: #1478570
The original purpose for having an abstraction like 'database' rather
than the service name 'mariadb' has changed. Our direction is
different, and this patch reflects consistent naming throughout.
Change-Id: I704896191cc5243f9dab2a4cca9120e9dc2ceb2c
Closes-Bug: #1478328
This commit consists of the HAProxy Ansible bits, including config
generation, container deployment and hot reloads.
Closes-Bug: #1477915
Co-Authored-By: Sam Yaple <sam@yaple.net>
Change-Id: Ie93fa68fdb6b2885889c992ff1267d38b68e0cbc
Partially-implements: blueprint ansible-service