As suggested, the variable KOLLA_SERVICE_NAME was created to expose
the container service name through the PS1 shell variable.
This approach was previously discussed on IRC:
https://goo.gl/k7AdEg
The other option was to use the hostname parameter in kolla_docker,
but Docker currently does not support it due to this issue:
https://github.com/docker/compose/issues/2460
The final result looks like this:
$ docker exec -it heka /bin/bash
(heka)[heka@kolla-control /]$
$ docker exec -it mariadb /bin/bash
(mariadb)[mysql@kolla-control /]$
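For illustration only, the prompt could be derived from the variable
roughly like this (the exact template used in the images may differ):
  PS1="($KOLLA_SERVICE_NAME)[\u@\h \W]\$ "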
More details are available at:
http://paste.openstack.org/show/493689/
Closes-Bug: #1557454
Change-Id: I6aab8d640a8ebb17baa9d6d4f1edd6e331674713
The comment was confusing and did not explain what the real issue is
when binding Erlang to an IPv4 address.
Change-Id: I819ea137fa37c0b2711efb1e7cb1e518ae26b9ab
Related-Bug: #1562701
Please refer to the Closes-Bug identifier for detailed information
pertaining to this issue.
Closes-Bug: #1562701
Change-Id: I77563930e14e11ea48e7edfef0bff80002279381
The Erlang parser cannot parse a hostname containing a minus sign; it returns:
Ignoring external configuration due to error: {1,erl_parse,"bad term"}
Adding single quotes around the hostname fixes this issue.
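For illustration only (the node name here is hypothetical, not the actual
template contents), an unquoted Erlang atom may contain letters, digits,
'_' and '@', but not '-':
  rabbit@kolla-control      % rejected: '-' is read as the minus operator
  'rabbit@kolla-control'    % accepted: a quoted atom may contain '-'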
Co-Authored-By: weiyu <weiyu@unitedstack.com>
Closes-Bug: #1540234
Change-Id: I80e0789aa31febd552a851e6dc3a835d89c0e9d1
When Horizon is used to launch 2000 VMs, nova-conductor is very
busy making database connections. All 55 database connections are
in use, resulting in an inability to garbage-collect database
connections. Instead, raise the max pool size to 50, which allows
50 concurrent database connections, and the max overflow to 1000,
which permits the database connections to finish the job at large
node-count scales.
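A sketch of the corresponding nova.conf settings, assuming the standard
oslo.db option names:
  [database]
  max_pool_size = 50
  max_overflow = 1000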
Closes-Bug: #1565105
Change-Id: I26dc2f7fda8760197888a1d61fbc45dfada2dd06
The Kolla design is for services to use the internalURL for
service to service communication. In Mitaka, Neutron added
a new config parameter specifying which URL to use to contact
Nova, making the default 'public'. This patch sets the value
to 'internal'.
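A minimal sketch of the change in neutron.conf, assuming the parameter
in question is endpoint_type in the [nova] section (option name assumed,
not stated in this message):
  [nova]
  endpoint_type = internal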
Change-Id: I2d36f3b4a860af9e9034ebfb2b5cea56450e5e4e
Closes-Bug: #1565624
At high scale, such as 64 nodes with 13TB ram and 2600 cores, nova
seems to struggle when scheduling 100+ VMs at the same time. The
issue is unrelated to the database, as the error printed indicates
the max_scheduling_attempts have been reached. Increase that value
to something more fitting for a 100-node cluster.
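As a sketch, the nova.conf knob is presumably scheduler_max_attempts
(option name assumed; the value below is illustrative only, not
necessarily the one this patch picks):
  [DEFAULT]
  scheduler_max_attempts = 10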
Change-Id: I8982d77c7c66db8f7c95b9fd73f58ceb66dbd723
Closes-Bug: #1563664
Previously the code looked at mariadb.pid, but this was flaky:
it appeared racy and prone to failure on slower connections to a
registry. The original task was extremely complex and did not
really verify that MariaDB was ready to serve connections. Use
wait_for with a regex instead.
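A minimal sketch of such a task, assuming a TCP check against the MariaDB
port that waits for the server banner (variable names are illustrative):
  - name: Wait for MariaDB to accept connections
    wait_for:
      host: "{{ mariadb_host }}"
      port: "{{ mariadb_port }}"
      search_regex: "MariaDB"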
Change-Id: I3aafac04f03639b08e0ef4d6a9c9e1a4499f000c
Closes-Bug: #1564278
This patch set makes "kolla-ansible prechecks" flag an error if
any password is empty in /etc/kolla/passwords.yml.
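A rough sketch of such a check (the actual precheck task and variable
names may differ):
  - name: Fail if any password in /etc/kolla/passwords.yml is empty
    fail:
      msg: "Empty password for {{ item.key }} in /etc/kolla/passwords.yml"
    when: item.value is none or item.value | string | length == 0
    with_dict: "{{ kolla_passwords }}"   # hypothetical variable holding the parsed file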
Change-Id: I87dee25b79c97be64ca49a5638c7f5a30d4cf464
Closes-Bug: #1563506
Added general_log to ansible/roles/mariadb/templates/galera.cnf.j2
to improve mariadb logging.
This will be helpful for debugging MariaDB issues, especially
when MariaDB is scaled.
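A sketch of what such an addition looks like in galera.cnf (the exact
values and log path may differ from the patch):
  [mysqld]
  general_log = ON
  general_log_file = /var/log/kolla/mariadb/general.log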
Test results of this patch set are at:
http://paste.openstack.org/show/492852/
Change-Id: I80438d1bbdd1ed2a1f47489c6f9c45b8107340a0
Closes-Bug: #1563668
Currently Heka writes the keepalived log to
/var/log/kolla/haproxy/keepalived.log.
This commit changes the path to /var/log/kolla/keepalived/keepalived.log.
Closes-Bug: #1565499
Change-Id: I3033097bd77ddbf72948697b34a6a499ea903083
Add a nova-ssh container to handle the `nova migrate` and
`nova resize` cases, in which nova uses ssh to copy
files between machines.
Change-Id: Ie6675943f3aeabfbba8589d308d55b9c89d732db
Closes-Bug: #1562141
For Kolla to deploy multiple clouds, we need to be able to configure
virtual_router_id; otherwise haproxy will fail to set up the VIP for
the second cloud.
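A sketch of how the knob might be exposed, assuming it becomes an Ansible
variable (the name and default shown are illustrative):
  # /etc/kolla/globals.yml
  keepalived_virtual_router_id: "51"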
Partially-Implements: blueprint multiple-cloud
Closes-Bug: #1564547
Change-Id: I9eb27dd6fba61205841eadafc96601e235d2fe6d
Currently the delegate_to does not take effect, so the neutron role
creation is attempted only once on the first server, where it is skipped.
Re-ordering the hosts in site.yml seems to make the first host one inside
the neutron-server group, yielding the expected results. This patch needs
to be revisited as soon as a version of Ansible is chosen that fixes the
issues with delegate_to.
Co-Authored-By: Steven Dake <stdake@cisco.com>
Co-Authored-By: Vikram Hosakote <vhosakot@cisco.com>
Co-Authored-By: Nate Potter <nathaniel.potter@intel.com>
Co-Authored-By: Ganesh Mahalingam <ganesh.mahalingam@intel.com>
Change-Id: Ia712b323aa9d750d470a11ee899ab1b3054a903f
Partial-Bug: #1546789
When a node uses two physical interfaces for its two VIPs, these
physical interfaces should be tied together, so both VIPs will
be taken out of scheduling if either one fails. Without this change,
if a request comes into one interface that needs access to the
second interface to process the request, the original request
unnecessarily fails. Repeating this results in a black hole where
a failing server keeps getting new requests.
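This message does not spell out the exact template change; one common
keepalived mechanism for tying VRRP instances together is a
vrrp_sync_group, sketched here with illustrative instance names:
  vrrp_sync_group kolla_vips {
      group {
          kolla_internal_vip
          kolla_external_vip
      }
  }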
Change-Id: Ic51e6584c1fbda3eb7821cb47f759c77e562cc65
Closes-Bug: #1550455
On an AIO installation we cannot assume that the public IP address
will be the first entry in the "getent ahostsv4" output, because
it may also be a localhost address. To make this check pass
on AIO, we should look for the public IP in the whole output.
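A sketch of the adjusted check in shell form (the variable and address
here are illustrative; the real precheck is an Ansible task):
  public_ip=192.0.2.10
  getent ahostsv4 "$(hostname)" | awk '{ print $1 }' | grep -qxF "$public_ip"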
Change-Id: I1da7b95d7f00c7f87ff68ead46bf55fdea812599
Closes-Bug: #1564564