Merge "Improve dev_mode=false when building the image"

Commit: 77dab835e6
@@ -404,7 +404,7 @@ function create_mgmt_subnet_v4 {
     local name=$3
     local ip_range=$4

-    subnet_id=$(openstack subnet create --project ${project_id} --ip-version 4 --subnet-range ${ip_range} --gateway none --network ${net_id} $name -c id -f value)
+    subnet_id=$(openstack subnet create --project ${project_id} --ip-version 4 --subnet-range ${ip_range} --gateway none --dns-nameserver 8.8.8.8 --network ${net_id} $name -c id -f value)
     die_if_not_set $LINENO subnet_id "Failed to create private IPv4 subnet for network: ${net_id}, project: ${project_id}"
     echo $subnet_id
 }
@@ -55,9 +55,9 @@ Operating System and Database
 A Trove Guest Instance contains at least a functioning Operating
 System and the database software that the instance wishes to provide
 (as a Service). For example, if your chosen operating system is Ubuntu
-and you wish to deliver MySQL version 5.5, then your guest instance is
+and you wish to deliver MySQL version 5.7, then your guest instance is
 a Nova instance running the Ubuntu operating system and will have
-MySQL version 5.5 installed on it.
+MySQL version 5.7 installed on it.

 -----------------
 Trove Guest Agent
@@ -77,7 +77,7 @@ Guest Agent API is the common API used by Trove to communicate with
 any guest database, and the Guest Agent is the implementation of that
 API for the specific database.

-The Trove Guest Agent runs on the Trove Guest Instance.
+The Trove Guest Agent runs inside the Trove Guest Instance.

 ------------------------------------------
 Injected Configuration for the Guest Agent
@@ -87,11 +87,11 @@ When TaskManager launches the guest VM it injects the specific settings
 for the guest into the VM, into the file /etc/trove/conf.d/guest_info.conf.
 The file is injected one of three ways.

-If use_nova_server_config_drive=True, it is injected via ConfigDrive. Otherwise
-it is passed to the nova create call as the 'files' parameter and will be
-injected based on the configuration of Nova; the Nova default is to discard the
-files. If the settings in guest_info.conf are not present on the guest
-Guest Agent will fail to start up.
+If ``use_nova_server_config_drive=True``, it is injected via ConfigDrive.
+Otherwise it is passed to the nova create call as the 'files' parameter and
+will be injected based on the configuration of Nova; the Nova default is to
+discard the files. If the settings in guest_info.conf are not present on the
+guest, the Guest Agent will fail to start up.

 ------------------------------
 Persistent Storage, Networking
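As a standalone illustration of the injection described above, the snippet below stages the kind of per-guest settings TaskManager writes into ``/etc/trove/conf.d/guest_info.conf``. The field values (and the UUID) are placeholders, not values taken from this change:

```shell
# Sketch only: compose a guest_info.conf of the shape TaskManager injects.
# The guest_id value below is a made-up placeholder.
cat > /tmp/guest_info.conf <<'EOF'
[DEFAULT]
guest_id = 0b28e708-1111-2222-3333-444455556666
datastore_manager = mysql
EOF
grep -q "^guest_id" /tmp/guest_info.conf && echo "guest_info.conf staged"
```

Whichever injection path is used (ConfigDrive or Nova's 'files' parameter), the Guest Agent reads this file from ``/etc/trove/conf.d`` at startup.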
@@ -99,9 +99,13 @@ Persistent Storage, Networking

 The database stores data on persistent storage on Cinder (if
 configured, see trove.conf and the volume_support parameter) or
-ephemeral storage on the Nova instance. The database is accessible
-over the network and the Guest Instance is configured for network
-access by client applications.
+ephemeral storage on the Nova instance. The database service is accessible
+over the tenant network provided when creating the database instance.
+
+The cloud administrator is able to configure a management
+network (``CONF.management_networks``) that is invisible to the cloud tenants;
+the database instance can talk to the control plane services (e.g. the message
+queue) via that network.

 Building Guest Images using DIB
 ===============================
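For reference, the ``management_networks`` option mentioned above is set in trove.conf; a minimal sketch, with a placeholder network UUID:

```ini
[DEFAULT]
# Placeholder UUID; use the ID of the admin-created management network.
management_networks = 5a437fa6-af27-4a23-8a63-12d911565a58
```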
@@ -1,74 +0,0 @@
-.. _guest_cloud_init:
-
-.. role:: bash(code)
-   :language: bash
-
-===========================
-Guest Images via Cloud-Init
-===========================
-
-.. If section numbers are desired, unindent this
-.. sectnum::
-
-.. If a TOC is desired, unindent this
-.. contents::
-
-Overview
-========
-
-While creating an image is the preferred method for providing a base
-for the Guest Instance, there may be cases where creating an image
-is impractical. In those cases a Guest instance can be based on
-an available Cloud Image and configured at boot via cloud-init.
-
-Currently the most tested Guest image is Ubuntu 14.04 (trusty).
-
-Setting up the Image
-====================
-
-* Visit the `Ubuntu Cloud Archive <https://cloud-images.ubuntu.com/releases/trusty/release>`_ and download ``ubuntu-14.04-server-cloudimg-amd64-disk1.img``.
-
-* Upload that image to glance, and note the glance ID for the image.
-
-* Cloud-Init files go into the directory set by the ``cloudinit_location``
-  configuration parameter, usually ``/etc/trove/cloudinit``. Files in
-  that directory are of the format ``[datastore].cloudinit``, for
-  example ``mysql.cloudinit``.
-
-* Create a cloud-init file for your datastore and put it into place.
-  For this example, it is assumed you are using Ubuntu 16.04, with
-  the MySQL database and a Trove Agent from the Pike release. You
-  would put this into ``/etc/trove/cloudinit/mysql.cloudinit``.
-
-  .. code-block:: console
-
-     #cloud-config
-     # For Ubuntu-16.04 cloudimage
-     apt_sources:
-       - source: "cloud-archive:pike"
-     packages:
-       - trove-guestagent
-       - mysql-server-5.7
-     write_files:
-       - path: /etc/sudoers.d/trove
-         content: |
-           Defaults:trove !requiretty
-           trove ALL=(ALL) NOPASSWD:ALL
-     runcmd:
-       - stop trove-guestagent
-       - cat /etc/trove/trove-guestagent.conf /etc/trove/conf.d/guest_info.conf >/etc/trove/trove.conf
-       - start trove-guestagent
-
-* If you need to debug guests failing to launch simply append
-  the cloud-init to add a user to allow you to login and
-  debug the instance.
-
-* When using ``trove-manage datastore_version_update`` to
-  define your datastore simply use the Glance ID you have for
-  the Trusty Cloud image.
-
-When trove launches the Guest Instance, the cloud-init will install
-the Pike Trove Guest Agent and MySQL database, and then adjust
-the configuration files and launch the Guest Agent.
@@ -8,6 +8,5 @@
    basics
    building_guest_images
    database_module_usage
-   guest_cloud_init
    secure_oslo_messaging
    trovestack
@@ -34,23 +34,21 @@ The trove guest agent image could be created by running the following command:

 .. code-block:: console

-   $ CONTROLLER_IP=10.0.17.132 \
-     ./trovestack build-image \
+   $ ./trovestack build-image \
        ${datastore_type} \
        ${guest_os} \
        ${guest_os_release} \
        ${dev_mode}

 * Currently, only ``guest_os=ubuntu`` and ``guest_os_release=xenial`` are fully
-  tested.
+  tested and supported.

-* ``dev_mode=true`` is mainly for testing purpose for trove developers. When
-  ``dev_mode=true``, ``CONTROLLER_IP`` could be ignored. You need to build the
-  image on the trove controller service host, because the host and the guest VM
-  need to ssh into each other without password. In this mode, when the trove
-  guest agent code is changed, the image doesn't need to be rebuilt which is
-  convenient for debugging. Trove guest agent will ssh into the host and
-  download trove code when the service is initialized.
+* ``dev_mode=true`` is mainly for testing purposes for trove developers, and it's
+  necessary to build the image on the trove controller host, because the host
+  and the guest VM need to ssh into each other without a password. In this mode,
+  when the trove guest agent code is changed, the image doesn't need to be
+  rebuilt, which is convenient for debugging. The Trove guest agent will ssh into
+  the host and download trove code during the service initialization.

 * if ``dev_mode=false``, the trove code for guest agent is injected into the
   image at the building time. Now ``dev_mode=false`` is still in experimental
@@ -62,7 +60,8 @@ The trove guest agent image could be created by running the following command:
 also need to create a Nova keypair and set ``nova_keypair`` option in Trove
 config file in order to ssh into the guest agent.

-For example, build a MySQL image for Ubuntu Xenial operating system:
+For example, in order to build a MySQL image for Ubuntu Xenial operating
+system:

 .. code-block:: console

@@ -1,6 +1,4 @@
 Element to install an Trove guest agent.

-Note: this requires a system base image modified to include OpenStack
+Note: this requires a system base image modified to include Trove source code
 repositories
-
-the ubuntu-guest element could be removed.
@@ -1,28 +0,0 @@
-#!/bin/bash
-
-set -e
-set -o xtrace
-
-# CONTEXT: HOST prior to IMAGE BUILD as SCRIPT USER
-# PURPOSE: creates the SSH key on the host if it doesn't exist. Then this copies the keys over to a staging area where
-#          they will be duplicated in the guest VM.
-# This process allows the host to log into the guest but more importantly the guest phones home to get the trove
-# source
-
-source $_LIB/die
-
-HOST_USERNAME=${HOST_USERNAME:-"ubuntu"}
-SSH_DIR=${SSH_DIR:-"/home/${HOST_USERNAME}/.ssh"}
-
-[ -n "${TMP_HOOKS_PATH}" ] || die "Temp hook path not set"
-
-# copy files over the "staging" area for the guest image (they'll later be put in the correct location by the guest user)
-# note these keys should not be overridden otherwise a) you won't be able to ssh in and b) the guest won't be able to
-# rsync the files
-if [ -f ${SSH_DIR}/id_rsa ]; then
-    dd if=${SSH_DIR}/authorized_keys of=${TMP_HOOKS_PATH}/ssh-authorized-keys
-    dd if=${SSH_DIR}/id_rsa of=${TMP_HOOKS_PATH}/ssh-id_rsa
-    dd if=${SSH_DIR}/id_rsa.pub of=${TMP_HOOKS_PATH}/ssh-id_rsa.pub
-else
-    die "SSH Authorized Keys file must exist along with pub and private key"
-fi
integration/scripts/files/elements/guest-agent/install.d/50-user (new executable file, 22 lines)
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+# PURPOSE: Add the guest image user that will own the trove agent source if the
+# user does not already exist
+
+if [ ${DIB_DEBUG_TRACE:-1} -gt 0 ]; then
+    set -x
+fi
+set -e
+set -o pipefail
+
+GUEST_USERNAME=${GUEST_USERNAME:-"ubuntu"}
+
+if ! id -u ${GUEST_USERNAME} >/dev/null 2>&1; then
+    echo "Adding ${GUEST_USERNAME} user"
+    useradd -G sudo -m ${GUEST_USERNAME} -s /bin/bash
+    chown ${GUEST_USERNAME}:${GUEST_USERNAME} /home/${GUEST_USERNAME}
+    passwd ${GUEST_USERNAME} <<_EOF_
+${GUEST_USERNAME}
+${GUEST_USERNAME}
+_EOF_
+fi
@@ -8,28 +8,35 @@ set -o pipefail

 SCRIPTDIR=$(dirname $0)
 GUEST_VENV=/opt/guest-agent-venv
+GUEST_USERNAME=${GUEST_USERNAME:-"ubuntu"}

-# Create a virtual environment to contain the guest agent
-${DIB_PYTHON} -m virtualenv $GUEST_VENV
-$GUEST_VENV/bin/pip install pip --upgrade
-$GUEST_VENV/bin/pip install -U -c /opt/upper-constraints.txt /opt/guest-agent
+# Create a virtual environment for guest agent
+${DIB_PYTHON} -m virtualenv ${GUEST_VENV}
+${GUEST_VENV}/bin/pip install pip --upgrade
+${GUEST_VENV}/bin/pip install -U -c /opt/upper-constraints.txt /opt/guest-agent
+chown -R ${GUEST_USERNAME}:root ${GUEST_VENV}

-# Link the trove-guestagent out to /usr/local/bin where the startup scripts look
-ln -s $GUEST_VENV/bin/trove-guestagent /usr/local/bin/guest-agent || true
+# Link the trove-guestagent out to /usr/local/bin where the startup scripts look for
+ln -s ${GUEST_VENV}/bin/trove-guestagent /usr/local/bin/guest-agent || true

-mkdir -p /var/lib/trove /etc/trove/certs /var/log/trove
+for folder in "/var/lib/trove" "/etc/trove" "/etc/trove/certs" "/etc/trove/conf.d" "/var/log/trove"; do
+    mkdir -p ${folder}
+    chown -R ${GUEST_USERNAME}:root ${folder}
+done

-install -D -g root -o root -m 0644 ${SCRIPTDIR}/guest-agent.logrotate /etc/logrotate.d/guest-agent
+install -D -g root -o ${GUEST_USERNAME} -m 0644 ${SCRIPTDIR}/guest-agent.logrotate /etc/logrotate.d/guest-agent

 case "$DIB_INIT_SYSTEM" in
-    upstart)
-        install -D -g root -o root -m 0644 ${SCRIPTDIR}/guest-agent.conf /etc/init/guest-agent.conf
-        ;;
     systemd)
-        install -D -g root -o root -m 0644 ${SCRIPTDIR}/guest-agent.service /usr/lib/systemd/system/guest-agent.service
+        mkdir -p /usr/lib/systemd/system
+        touch /usr/lib/systemd/system/guest-agent.service
+        sed "s/GUEST_USERNAME/${GUEST_USERNAME}/g" ${SCRIPTDIR}/guest-agent.service > /usr/lib/systemd/system/guest-agent.service
+        ;;
+    upstart)
+        install -D -g root -o ${GUEST_USERNAME} -m 0644 ${SCRIPTDIR}/guest-agent.conf /etc/init/guest-agent.conf
         ;;
     sysv)
-        install -D -g root -o root -m 0644 ${SCRIPTDIR}/guest-agent.init /etc/init.d/guest-agent.init
+        install -D -g root -o ${GUEST_USERNAME} -m 0644 ${SCRIPTDIR}/guest-agent.init /etc/init.d/guest-agent.init
         ;;
     *)
         echo "Unsupported init system"
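The sed-based templating added above can be exercised standalone; the unit text here is a trimmed stand-in for guest-agent.service, not the real file:

```shell
# Sketch: substitute the GUEST_USERNAME placeholder the way the install
# script does, against a trimmed stand-in for guest-agent.service.
GUEST_USERNAME=${GUEST_USERNAME:-"ubuntu"}
cat > /tmp/guest-agent.service.in <<'EOF'
[Service]
User=GUEST_USERNAME
Group=GUEST_USERNAME
EOF
sed "s/GUEST_USERNAME/${GUEST_USERNAME}/g" /tmp/guest-agent.service.in \
    > /tmp/guest-agent.service
grep "^User=" /tmp/guest-agent.service
```

After substitution no literal GUEST_USERNAME placeholder remains in the generated unit file.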
@@ -4,11 +4,12 @@ After=network.target syslog.service
 Wants=syslog.service

 [Service]
+User=GUEST_USERNAME
+Group=GUEST_USERNAME
+ExecStartPre=/bin/bash -c "sudo chown -R GUEST_USERNAME:root /etc/trove/conf.d"
 ExecStart=/usr/local/bin/guest-agent --config-dir=/etc/trove/conf.d
 KillMode=mixed
 Restart=always
-ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/guest-agent.pid"
-PIDFile=/var/run/guest-agent.pid

 [Install]
 WantedBy=multi-user.target
@@ -9,15 +9,6 @@ libssl-dev:
 python-dev:
   installtype: source

-python-sqlalchemy:
-python-lxml:
-python-eventlet:
-python-webob:
-python-httplib2:
-python-iso8601:
-python-pexpect:
-python-mysqldb:
-python-migrate:
 acl:
 acpid:
 apparmor:
@@ -46,13 +37,14 @@ lsof:
 net-tools:
 netbase:
 netcat-openbsd:
+network-scripts:
 open-vm-tools:
+  arch: i386, amd64
 openssh-client:
 openssh-server:
 pollinate:
 psmisc:
 rsyslog:
-screen:
 socat:
 tcpdump:
 ubuntu-cloudimage-keyring:
@@ -10,7 +10,7 @@
   "cloud-guest-utils": "",
   "apparmor": "",
   "dmeventd": "",
-  "isc-dhcp-client": "",
+  "isc-dhcp-client": "dhcp-client",
   "uuid-runtime": "",
   "ubuntu-cloudimage-keyring": "",
   "vim-tiny": "",
@@ -1,39 +0,0 @@
-#!/bin/bash
-
-# CONTEXT: GUEST during CONSTRUCTION as ROOT
-# PURPOSE: take "staged" ssh keys (see extra-data.d/62-ssh-key) and put them in the GUEST_USER's home directory
-
-set -e
-set -o xtrace
-
-SSH_DIR="/home/${GUEST_USERNAME}/.ssh"
-TMP_HOOKS_DIR="/tmp/in_target.d"
-
-if ! id -u ${GUEST_USERNAME} >/dev/null 2>&1; then
-    echo "Adding ${GUEST_USERNAME} user"
-    useradd -G sudo -m ${GUEST_USERNAME} -s /bin/bash
-    chown ${GUEST_USERNAME}:${GUEST_USERNAME} /home/${GUEST_USERNAME}
-    passwd ${GUEST_USERNAME} <<_EOF_
-${GUEST_USERNAME}
-${GUEST_USERNAME}
-_EOF_
-fi
-
-if [ -f "${TMP_HOOKS_DIR}/ssh-authorized-keys" ]; then
-    if [ ! -d ${SSH_DIR} ]; then
-        # this method worked more reliably in vmware fusion than doing sudo -Hiu ${GUEST_USERNAME}
-        mkdir ${SSH_DIR}
-        chown ${GUEST_USERNAME}:${GUEST_USERNAME} ${SSH_DIR}
-    fi
-
-    sudo -Hiu ${GUEST_USERNAME} dd of=${SSH_DIR}/authorized_keys conv=notrunc if=${TMP_HOOKS_DIR}/ssh-authorized-keys
-    if [ ! -f "${SSH_DIR}/id_rsa" ]; then
-        sudo -Hiu ${GUEST_USERNAME} dd of=${SSH_DIR}/id_rsa if=${TMP_HOOKS_DIR}/ssh-id_rsa
-        # perms have to be right on this file for ssh to work
-        sudo -Hiu ${GUEST_USERNAME} chmod 600 ${SSH_DIR}/id_rsa
-        sudo -Hiu ${GUEST_USERNAME} dd of=${SSH_DIR}/id_rsa.pub if=${TMP_HOOKS_DIR}/ssh-id_rsa.pub
-    fi
-else
-    echo "SSH Keys were not staged by host"
-    exit -1
-fi
@@ -7,5 +7,3 @@ set -e
 set -o xtrace

 apt-get clean
-
-
@@ -1,11 +1,8 @@
 This element clears out /etc/resolv.conf and prevents dhclient from populating
 it with data from DHCP. This means that DNS resolution will not work from the
-amphora. This is OK because all outbound connections from the guest will
+guest. This is OK because all outbound connections from the guest will
 be based using raw IP addresses.

 In addition we remove dns from the nsswitch.conf hosts setting.

-This has the real benefit of speeding up host boot and configutation times.
-This is especially helpful when running tempest tests in a devstack environment
-where DNS resolution from the guest usually doesn't work anyway. This means
-that the guest never waits for DNS timeouts to occur.
+This means that the guest never waits for DNS timeouts to occur.
@@ -5,6 +5,7 @@ set -e
 #CONTEXT: chroot on host
 #PURPOSE: Allows mysqld to create temporary files when restoring backups

+mkdir -p /etc/apparmor.d/local/
 cat <<EOF >>/etc/apparmor.d/local/usr.sbin.mysqld
 /tmp/ rw,
 /tmp/** rwk,
@@ -21,6 +21,8 @@ function build_vm() {
     GUEST_CACHEDIR=${GUEST_CACHEDIR:-"$HOME/.cache/image-create"}
     GUEST_WORKING_DIR=${GUEST_WORKING_DIR:-"$HOME/images"}

+    export GUEST_USERNAME=${guest_username}
+
     # In dev mode, the trove guest agent needs to download trove code from
     # trove-taskmanager host during service initialization.
     if [[ "${dev_mode}" == "true" ]]; then
@@ -34,7 +36,6 @@ function build_vm() {
         export HOST_SCP_USERNAME=$(whoami)
         export HOST_USERNAME=${HOST_SCP_USERNAME}
         export SSH_DIR=${SSH_DIR:-"$HOME/.ssh"}
-        export GUEST_USERNAME=${guest_username}
         manage_ssh_keys
     fi

@@ -68,7 +69,6 @@ function build_vm() {
     if [[ "${dev_mode}" == "false" ]]; then
         elementes="$elementes pip-and-virtualenv"
         elementes="$elementes pip-cache"
-        elementes="$elementes no-resolvconf"
         elementes="$elementes guest-agent"
     else
         elementes="$elementes ${guest_os}-guest"
@@ -101,7 +101,7 @@ function build_guest_image() {
     dev_mode=${4:-"true"}
     guest_username=${5:-"ubuntu"}

-    exclaim "Building a ${datastore_type} image of trove guest agent for ${guest_os} ${guest_release}."
+    exclaim "Building a ${datastore_type} image of trove guest agent for ${guest_os} ${guest_release}, dev_mode=${dev_mode}"

     VALID_SERVICES='mysql percona mariadb redis cassandra couchbase mongodb postgresql couchdb vertica db2 pxc'
     if ! [[ " $VALID_SERVICES " =~ " $datastore_type " ]]; then
@@ -109,7 +109,7 @@ function build_guest_image() {
         exit 1
     fi

-    image_name=${guest_os}_${datastore_type}
+    image_name=${guest_os}-${datastore_type}
     image_folder=$HOME/images
     mkdir -p $image_folder
     image_path=${image_folder}/${image_name}
@@ -850,8 +850,13 @@ function cmd_build_and_upload_image() {
         exit 1
     fi

-    glance_imageid=$(openstack $CLOUD_ADMIN_ARG image list | grep "$datastore_type" | get_field 1)
-    echo "IMAGEID: $glance_imageid"
+    image_var="${datastore_type^^}_IMAGE_ID"
+    glance_imageid=`eval echo '$'"$image_var"`
+
+    if [[ -z $glance_imageid ]]; then
+        # Find the first image id whose name contains datastore_type.
+        glance_imageid=$(openstack $CLOUD_ADMIN_ARG image list | grep "$datastore_type" | awk 'NR==1 {print}' | awk '{print $2}')
+
         if [[ -z $glance_imageid ]]; then
             build_guest_image ${datastore_type} ${guest_os} ${guest_release} ${dev_mode} ${guest_username}

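The lookup added above combines bash uppercase expansion (`${var^^}`) with an indirect variable reference; a standalone sketch of that pattern, with a made-up image ID:

```shell
# Sketch: resolve <DATASTORE>_IMAGE_ID the way cmd_build_and_upload_image does.
MYSQL_IMAGE_ID="deadbeef-0000-1111-2222-333344445555"   # pretend env override
datastore_type="mysql"

image_var="${datastore_type^^}_IMAGE_ID"      # expands to MYSQL_IMAGE_ID
glance_imageid=`eval echo '$'"$image_var"`    # indirect lookup of that variable
echo "resolved: $glance_imageid"
```

Only when the per-datastore variable is empty does the script fall back to searching the Glance image list.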
@@ -862,6 +867,9 @@ function cmd_build_and_upload_image() {
             [[ -z "$glance_imageid" ]] && echo "Glance upload failed!" && exit 1
             echo "IMAGE ID: $glance_imageid"
         fi
+    fi
+
+    echo "IMAGEID: $glance_imageid"

     exclaim "Updating Datastores"
     cmd_set_datastore "${glance_imageid}" "${datastore_type}" "${restart_trove}"
|
|||||||
exclaim "Running kick-start for $DATASTORE_TYPE (restart trove: $RESTART_TROVE)"
|
exclaim "Running kick-start for $DATASTORE_TYPE (restart trove: $RESTART_TROVE)"
|
||||||
dump_env
|
dump_env
|
||||||
cmd_test_init "${DATASTORE_TYPE}"
|
cmd_test_init "${DATASTORE_TYPE}"
|
||||||
cmd_build_and_upload_image "${DATASTORE_TYPE}" "${RESTART_TROVE}"
|
|
||||||
|
export GUEST_OS=${GUEST_OS:-"ubuntu"}
|
||||||
|
export GUEST_OS_RELEASE=${GUEST_OS_RELEASE:-"xenial"}
|
||||||
|
export GUEST_OS_USERNAME=${GUEST_OS_USERNAME:-"ubuntu"}
|
||||||
|
export DEV_MOEE=${DEV_MODE:-"true"}
|
||||||
|
cmd_build_and_upload_image "${DATASTORE_TYPE}" "${RESTART_TROVE}" "${GUEST_OS}" "${GUEST_OS_RELEASE}" "${DEV_MOEE}" "${GUEST_OS_USERNAME}"
|
||||||
}
|
}
|
||||||
|
|
||||||
function cmd_gate_tests() {
|
function cmd_gate_tests() {
|
||||||
|
@@ -108,3 +108,6 @@ SWIFT_DISK_IMAGE=${SWIFT_DATA_DIR}/drives/images/swift.img
 #export TROVE_RESIZE_TIME_OUT=3600
 #export TROVE_USAGE_TIMEOUT=1500
 #export TROVE_STATE_CHANGE_WAIT_TIME=180
+
+# Image
+MYSQL_IMAGE_ID=${MYSQL_IMAGE_ID:-""}
@@ -125,7 +125,6 @@ def import_tests():
     if not ADD_DOMAINS:
         from tests.api import delete_all
        from tests.api import instances_pagination
-        from tests.api import instances_quotas
         from tests.api import instances_states
     from tests.dns import dns
     from tests import initialize
@@ -1,47 +0,0 @@
-from proboscis import before_class
-from proboscis import test
-from proboscis.asserts import assert_raises
-
-from troveclient.compat import exceptions
-from trove.tests.config import CONFIG
-from trove.tests.util import create_client
-
-
-@test(groups=['dbaas.api.instances.quotas'])
-class InstanceQuotas(object):
-
-    created_instances = []
-
-    @before_class
-    def setup(self):
-        self.client = create_client(is_admin=False)
-
-    @test
-    def test_too_many_instances(self):
-        self.created_instances = []
-        if 'trove_max_instances_per_user' in CONFIG.values:
-            too_many = CONFIG.values['trove_max_instances_per_user']
-            already_there = len(self.client.instances.list())
-            flavor = 1
-            for i in range(too_many - already_there):
-                response = self.client.instances.create('too_many_%d' % i,
-                                                        flavor,
-                                                        {'size': 1})
-                self.created_instances.append(response)
-            # This one better fail, because we just reached our quota.
-            assert_raises(exceptions.OverLimit,
-                          self.client.instances.create,
-                          "too_many", flavor,
-                          {'size': 1})
-
-    @test(runs_after=[test_too_many_instances])
-    def delete_excessive_entries(self):
-        # Delete all the instances called too_many*.
-        for id in self.created_instances:
-            while True:
-                try:
-                    self.client.instances.delete(id)
-                except exceptions.UnprocessableEntity:
-                    continue
-                except exceptions.NotFound:
-                    break
@@ -206,7 +206,6 @@ def import_tests():
     from trove.tests.api import configurations  # noqa
     from trove.tests.api import databases  # noqa
     from trove.tests.api import datastores  # noqa
-    from trove.tests.api import flavors  # noqa
     from trove.tests.api import header  # noqa
     from trove.tests.api import instances as rd_instances  # noqa
     from trove.tests.api import instances_actions as rd_actions  # noqa
@@ -447,6 +447,15 @@ class DBaaSInstanceCreate(DBaaSAPINotification):
         return ['instance_id']


+class DBaaSInstanceReboot(DBaaSAPINotification):
+
+    def event_type(self):
+        return 'instance_reboot'
+
+    def required_start_traits(self):
+        return ['instance_id']
+
+
 class DBaaSInstanceRestart(DBaaSAPINotification):

     def event_type(self):
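The reboot notification added above follows Trove's start-notification pattern; the minimal self-contained sketch below mirrors the class and trait names, but it is not the real implementation (the actual base classes live in trove.common.notification):

```python
# Sketch of the notify-on-action pattern used above; DBaaSAPINotification
# and StartNotification here are stand-ins for the real Trove classes.
from contextlib import contextmanager


class DBaaSInstanceReboot:
    def __init__(self, context, **kwargs):
        self.context = context
        self.payload = kwargs

    def event_type(self):
        return 'instance_reboot'

    def required_start_traits(self):
        return ['instance_id']

    def notify_start(self, **traits):
        # Validate required traits before emitting the ".start" event.
        missing = [t for t in self.required_start_traits() if t not in traits]
        if missing:
            raise ValueError("missing traits: %s" % missing)
        self.payload.update(traits)
        return 'dbaas.%s.start' % self.event_type()


@contextmanager
def start_notification(notification, **traits):
    # Emit the start event before the wrapped action runs, as
    # StartNotification does in Trove.
    emitted = notification.notify_start(**traits)
    yield emitted


notif = DBaaSInstanceReboot(context={}, request=None)
with start_notification(notif, instance_id='abc-123') as event:
    print(event)  # -> dbaas.instance_reboot.start
```

This is why `_action_reboot` below only needs to assign `context.notification` and wrap the reboot in `StartNotification`: the event name and required traits come from the notification class itself.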
@@ -131,7 +131,14 @@ class MgmtInstanceController(InstanceController):

     def _action_reboot(self, context, instance, req, body):
         LOG.debug("Rebooting instance %s.", instance.id)
+
+        context.notification = notification.DBaaSInstanceReboot(
+            context,
+            request=req
+        )
+        with StartNotification(context, instance_id=instance.id):
             instance.reboot()
+
         return wsgi.Result(None, 202)

     def _action_migrate(self, context, instance, req, body):
@@ -36,13 +36,11 @@ from trove.instance import models, views
 from trove.module import models as module_models
 from trove.module import views as module_views
 
 
 CONF = cfg.CONF
 LOG = logging.getLogger(__name__)
 
 
 class InstanceController(wsgi.Controller):
 
     """Controller for instance functionality."""
     schemas = apischema.instance.copy()
@@ -1259,7 +1259,6 @@ class BuiltInstanceTasks(BuiltInstance, NotifyMixin, ConfigurationMixin):
 
     def reboot(self):
         try:
-            # Issue a guest stop db call to shutdown the db if running
             LOG.debug("Stopping datastore on instance %s.", self.id)
             try:
                 self.guest.stop_db()
@@ -1268,15 +1267,27 @@ class BuiltInstanceTasks(BuiltInstance, NotifyMixin, ConfigurationMixin):
                 # Also we check guest state before issuing reboot
                 LOG.debug(str(e))
 
-            self._refresh_datastore_status()
-            if not (self.datastore_status_matches(
-                    rd_instance.ServiceStatuses.SHUTDOWN) or
-                    self.datastore_status_matches(
-                    rd_instance.ServiceStatuses.CRASHED)):
-                # We will bail if db did not get stopped or is blocked
-                LOG.error("Cannot reboot instance. DB status is %s.",
-                          self.datastore_status.status)
-                return
+            # Wait for the mysql stopped.
+            def _datastore_is_offline():
+                self._refresh_datastore_status()
+                return (
+                    self.datastore_status_matches(
+                        rd_instance.ServiceStatuses.SHUTDOWN) or
+                    self.datastore_status_matches(
+                        rd_instance.ServiceStatuses.CRASHED)
+                )
+
+            try:
+                utils.poll_until(
+                    _datastore_is_offline,
+                    sleep_time=3,
+                    time_out=CONF.reboot_time_out
+                )
+            except exception.PollTimeOut:
+                LOG.error("Cannot reboot instance, DB status is %s",
+                          self.datastore_status.status)
+                return
 
             LOG.debug("The guest service status is %s.",
                       self.datastore_status.status)
 
@@ -1291,7 +1302,7 @@ class BuiltInstanceTasks(BuiltInstance, NotifyMixin, ConfigurationMixin):
 
             utils.poll_until(
                 update_server_info,
-                sleep_time=2,
+                sleep_time=3,
                 time_out=reboot_time_out)
 
             # Set the status to PAUSED. The guest agent will reset the status
@@ -1302,7 +1313,6 @@ class BuiltInstanceTasks(BuiltInstance, NotifyMixin, ConfigurationMixin):
             LOG.error("Failed to reboot instance %(id)s: %(e)s",
                       {'id': self.id, 'e': str(e)})
         finally:
-            LOG.debug("Rebooting FINALLY %s", self.id)
             self.reset_task_status()
 
     def restart(self):
@@ -53,7 +53,7 @@ backup_count_prior_to_create = 0
 backup_count_for_instance_prior_to_create = 0
 
 
-@test(depends_on_groups=[instances_actions.GROUP_STOP_MYSQL],
+@test(depends_on_groups=[instances_actions.GROUP_RESIZE],
       groups=[BACKUP_GROUP, tests.INSTANCES],
       enabled=CONFIG.swift_enabled)
 class CreateBackups(object):
@@ -380,7 +380,7 @@ class DeleteRestoreInstance(object):
         assert_raises(exceptions.NotFound, instance_info.dbaas.instances.get,
                       instance_id)
 
-    @test(runs_after=[VerifyRestore.test_database_restored_incremental])
+    @test(depends_on=[VerifyRestore.test_database_restored_incremental])
     def test_delete_restored_instance_incremental(self):
         try:
             self._delete(incremental_restore_instance_id)
@@ -1,265 +0,0 @@
-# Copyright (c) 2011 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-
-from nose.tools import assert_equal
-from nose.tools import assert_false
-from nose.tools import assert_true
-from proboscis.asserts import assert_raises
-from proboscis import before_class
-from proboscis.decorators import time_out
-from proboscis import test
-
-from trove.common.utils import poll_until
-from trove import tests
-from trove.tests.api.instances import TIMEOUT_INSTANCE_CREATE
-from trove.tests.config import CONFIG
-from trove.tests.util.check import AttrCheck
-from trove.tests.util import create_dbaas_client
-from trove.tests.util import create_nova_client
-from trove.tests.util import test_config
-from trove.tests.util.users import Requirements
-from troveclient.compat import exceptions
-from troveclient.v1.flavors import Flavor
-
-GROUP = "dbaas.api.flavors"
-GROUP_DS = "dbaas.api.datastores"
-FAKE_MODE = test_config.values['fake_mode']
-
-servers_flavors = None
-dbaas_flavors = None
-user = None
-
-
-def assert_attributes_equal(name, os_flavor, dbaas_flavor):
-    """Given an attribute name and two objects,
-    ensures the attribute is equal.
-    """
-    assert_true(hasattr(os_flavor, name),
-                "open stack flavor did not have attribute %s" % name)
-    assert_true(hasattr(dbaas_flavor, name),
-                "dbaas flavor did not have attribute %s" % name)
-    expected = getattr(os_flavor, name)
-    actual = getattr(dbaas_flavor, name)
-    assert_equal(expected, actual,
-                 'DBaas flavor differs from Open Stack on attribute ' + name)
-
-
-def assert_flavors_roughly_equivalent(os_flavor, dbaas_flavor):
-    assert_attributes_equal('name', os_flavor, dbaas_flavor)
-    assert_attributes_equal('ram', os_flavor, dbaas_flavor)
-
-
-def assert_link_list_is_equal(flavor):
-    assert_true(hasattr(flavor, 'links'))
-    assert_true(flavor.links)
-
-    if flavor.id:
-        flavor_id = str(flavor.id)
-    else:
-        flavor_id = flavor.str_id
-
-    for link in flavor.links:
-        href = link['href']
-
-        if "self" in link['rel']:
-            expected_href = os.path.join(test_config.dbaas_url, "flavors",
-                                         str(flavor.id))
-            url = test_config.dbaas_url.replace('http:', 'https:', 1)
-            msg = ("REL HREF %s doesn't start with %s" %
-                   (href, test_config.dbaas_url))
-            assert_true(href.startswith(url), msg)
-            url = os.path.join("flavors", flavor_id)
-            msg = "REL HREF %s doesn't end in '%s'" % (href, url)
-            assert_true(href.endswith(url), msg)
-        elif "bookmark" in link['rel']:
-            base_url = test_config.version_url.replace('http:', 'https:', 1)
-            expected_href = os.path.join(base_url, "flavors", flavor_id)
-            msg = 'bookmark "href" must be %s, not %s' % (expected_href, href)
-            assert_equal(href, expected_href, msg)
-        else:
-            assert_false(True, "Unexpected rel - %s" % link['rel'])
-
-
-@test(groups=[tests.DBAAS_API, GROUP, GROUP_DS, tests.PRE_INSTANCES],
-      depends_on_groups=["services.initialize"])
-class Flavors(object):
-
-    @before_class
-    def setUp(self):
-        rd_user = test_config.users.find_user(
-            Requirements(is_admin=False, services=["trove"]))
-        self.rd_client = create_dbaas_client(rd_user)
-
-        if test_config.nova_client is not None:
-            nova_user = test_config.users.find_user(
-                Requirements(services=["nova"]))
-            self.nova_client = create_nova_client(nova_user)
-
-    def get_expected_flavors(self):
-        # If we have access to the client, great! Let's use that as the flavors
-        # returned by Trove should be identical.
-        if test_config.nova_client is not None:
-            return self.nova_client.flavors.list()
-        # If we don't have access to the client the flavors need to be spelled
-        # out in the config file.
-        flavors = [Flavor(Flavors, flavor_dict, loaded=True)
-                   for flavor_dict in test_config.flavors]
-        return flavors
-
-    @test
-    def confirm_flavors_lists_nearly_identical(self):
-        os_flavors = self.get_expected_flavors()
-        dbaas_flavors = self.rd_client.flavors.list()
-
-        print("Open Stack Flavors:")
-        print(os_flavors)
-        print("DBaaS Flavors:")
-        print(dbaas_flavors)
-        # Length of both flavors list should be identical.
-        assert_equal(len(os_flavors), len(dbaas_flavors))
-        for os_flavor in os_flavors:
-            found_index = None
-            for index, dbaas_flavor in enumerate(dbaas_flavors):
-                if os_flavor.name == dbaas_flavor.name:
-                    msg = ("Flavor ID '%s' appears in elements #%s and #%d." %
-                           (dbaas_flavor.id, str(found_index), index))
-                    assert_true(found_index is None, msg)
-                    assert_flavors_roughly_equivalent(os_flavor, dbaas_flavor)
-                    found_index = index
-            msg = "Some flavors from OS list were missing in DBAAS list."
-            assert_false(found_index is None, msg)
-        for flavor in dbaas_flavors:
-            assert_link_list_is_equal(flavor)
-
-    @test
-    def test_flavor_list_attrs(self):
-        allowed_attrs = ['id', 'name', 'ram', 'vcpus', 'disk', 'links',
-                         'ephemeral', 'local_storage', 'str_id']
-        flavors = self.rd_client.flavors.list()
-        attrcheck = AttrCheck()
-        for flavor in flavors:
-            flavor_dict = flavor._info
-            attrcheck.contains_allowed_attrs(
-                flavor_dict, allowed_attrs,
-                msg="Flavors list")
-            attrcheck.links(flavor_dict['links'])
-
-    @test
-    def test_flavor_get_attrs(self):
-        allowed_attrs = ['id', 'name', 'ram', 'vcpus', 'disk', 'links',
-                         'ephemeral', 'local_storage', 'str_id']
-        flavor = self.rd_client.flavors.get(1)
-        attrcheck = AttrCheck()
-        flavor_dict = flavor._info
-        attrcheck.contains_allowed_attrs(
-            flavor_dict, allowed_attrs,
-            msg="Flavor Get 1")
-        attrcheck.links(flavor_dict['links'])
-
-    @test
-    def test_flavor_not_found(self):
-        assert_raises(exceptions.NotFound,
-                      self.rd_client.flavors.get, "foo")
-
-    @test
-    def test_flavor_list_datastore_version_associated_flavors(self):
-        datastore = self.rd_client.datastores.get(
-            test_config.dbaas_datastore)
-        dbaas_flavors = (self.rd_client.flavors.
-                         list_datastore_version_associated_flavors(
-                             datastore=test_config.dbaas_datastore,
-                             version_id=datastore.default_version))
-        os_flavors = self.get_expected_flavors()
-        assert_equal(len(dbaas_flavors), len(os_flavors))
-        # verify flavor lists are identical
-        for os_flavor in os_flavors:
-            found_index = None
-            for index, dbaas_flavor in enumerate(dbaas_flavors):
-                if os_flavor.name == dbaas_flavor.name:
-                    msg = ("Flavor ID '%s' appears in elements #%s and #%d." %
-                           (dbaas_flavor.id, str(found_index), index))
-                    assert_true(found_index is None, msg)
-                    assert_flavors_roughly_equivalent(os_flavor, dbaas_flavor)
-                    found_index = index
-            msg = "Some flavors from OS list were missing in DBAAS list."
-            assert_false(found_index is None, msg)
-        for flavor in dbaas_flavors:
-            assert_link_list_is_equal(flavor)
-
-
-@test(runs_after=[Flavors],
-      groups=[tests.DBAAS_API, GROUP, GROUP_DS],
-      depends_on_groups=["services.initialize"],
-      enabled=FAKE_MODE)
-class DatastoreFlavorAssociation(object):
-
-    @before_class
-    def setUp(self):
-        rd_user = test_config.users.find_user(
-            Requirements(is_admin=False, services=["trove"]))
-        self.rd_client = create_dbaas_client(rd_user)
-
-        self.datastore = self.rd_client.datastores.get(
-            test_config.dbaas_datastore)
-        self.name1 = "test_instance1"
-        self.name2 = "test_instance2"
-        self.volume = {'size': 2}
-        self.instance_id = None
-        self.nics = None
-        shared_network = CONFIG.get('shared_network', None)
-        if shared_network:
-            self.nics = [{'net-id': shared_network}]
-
-    @test
-    @time_out(TIMEOUT_INSTANCE_CREATE)
-    def test_create_instance_with_valid_flavor_association(self):
-        # all the nova flavors are associated with the default datastore
-        result = self.rd_client.instances.create(
-            name=self.name1, flavor_id='1', volume=self.volume,
-            datastore=self.datastore.id,
-            nics=self.nics)
-        self.instance_id = result.id
-        assert_equal(200, self.rd_client.last_http_code)
-
-        def result_is_active():
-            instance = self.rd_client.instances.get(self.instance_id)
-            if instance.status == "ACTIVE":
-                return True
-            else:
-                # If its not ACTIVE, anything but BUILD must be
-                # an error.
-                assert_equal("BUILD", instance.status)
-                return False
-
-        poll_until(result_is_active)
-        self.rd_client.instances.delete(self.instance_id)
-
-    @test(runs_after=[test_create_instance_with_valid_flavor_association])
-    def test_create_instance_with_invalid_flavor_association(self):
-        dbaas_flavors = (self.rd_client.flavors.
-                         list_datastore_version_associated_flavors(
-                             datastore=test_config.dbaas_datastore,
-                             version_id=self.datastore.default_version))
-        self.flavor_not_associated = None
-        os_flavors = Flavors().get_expected_flavors()
-        for os_flavor in os_flavors:
-            if os_flavor not in dbaas_flavors:
-                self.flavor_not_associated = os_flavor.id
-                break
-        if self.flavor_not_associated is not None:
-            assert_raises(exceptions.BadRequest,
-                          self.rd_client.instances.create, self.name2,
-                          flavor_not_associated, self.volume,
-                          datastore=self.datastore.id,
-                          nics=self.nics)
@@ -16,11 +16,9 @@
 import netaddr
 import os
 import time
-from time import sleep
 import unittest
 import uuid
 
-from proboscis import after_class
 from proboscis.asserts import assert_equal
 from proboscis.asserts import assert_false
 from proboscis.asserts import assert_is_not_none
@@ -35,7 +33,6 @@ from proboscis import test
 from troveclient.compat import exceptions
 
 from trove.common import cfg
-from trove.common import exception as rd_exceptions
 from trove.common.utils import poll_until
 from trove.datastore import models as datastore_models
 from trove import tests
@@ -116,15 +113,17 @@ class InstanceTestInfo(object):
                 'eph.rd-tiny')
         else:
             flavor_name = CONFIG.values.get('instance_flavor_name', 'm1.tiny')
+
         flavors = self.dbaas.find_flavors_by_name(flavor_name)
         assert_equal(len(flavors), 1,
                      "Number of flavors with name '%s' "
                      "found was '%d'." % (flavor_name, len(flavors)))
+
         flavor = flavors[0]
-        assert_true(flavor is not None, "Flavor '%s' not found!" % flavor_name)
         flavor_href = self.dbaas.find_flavor_self_href(flavor)
         assert_true(flavor_href is not None,
                     "Flavor href '%s' not found!" % flavor_name)
 
         return flavor, flavor_href
 
     def get_address(self, mgmt=False):
@@ -255,17 +254,10 @@ def test_delete_instance_not_found():
       groups=[GROUP, GROUP_QUOTAS],
       runs_after_groups=[tests.PRE_INSTANCES])
 class CreateInstanceQuotaTest(unittest.TestCase):
 
-    def setUp(self):
-        import copy
-
-        self.test_info = copy.deepcopy(instance_info)
-        self.test_info.dbaas_datastore = CONFIG.dbaas_datastore
-
     def tearDown(self):
         quota_dict = {'instances': CONFIG.trove_max_instances_per_tenant,
                       'volumes': CONFIG.trove_max_volumes_per_tenant}
-        dbaas_admin.quota.update(self.test_info.user.tenant_id,
+        dbaas_admin.quota.update(instance_info.user.tenant_id,
                                  quota_dict)
 
     def test_instance_size_too_big(self):
@@ -273,52 +265,48 @@ class CreateInstanceQuotaTest(unittest.TestCase):
                 VOLUME_SUPPORT):
             too_big = CONFIG.trove_max_accepted_volume_size
 
-            self.test_info.volume = {'size': too_big + 1}
-            self.test_info.name = "way_too_large"
             assert_raises(exceptions.OverLimit,
                           dbaas.instances.create,
-                          self.test_info.name,
-                          self.test_info.dbaas_flavor_href,
-                          self.test_info.volume,
+                          "volume_size_too_large",
+                          instance_info.dbaas_flavor_href,
+                          {'size': too_big + 1},
                           nics=instance_info.nics)
 
     def test_update_quota_invalid_resource_should_fail(self):
         quota_dict = {'invalid_resource': 100}
         assert_raises(exceptions.NotFound, dbaas_admin.quota.update,
-                      self.test_info.user.tenant_id, quota_dict)
+                      instance_info.user.tenant_id, quota_dict)
 
     def test_update_quota_volume_should_fail_volume_not_supported(self):
         if VOLUME_SUPPORT:
             raise SkipTest("Volume support needs to be disabled")
         quota_dict = {'volumes': 100}
         assert_raises(exceptions.NotFound, dbaas_admin.quota.update,
-                      self.test_info.user.tenant_id, quota_dict)
+                      instance_info.user.tenant_id, quota_dict)
 
     def test_create_too_many_instances(self):
         instance_quota = 0
         quota_dict = {'instances': instance_quota}
-        new_quotas = dbaas_admin.quota.update(self.test_info.user.tenant_id,
+        new_quotas = dbaas_admin.quota.update(instance_info.user.tenant_id,
                                               quota_dict)
 
-        set_quota = dbaas_admin.quota.show(self.test_info.user.tenant_id)
+        set_quota = dbaas_admin.quota.show(instance_info.user.tenant_id)
         verify_quota = {q.resource: q.limit for q in set_quota}
 
         assert_equal(new_quotas['instances'], quota_dict['instances'])
         assert_equal(0, verify_quota['instances'])
-        self.test_info.volume = None
 
+        volume = None
         if VOLUME_SUPPORT:
             assert_equal(CONFIG.trove_max_volumes_per_tenant,
                          verify_quota['volumes'])
-            self.test_info.volume = {'size':
-                                     CONFIG.get('trove_volume_size', 1)}
+            volume = {'size': CONFIG.get('trove_volume_size', 1)}
 
-        self.test_info.name = "too_many_instances"
         assert_raises(exceptions.OverLimit,
                       dbaas.instances.create,
-                      self.test_info.name,
-                      self.test_info.dbaas_flavor_href,
-                      self.test_info.volume,
+                      "too_many_instances",
+                      instance_info.dbaas_flavor_href,
+                      volume,
                       nics=instance_info.nics)
 
         assert_equal(413, dbaas.last_http_code)
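The test above drives the quota API directly: set the instance quota to 0 with `quota.update`, verify it with `quota.show`, then expect `instances.create` to fail with `OverLimit` (HTTP 413). A toy in-memory sketch of that update/show/enforce flow (all names here are hypothetical stand-ins, not the troveclient quota API):

```python
class OverLimit(Exception):
    """Stand-in for troveclient's exceptions.OverLimit (HTTP 413)."""


class ToyQuotaService:
    """Toy quota tracker mirroring the update/show/enforce flow in the test."""

    def __init__(self, instances=5, volumes=20):
        self.limits = {"instances": instances, "volumes": volumes}
        self.in_use = {"instances": 0, "volumes": 0}

    def update(self, quota_dict):
        # Like quota.update(): apply new limits and return them.
        self.limits.update(quota_dict)
        return dict(self.limits)

    def show(self):
        # Like quota.show(): report the current limits.
        return dict(self.limits)

    def create_instance(self):
        # Enforce the quota before "creating" anything.
        if self.in_use["instances"] >= self.limits["instances"]:
            raise OverLimit("instance quota exceeded")
        self.in_use["instances"] += 1


quota = ToyQuotaService()
new_quotas = quota.update({"instances": 0})
assert new_quotas["instances"] == 0

try:
    quota.create_instance()
    raised = False
except OverLimit:
    raised = True
print(raised)  # True
```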
@@ -328,17 +316,15 @@ class CreateInstanceQuotaTest(unittest.TestCase):
             raise SkipTest("Volume support not enabled")
         volume_quota = 3
         quota_dict = {'volumes': volume_quota}
-        self.test_info.volume = {'size': volume_quota + 1}
-        new_quotas = dbaas_admin.quota.update(self.test_info.user.tenant_id,
+        new_quotas = dbaas_admin.quota.update(instance_info.user.tenant_id,
                                               quota_dict)
         assert_equal(volume_quota, new_quotas['volumes'])
 
-        self.test_info.name = "too_large_volume"
         assert_raises(exceptions.OverLimit,
                       dbaas.instances.create,
-                      self.test_info.name,
-                      self.test_info.dbaas_flavor_href,
-                      self.test_info.volume,
+                      "too_large_volume",
+                      instance_info.dbaas_flavor_href,
+                      {'size': volume_quota + 1},
                       nics=instance_info.nics)
 
         assert_equal(413, dbaas.last_http_code)
@@ -474,6 +460,7 @@ class CreateInstanceFail(object):
         databases = []
         flavor_name = CONFIG.values.get('instance_flavor_name', 'm1.tiny')
         flavors = dbaas.find_flavors_by_name(flavor_name)
+
         assert_raises(exceptions.BadRequest, dbaas.instances.create,
                       instance_name, flavors[0].id, None, databases,
                       nics=instance_info.nics)
@@ -1508,86 +1495,3 @@ class CheckInstance(AttrCheck):
                 slave, allowed_attrs,
                 msg="Replica links not found")
             self.links(slave['links'])
-
-
-@test(groups=[GROUP])
-class BadInstanceStatusBug(object):
-
-    @before_class()
-    def setUp(self):
-        self.instances = []
-        reqs = Requirements(is_admin=True)
-        self.user = CONFIG.users.find_user(
-            reqs, black_list=[])
-        self.client = create_dbaas_client(self.user)
-        self.mgmt = self.client.management
-
-    @test
-    def test_instance_status_after_double_migrate(self):
-        """
-        This test is to verify that instance status returned is more
-        informative than 'Status is {}'. There are several ways to
-        replicate this error. A double migration is just one of them but
-        since this is a known way to recreate that error we will use it
-        here to be sure that the error is fixed. The actual code lives
-        in trove/instance/models.py in _validate_can_perform_action()
-        """
-        # TODO(imsplitbit): test other instances where this issue could be
-        # replicated. Resizing a resized instance awaiting confirmation
-        # can be used as another case. This all boils back to the same
-        # piece of code so I'm not sure if it's relevant or not but could
-        # be done.
-        size = None
-        if VOLUME_SUPPORT:
-            size = {'size': 5}
-
-        result = self.client.instances.create('testbox',
-                                              instance_info.dbaas_flavor_href,
-                                              size,
-                                              nics=instance_info.nics)
-        id = result.id
-        self.instances.append(id)
-
-        def verify_instance_is_active():
-            result = self.client.instances.get(id)
-            print(result.status)
-            return result.status == 'ACTIVE'
-
-        def attempt_migrate():
-            print('attempting migration')
-            try:
-                self.mgmt.migrate(id)
-            except exceptions.UnprocessableEntity:
-                return False
-            return True
-
-        # Timing necessary to make the error occur
-        poll_until(verify_instance_is_active, time_out=120, sleep_time=1)
-
-        try:
-            poll_until(attempt_migrate, time_out=10, sleep_time=1)
-        except rd_exceptions.PollTimeOut:
-            fail('Initial migration timed out')
-
-        try:
-            self.mgmt.migrate(id)
-        except exceptions.UnprocessableEntity as err:
-            assert('status was {}' not in err.message)
-        else:
-            # If we are trying to test what status is returned when an
-            # instance is in a confirm_resize state and another
-            # migration is attempted then we also need to
-            # assert that an exception is raised when running migrate.
-            # If one is not then we aren't able to test what the
-            # returned status is in the exception message.
-            fail('UnprocessableEntity was not thrown')
-
-    @after_class(always_run=True)
-    def tearDown(self):
-        while len(self.instances) > 0:
-            for id in self.instances:
-                try:
-                    self.client.instances.delete(id)
-                    self.instances.remove(id)
-                except exceptions.UnprocessableEntity:
-                    sleep(1.0)
|
|||||||
GROUP_RESTART = "dbaas.api.instances.actions.restart"
|
GROUP_RESTART = "dbaas.api.instances.actions.restart"
|
||||||
GROUP_RESIZE = "dbaas.api.instances.actions.resize"
|
GROUP_RESIZE = "dbaas.api.instances.actions.resize"
|
||||||
GROUP_STOP_MYSQL = "dbaas.api.instances.actions.stop"
|
GROUP_STOP_MYSQL = "dbaas.api.instances.actions.stop"
|
||||||
|
GROUP_UPDATE_GUEST = "dbaas.api.instances.actions.update_guest"
|
||||||
MYSQL_USERNAME = "test_user"
|
MYSQL_USERNAME = "test_user"
|
||||||
MYSQL_PASSWORD = "abcde"
|
MYSQL_PASSWORD = "abcde"
|
||||||
# stored in test conf
|
# stored in test conf
|
||||||
@@ -104,7 +105,6 @@ def get_resize_timeout():
 
 
 TIME_OUT_TIME = get_resize_timeout()
-USER_WAS_DELETED = False
 
 
 class ActionTestBase(object):
@@ -223,23 +223,25 @@ class RebootTestBase(ActionTestBase):
     def call_reboot(self):
         raise NotImplementedError()
 
-    def wait_for_broken_connection(self):
-        """Wait until our connection breaks."""
-        if not USE_IP:
-            return
-        if not hasattr(self, "connection"):
-            return
-        poll_until(self.connection.is_connected,
-                   lambda connected: not connected,
-                   time_out=TIME_OUT_TIME)
-
     def wait_for_successful_restart(self):
-        """Wait until status becomes running."""
-        def is_finished_rebooting():
+        """Wait until status becomes running.
+
+        Reboot is an async operation, make sure the instance is rebooting
+        before active.
+        """
+        def _is_rebooting():
             instance = self.instance
             if instance.status == "REBOOT":
+                return True
+            return False
+
+        poll_until(_is_rebooting, time_out=TIME_OUT_TIME)
+
+        def is_finished_rebooting():
+            instance = self.instance
+            asserts.assert_not_equal(instance.status, "ERROR")
+            if instance.status != "ACTIVE":
                 return False
-            asserts.assert_equal("ACTIVE", instance.status)
             return True
 
         poll_until(is_finished_rebooting, time_out=TIME_OUT_TIME)
@@ -253,45 +255,10 @@ class RebootTestBase(ActionTestBase):
 
     def successful_restart(self):
         """Restart MySQL via the REST API successfully."""
-        self.fix_mysql()
         self.call_reboot()
-        self.wait_for_broken_connection()
         self.wait_for_successful_restart()
         self.assert_mysql_proc_is_different()
 
-    def mess_up_mysql(self):
-        """Ruin MySQL's ability to restart."""
-        server = create_server_connection(self.instance_id,
-                                          self.instance_mgmt_address)
-        cmd_template = "sudo cp /dev/null /var/lib/mysql/data/ib_logfile%d"
-        instance_info.dbaas_admin.management.stop(self.instance_id)
-
-        for index in range(2):
-            cmd = cmd_template % index
-            try:
-                server.execute(cmd)
-            except Exception as e:
-                asserts.fail("Failed to execute command %s, error: %s" %
-                             (cmd, str(e)))
-
-    def fix_mysql(self):
-        """Fix MySQL's ability to restart."""
-        if not FAKE_MODE:
-            server = create_server_connection(self.instance_id,
-                                              self.instance_mgmt_address)
-            cmd_template = "sudo rm /var/lib/mysql/data/ib_logfile%d"
-            # We want to stop mysql so that upstart does not keep trying to
-            # respawn it and block the guest agent from accessing the logs.
-            instance_info.dbaas_admin.management.stop(self.instance_id)
-
-            for index in range(2):
-                cmd = cmd_template % index
-                try:
-                    server.execute(cmd)
-                except Exception as e:
-                    asserts.fail("Failed to execute command %s, error: %s" %
-                                 (cmd, str(e)))
-
     def wait_for_failure_status(self):
         """Wait until status becomes running."""
         def is_finished_rebooting():
@@ -306,19 +273,6 @@ class RebootTestBase(ActionTestBase):
 
         poll_until(is_finished_rebooting, time_out=TIME_OUT_TIME)
 
-    def unsuccessful_restart(self):
-        """Restart MySQL via the REST when it should fail, assert it does."""
-        assert not FAKE_MODE
-        self.mess_up_mysql()
-        self.call_reboot()
-        self.wait_for_broken_connection()
-        self.wait_for_failure_status()
-
-    def restart_normally(self):
-        """Fix iblogs and reboot normally."""
-        self.fix_mysql()
-        self.test_successful_restart()
-
 
 @test(groups=[tests.INSTANCES, INSTANCE_GROUP, GROUP, GROUP_RESTART],
       depends_on_groups=[GROUP_START], depends_on=[create_user])
@@ -338,22 +292,14 @@ class RestartTests(RebootTestBase):
         """Make sure MySQL is accessible before restarting."""
         self.ensure_mysql_is_running()
 
-    @test(depends_on=[test_ensure_mysql_is_running], enabled=not FAKE_MODE)
-    def test_unsuccessful_restart(self):
-        """Restart MySQL via the REST when it should fail, assert it does."""
-        if FAKE_MODE:
-            raise SkipTest("Cannot run this in fake mode.")
-        self.unsuccessful_restart()
-
-    @test(depends_on=[test_set_up],
-          runs_after=[test_ensure_mysql_is_running, test_unsuccessful_restart])
+    @test(depends_on=[test_ensure_mysql_is_running])
     def test_successful_restart(self):
         """Restart MySQL via the REST API successfully."""
         self.successful_restart()
 
 
 @test(groups=[tests.INSTANCES, INSTANCE_GROUP, GROUP, GROUP_STOP_MYSQL],
-      depends_on_groups=[GROUP_START], depends_on=[create_user])
+      depends_on_groups=[GROUP_RESTART], depends_on=[create_user])
 class StopTests(RebootTestBase):
     """Tests which involve stopping MySQL."""
 
@@ -373,11 +319,10 @@ class StopTests(RebootTestBase):
     def test_stop_mysql(self):
         """Stops MySQL."""
         instance_info.dbaas_admin.management.stop(self.instance_id)
-        self.wait_for_broken_connection()
         self.wait_for_failure_status()
 
     @test(depends_on=[test_stop_mysql])
-    def test_instance_get_shows_volume_info_while_mysql_is_down(self):
+    def test_volume_info_while_mysql_is_down(self):
         """
         Confirms the get call behaves appropriately while an instance is
         down.
@@ -392,15 +337,14 @@ class StopTests(RebootTestBase):
             check.true(isinstance(instance.volume.get('size', None), int))
             check.true(isinstance(instance.volume.get('used', None), float))
 
-    @test(depends_on=[test_set_up],
-          runs_after=[test_instance_get_shows_volume_info_while_mysql_is_down])
+    @test(depends_on=[test_volume_info_while_mysql_is_down])
     def test_successful_restart_when_in_shutdown_state(self):
         """Restart MySQL via the REST API successfully when MySQL is down."""
         self.successful_restart()
 
 
 @test(groups=[tests.INSTANCES, INSTANCE_GROUP, GROUP, GROUP_REBOOT],
-      depends_on_groups=[GROUP_START], depends_on=[RestartTests, create_user])
+      depends_on_groups=[GROUP_STOP_MYSQL])
 class RebootTests(RebootTestBase):
     """Tests restarting instance."""
 
@@ -418,14 +362,7 @@ class RebootTests(RebootTestBase):
         """Make sure MySQL is accessible before restarting."""
         self.ensure_mysql_is_running()
 
-    @test(depends_on=[test_ensure_mysql_is_running])
-    def test_unsuccessful_restart(self):
-        """Restart MySQL via the REST when it should fail, assert it does."""
-        if FAKE_MODE:
-            raise SkipTest("Cannot run this in fake mode.")
-        self.unsuccessful_restart()
-
-    @after_class(depends_on=[test_set_up])
+    @after_class(depends_on=[test_ensure_mysql_is_running])
     def test_successful_restart(self):
         """Restart MySQL via the REST API successfully."""
         if FAKE_MODE:
@@ -434,8 +371,7 @@ class RebootTests(RebootTestBase):
 
 
 @test(groups=[tests.INSTANCES, INSTANCE_GROUP, GROUP, GROUP_RESIZE],
-      depends_on_groups=[GROUP_START], depends_on=[create_user],
-      runs_after=[RebootTests])
+      depends_on_groups=[GROUP_REBOOT])
 class ResizeInstanceTest(ActionTestBase):
 
     """
@@ -466,7 +402,6 @@ class ResizeInstanceTest(ActionTestBase):
             self.connection.connect()
             asserts.assert_true(self.connection.is_connected(),
                                 "Should be able to connect before resize.")
-        self.user_was_deleted = False
 
     @test
     def test_instance_resize_same_size_should_fail(self):
@@ -484,8 +419,6 @@ class ResizeInstanceTest(ActionTestBase):
         poll_until(is_active, time_out=TIME_OUT_TIME)
         asserts.assert_equal(self.instance.status, 'ACTIVE')
 
-        self.get_flavor_href(
-            flavor_id=self.expected_old_flavor_id)
         asserts.assert_raises(HTTPNotImplemented,
                               self.dbaas.instances.resize_instance,
                               self.instance_id, flavors[0].id)
@@ -517,11 +450,6 @@ class ResizeInstanceTest(ActionTestBase):
         flavor = flavors[0]
         self.old_dbaas_flavor = instance_info.dbaas_flavor
         instance_info.dbaas_flavor = flavor
-        asserts.assert_true(flavor is not None,
-                            "Flavor '%s' not found!" % flavor_name)
-        flavor_href = self.dbaas.find_flavor_self_href(flavor)
-        asserts.assert_true(flavor_href is not None,
-                            "Flavor href '%s' not found!" % flavor_name)
         self.expected_new_flavor_id = flavor.id
 
     @test(depends_on=[test_instance_resize_same_size_should_fail])
@@ -579,45 +507,6 @@ class ResizeInstanceTest(ActionTestBase):
         expected = self.get_flavor_href(flavor_id=self.expected_new_flavor_id)
         asserts.assert_equal(actual, expected)
 
-    @test(depends_on=[test_instance_has_new_flavor_after_resize])
-    @time_out(TIME_OUT_TIME)
-    def test_resize_down(self):
-        expected_dbaas_flavor = self.expected_dbaas_flavor
-
-        def is_active():
-            return self.instance.status == 'ACTIVE'
-        poll_until(is_active, time_out=TIME_OUT_TIME)
-        asserts.assert_equal(self.instance.status, 'ACTIVE')
-
-        old_flavor_href = self.get_flavor_href(
-            flavor_id=self.expected_old_flavor_id)
-
-        self.dbaas.instances.resize_instance(self.instance_id, old_flavor_href)
-        asserts.assert_equal(202, self.dbaas.last_http_code)
-        self.old_dbaas_flavor = instance_info.dbaas_flavor
-        instance_info.dbaas_flavor = expected_dbaas_flavor
-        self.wait_for_resize()
-        asserts.assert_equal(str(self.instance.flavor['id']),
-                             str(self.expected_old_flavor_id))
-
-    @test(depends_on=[test_resize_down],
-          groups=["dbaas.usage"])
-    def test_resize_instance_down_usage_event_sent(self):
-        expected = self._build_expected_msg()
-        expected['old_instance_size'] = self.old_dbaas_flavor.ram
-        instance_info.consumer.check_message(instance_info.id,
-                                             'trove.instance.modify_flavor',
-                                             **expected)
-
-
-@test(groups=[tests.INSTANCES, INSTANCE_GROUP, GROUP,
-              GROUP + ".resize.instance"],
-      depends_on_groups=[GROUP_START], depends_on=[create_user],
-      runs_after=[RebootTests, ResizeInstanceTest])
-def resize_should_not_delete_users():
-    if USER_WAS_DELETED:
-        asserts.fail("Somehow, the resize made the test user disappear.")
-
 
 @test(depends_on=[ResizeInstanceTest],
       groups=[GROUP, tests.INSTANCES, INSTANCE_GROUP, GROUP_RESIZE],
@@ -708,9 +597,8 @@ class ResizeInstanceVolume(ActionTestBase):
 UPDATE_GUEST_CONF = CONFIG.values.get("guest-update-test", None)
 
 
-@test(groups=[tests.INSTANCES, INSTANCE_GROUP, GROUP, GROUP + ".update_guest"],
-      depends_on=[create_user],
-      depends_on_groups=[GROUP_START])
+@test(groups=[tests.INSTANCES, INSTANCE_GROUP, GROUP, GROUP_UPDATE_GUEST],
+      depends_on_groups=[GROUP_RESIZE])
 class UpdateGuest(object):
 
     def get_version(self):
@@ -52,6 +52,7 @@ class TestBase(object):
             'm1.tiny')
         flavor2_name = test_config.values.get(
             'instance_bigger_flavor_name', 'm1.small')
+
         flavors = self.client.find_flavors_by_name(flavor_name)
         self.flavor_id = flavors[0].id
         self.name = "TEST_" + str(uuid.uuid4())
@@ -18,7 +18,6 @@ from trove.tests.api import backups
 from trove.tests.api import configurations
 from trove.tests.api import databases
 from trove.tests.api import datastores
-from trove.tests.api import flavors
 from trove.tests.api import instances
 from trove.tests.api import instances_actions
 from trove.tests.api.mgmt import accounts
@@ -86,7 +85,6 @@ def register(group_names, *test_groups, **kwargs):
         depends_on_groups=build_group(*test_groups))
 
 black_box_groups = [
-    flavors.GROUP,
     users.GROUP,
     user_access.GROUP,
     databases.GROUP,
@@ -114,7 +112,6 @@ proboscis.register(groups=["blackbox", "mysql"],
 
 simple_black_box_groups = [
     GROUP_SERVICES_INITIALIZE,
-    flavors.GROUP,
     versions.GROUP,
     instances.GROUP_START_SIMPLE,
     admin_required.GROUP,
@@ -141,7 +138,6 @@ proboscis.register(groups=["blackbox_mgmt"],
 # Base groups for all other groups
 base_groups = [
     GROUP_SERVICES_INITIALIZE,
-    flavors.GROUP,
     versions.GROUP,
     GROUP_SETUP
 ]
@@ -199,8 +199,8 @@ class InstanceCreateRunner(TestRunner):
 
         self.assert_equal(instance_info.name, instance._info['name'],
                           "Unexpected instance name")
-        self.assert_equal(flavor.id,
-                          int(instance._info['flavor']['id']),
+        self.assert_equal(str(flavor.id),
+                          str(instance._info['flavor']['id']),
                           "Unexpected instance flavor")
         self.assert_equal(instance_info.dbaas_datastore,
                           instance._info['datastore']['type'],
@@ -802,10 +802,8 @@ class TestRunner(object):
         self.assert_equal(
             1, len(flavors),
             "Unexpected number of flavors with name '%s' found." % flavor_name)
-        flavor = flavors[0]
-        self.assert_is_not_none(flavor, "Flavor '%s' not found." % flavor_name)
 
-        return flavor
+        return flavors[0]
 
     def get_instance_flavor(self, fault_num=None):
         name_format = 'instance%s%s_flavor_name'
@@ -52,7 +52,7 @@ class TestDatastoreVersion(trove_testtools.TestCase):
     def test_version_create(self, mock_glance_client):
         body = {"version": {
             "datastore_name": "test_ds",
-            "name": "test_vr",
+            "name": "test_version",
             "datastore_manager": "mysql",
             "image": "image-id",
             "packages": "test-pkg",
@@ -309,6 +309,7 @@ class FreshInstanceTasksTest(BaseFreshInstanceTasksTest):
                   new_callable=PropertyMock,
                   return_value='fake-hostname')
     def test_servers_create_block_device_mapping_v2(self, mock_hostname):
+        self.freshinstancetasks._prepare_userdata = Mock(return_value=None)
         mock_nova_client = self.freshinstancetasks.nova_client = Mock()
         mock_servers_create = mock_nova_client.servers.create
         self.freshinstancetasks._create_server('fake-flavor', 'fake-image',
@@ -867,26 +868,23 @@ class BuiltInstanceTasksTest(trove_testtools.TestCase):
 
     @patch.object(utils, 'poll_until')
     def test_reboot(self, mock_poll):
-        self.instance_task.datastore_status_matches = Mock(return_value=True)
-        self.instance_task._refresh_datastore_status = Mock()
         self.instance_task.server.reboot = Mock()
         self.instance_task.set_datastore_status_to_paused = Mock()
         self.instance_task.reboot()
         self.instance_task._guest.stop_db.assert_any_call()
-        self.instance_task._refresh_datastore_status.assert_any_call()
         self.instance_task.server.reboot.assert_any_call()
         self.instance_task.set_datastore_status_to_paused.assert_any_call()
 
     @patch.object(utils, 'poll_until')
     @patch('trove.taskmanager.models.LOG')
     def test_reboot_datastore_not_ready(self, mock_logging, mock_poll):
-        self.instance_task.datastore_status_matches = Mock(return_value=False)
-        self.instance_task._refresh_datastore_status = Mock()
+        mock_poll.side_effect = PollTimeOut
         self.instance_task.server.reboot = Mock()
         self.instance_task.set_datastore_status_to_paused = Mock()
+
         self.instance_task.reboot()
+
         self.instance_task._guest.stop_db.assert_any_call()
-        self.instance_task._refresh_datastore_status.assert_any_call()
         assert not self.instance_task.server.reboot.called
         assert not self.instance_task.set_datastore_status_to_paused.called
 