debian: Create debian version of collect
Debian and CentOS use the same tools, but they are installed in different places. For collect to work on Debian, make sure that it does not try to use RPMs there. This is done in the collect-patching script so that the "smart" program is not run. Also, kdump uses the /var/lib/kdump path on Debian rather than /var/crash on CentOS. Also checked for 'rpm -qa' usage and changed it to 'dpkg -l'.

Test Plan:
PASS: Build package
PASS: Build and install ISO
PASS: Run collect -v -all

Story: 2009101
Task: 43732
Depends-On: https://review.opendev.org/c/starlingx/tools/+/838327
Signed-off-by: Charles Short <charles.short@windriver.com>
Change-Id: I66cf0615f8cab7fe877b6cb09d605557c9258c43
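The 'rpm -qa' to 'dpkg -l' change amounts to dispatching on which package manager is present. A minimal sketch of that idea follows; `list_packages` is a hypothetical helper name used here for illustration only, not a function in collect itself:

```shell
# Sketch: prefer dpkg where it exists (Debian), otherwise fall back to rpm (CentOS).
list_packages()
{
    if command -v dpkg >/dev/null 2>&1 ; then
        dpkg -l      # Debian-style package listing
    else
        rpm -qa      # RPM-style package listing
    fi
}
```

Keying off the available tool rather than a hard-coded distro name keeps the script working on both OS variants without further changes.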
This commit is contained in:
parent d5bec2b1b8
commit 87dd74faf0
202
tools/collector/debian-scripts/LICENSE
Normal file
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
3101
tools/collector/debian-scripts/collect
Executable file
File diff suppressed because it is too large
81
tools/collector/debian-scripts/collect_ceph.sh
Executable file
@@ -0,0 +1,81 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="ceph"
LOGFILE="${extradir}/ceph.info"
echo    "${hostname}: Ceph Info .........: ${LOGFILE}"

function is_service_active {
    active=`sm-query service management-ip | grep "enabled-active"`
    if [ -z "$active" ] ; then
        return 0
    else
        return 1
    fi
}

function exit_if_timeout {
    if [ "$?" = "124" ] ; then
        echo "Exiting due to ceph command timeout" >> ${LOGFILE}
        exit 0
    fi
}

###############################################################################
# Only Controller
###############################################################################
if [ "$nodetype" = "controller" ] ; then

    # Using timeout with all ceph commands because commands can hang for
    # minutes if the ceph cluster is down. If ceph is not configured, the
    # commands return immediately.

    delimiter ${LOGFILE} "ceph status"
    timeout 30 ceph status >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

    delimiter ${LOGFILE} "ceph mon dump"
    timeout 30 ceph mon dump >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

    delimiter ${LOGFILE} "ceph osd dump"
    timeout 30 ceph osd dump >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

    delimiter ${LOGFILE} "ceph osd tree"
    timeout 30 ceph osd tree >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

    delimiter ${LOGFILE} "ceph osd crush dump"
    timeout 30 ceph osd crush dump >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

    is_service_active
    if [ "$?" = "0" ] ; then
        exit 0
    fi

    delimiter ${LOGFILE} "ceph df"
    timeout 30 ceph df >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

    delimiter ${LOGFILE} "ceph osd df tree"
    timeout 30 ceph osd df tree >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

    delimiter ${LOGFILE} "ceph health detail"
    timeout 30 ceph health detail >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    exit_if_timeout

fi

exit 0
206
tools/collector/debian-scripts/collect_containerization.sh
Executable file
@@ -0,0 +1,206 @@
#! /bin/bash
#
# Copyright (c) 2019-2021 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="containerization"
LOGFILE="${extradir}/${SERVICE}.info"
LOGFILE_EVENT="${extradir}/${SERVICE}_events.info"
LOGFILE_API="${extradir}/${SERVICE}_api.info"
LOGFILE_HOST="${extradir}/${SERVICE}_host.info"
LOGFILE_IMG="${extradir}/${SERVICE}_images.info"
LOGFILE_KUBE="${extradir}/${SERVICE}_kube.info"
LOGFILE_PODS="${extradir}/${SERVICE}_pods.info"
LOGFILE_HELM="${extradir}/${SERVICE}_helm.info"

HELM_DIR="${extradir}/helm"
ETCD_DB_FILE="${extradir}/etcd_database.dump"
KUBE_CONFIG_FILE="/etc/kubernetes/admin.conf"
KUBE_CONFIG="--kubeconfig ${KUBE_CONFIG_FILE}"
echo    "${hostname}: Containerization Info ...: ${LOGFILE}"

###############################################################################
# All nodes
###############################################################################
mkdir -p ${HELM_DIR}
source_openrc_if_needed

CMD="docker image ls -a"
delimiter ${LOGFILE_IMG} "${CMD}"
${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_IMG}

CMD="crictl images"
delimiter ${LOGFILE_IMG} "${CMD}"
${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_IMG}

CMD="ctr -n k8s.io images list"
delimiter ${LOGFILE_IMG} "${CMD}"
${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_IMG}

CMD="docker container ps -a"
delimiter ${LOGFILE_IMG} "${CMD}"
${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_IMG}

CMD="crictl ps -a"
delimiter ${LOGFILE_IMG} "${CMD}"
${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_IMG}

CMD="cat /var/lib/kubelet/cpu_manager_state | python -m json.tool"
delimiter ${LOGFILE_HOST} "${CMD}"
eval ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HOST}

###############################################################################
# Active Controller
###############################################################################
if [ "$nodetype" = "controller" -a "${ACTIVE}" = true ] ; then

    # Environment for kubectl and helm
    export KUBECONFIG=${KUBE_CONFIG_FILE}

    declare -a CMDS=()
    CMDS+=("kubectl version")
    CMDS+=("kubectl get nodes -o wide")
    CMDS+=("kubectl get nodes --show-labels")
    CMDS+=("kubectl get nodes -o json")
    CMDS+=("kubectl describe nodes")
    CMDS+=("kubectl describe nodes | grep -e Capacity: -B1 -A40 | grep -e 'System Info:' -B13 | grep -v 'System Info:'")
    CMDS+=("kubectl get services --all-namespaces")
    CMDS+=("kubectl get configmaps --all-namespaces")
    CMDS+=("kubectl get daemonsets --all-namespaces")
    CMDS+=("kubectl get pods --all-namespaces -o wide")
    CMDS+=("kubectl get pvc --all-namespaces")
    CMDS+=("kubectl get pvc --all-namespaces -o yaml")
    CMDS+=("kubectl get pv --all-namespaces")
    CMDS+=("kubectl get pv --all-namespaces -o yaml")
    CMDS+=("kubectl get sc --all-namespaces")
    CMDS+=("kubectl get serviceaccounts --all-namespaces")
    CMDS+=("kubectl get deployments.apps --all-namespaces")
    CMDS+=("kubectl get rolebindings.rbac.authorization.k8s.io --all-namespaces")
    CMDS+=("kubectl get roles.rbac.authorization.k8s.io --all-namespaces")
    CMDS+=("kubectl get clusterrolebindings.rbac.authorization.k8s.io")
    CMDS+=("kubectl get clusterroles.rbac.authorization.k8s.io")
    for CMD in "${CMDS[@]}" ; do
        delimiter ${LOGFILE_KUBE} "${CMD}"
        eval ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_KUBE}
        echo >>${LOGFILE_KUBE}
    done

    # api-resources; verbose, place in separate file
    CMDS=()
    CMDS+=("kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --all-namespaces")
    CMDS+=("kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --all-namespaces -o yaml")
    for CMD in "${CMDS[@]}" ; do
        delimiter ${LOGFILE_API} "${CMD}"
        eval ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_API}
        echo >>${LOGFILE_API}
    done

    # describe pods; verbose, place in separate file
    CMDS=()
    CMDS+=("kubectl describe pods --all-namespaces")
    for CMD in "${CMDS[@]}" ; do
        delimiter ${LOGFILE_PODS} "${CMD}"
        eval ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_PODS}
        echo >>${LOGFILE_PODS}
    done

    # events; verbose, place in separate file
    CMDS=()
    CMDS+=("kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp' -o go-template='{{range .items}}{{printf \"%s %s\t%s\t%s\t%s\t%s\n\" .firstTimestamp .involvedObject.name .involvedObject.kind .message .reason .type}}{{end}}'")
    for CMD in "${CMDS[@]}" ; do
        delimiter ${LOGFILE_EVENT} "${CMD}"
        eval ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_EVENT}
        echo >>${LOGFILE_EVENT}
    done

    # Helm related
    CMD="helm version"
    delimiter ${LOGFILE_HELM} "${CMD}"
    ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HELM}
    echo >>${LOGFILE_HELM}

    HELM_VERSION=$(helm version --client --short)
    if [[ $HELM_VERSION =~ v2 ]]; then
        CMD="helm list -a"
        delimiter ${LOGFILE_HELM} "${CMD}"
        APPLIST=$(${CMD} 2>>${COLLECT_ERROR_LOG} | tee -a ${LOGFILE_HELM})
        APPLIST=$(echo "${APPLIST}" | awk '{if (NR!=1) {print}}')
        while read -r app; do
            APPNAME=$(echo ${app} | awk '{print $1}')
            APPREVISION=$(echo ${app} | awk '{print $2}')
            helm status ${APPNAME} > ${HELM_DIR}/${APPNAME}.status
            helm get values ${APPNAME} --revision ${APPREVISION} \
                > ${HELM_DIR}/${APPNAME}.v${APPREVISION}
        done <<< "${APPLIST}"
    elif [[ $HELM_VERSION =~ v3 ]]; then
        # NOTE: helm environment not configured for root user
        CMD="sudo -u sysadmin KUBECONFIG=${KUBECONFIG} helm list --all --all-namespaces"
        delimiter ${LOGFILE_HELM} "${CMD}"
        ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HELM}

        CMD="sudo -u sysadmin KUBECONFIG=${KUBECONFIG} helm search repo"
        delimiter ${LOGFILE_HELM} "${CMD}"
        ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HELM}

        CMD="sudo -u sysadmin KUBECONFIG=${KUBECONFIG} helm repo list"
        delimiter ${LOGFILE_HELM} "${CMD}"
        ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HELM}
    fi

    HELM2CLI=$(which helmv2-cli)
    if [ $? -eq 0 ]; then
        CMD="helmv2-cli -- helm version --short"
        delimiter ${LOGFILE_HELM} "${CMD}"
        ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HELM}

        CMD="helmv2-cli -- helm list -a"
        delimiter ${LOGFILE_HELM} "${CMD}"
        mapfile -t ARR < <( ${CMD} 2>>${COLLECT_ERROR_LOG} )
        printf "%s\n" "${ARR[@]}" >> ${LOGFILE_HELM}
        for((i=1; i < ${#ARR[@]}; i++))
        do
            APPNAME=$(echo ${ARR[$i]} | awk '{print $1}')
            APPREVISION=$(echo ${ARR[$i]} | awk '{print $2}')
            ${HELM2CLI} -- helm status ${APPNAME} > ${HELM_DIR}/${APPNAME}.status
            ${HELM2CLI} -- helm get values ${APPNAME} --revision ${APPREVISION} \
                > ${HELM_DIR}/${APPNAME}.v${APPREVISION}
        done

        CMD="helmv2-cli -- helm search"
        delimiter ${LOGFILE_HELM} "${CMD}"
        ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HELM}

        CMD="helmv2-cli -- helm repo list"
        delimiter ${LOGFILE_HELM} "${CMD}"
        ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE_HELM}
    fi

    CMD="cp -r /opt/platform/helm_charts ${HELM_DIR}/"
    delimiter ${LOGFILE} "${CMD}"
    ${CMD} 2>>${COLLECT_ERROR_LOG}

    export $(grep '^ETCD_LISTEN_CLIENT_URLS=' /etc/etcd/etcd.conf | tr -d '"')

    CMD="sudo ETCDCTL_API=3 etcdctl \
        --endpoints=$ETCD_LISTEN_CLIENT_URLS get / --prefix"

    # Use certificate if secured access is detected
    SEC_STR='https'
    if [[ "$ETCD_LISTEN_CLIENT_URLS" == *"$SEC_STR"* ]]; then
        CMD="$CMD --cert=/etc/etcd/etcd-server.crt \
            --key=/etc/etcd/etcd-server.key --cacert=/etc/etcd/ca.crt"
    fi

    delimiter ${LOGFILE} "${CMD}"
    ${CMD} 2>>${COLLECT_ERROR_LOG} >> ${ETCD_DB_FILE}
fi

exit 0
35
tools/collector/debian-scripts/collect_coredump.sh
Normal file
@@ -0,0 +1,35 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="coredump"
LOGFILE="${extradir}/${SERVICE}.info"

COREDUMPDIR="/var/lib/systemd/coredump"

echo    "${hostname}: Core Dump Info ....: ${LOGFILE}"

files=`ls ${COREDUMPDIR} | wc -l`
if [ "${files}" == "0" ] ; then
    echo "No core dumps" >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
else
    COMMAND="ls -lrtd ${COREDUMPDIR}/*"
    delimiter ${LOGFILE} "${COMMAND}"
    ${COMMAND} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    COMMAND="md5sum ${COREDUMPDIR}/*"
    delimiter ${LOGFILE} "${COMMAND}"
    ${COMMAND} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
fi

exit 0
38
tools/collector/debian-scripts/collect_crash.sh
Normal file
@@ -0,0 +1,38 @@
#! /bin/bash
#
# Copyright (c) 2016-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="crash"
LOGFILE="${extradir}/${SERVICE}.info"

CRASHDIR="/var/lib/kdump"

echo    "${hostname}: Kernel Crash Info .: ${LOGFILE}"

COMMAND="find ${CRASHDIR}"
delimiter ${LOGFILE} "${COMMAND}"
${COMMAND} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

COMMAND="rsync -a --include=*.txt --include=*/ --exclude=* ${CRASHDIR} ${basedir}/var/"
delimiter ${LOGFILE} "${COMMAND}"
${COMMAND} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

COMMAND="ls -lrtd ${CRASHDIR}/*"
delimiter ${LOGFILE} "${COMMAND}"
${COMMAND} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

COMMAND="md5sum ${CRASHDIR}/*"
delimiter ${LOGFILE} "${COMMAND}"
${COMMAND} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

exit 0
1064
tools/collector/debian-scripts/collect_date
Executable file
File diff suppressed because it is too large
97
tools/collector/debian-scripts/collect_dc.sh
Executable file
@@ -0,0 +1,97 @@
#! /bin/bash
#
# Copyright (c) 2020-2021 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="distributed_cloud"
LOGFILE="${extradir}/${SERVICE}.info"
RPMLOG="${extradir}/rpm.info"

function is_active_controller {
    active_controller=`sm-query service management-ip | grep "enabled-active"`
    if [ -z "$active_controller" ] ; then
        return 0
    else
        return 1
    fi
}

function is_distributed_cloud_env {
    distributed_cloud=`sm-query service-group distributed-cloud-services | grep "active"`
    if [ -z "$distributed_cloud" ] ; then
        return 0
    else
        return 1
    fi
}

function is_subcloud {
    subcloud=`grep "distributed_cloud_role" /etc/platform/platform.conf | grep "subcloud"`
    if [ -z "$subcloud" ] ; then
        return 0
    else
        return 1
    fi
}

# Must be a distributed cloud environment
is_distributed_cloud_env
if [ "$?" = "0" ] ; then
    exit 0
fi

###############################################################################
# Only Controller
###############################################################################
if [ "$nodetype" = "controller" ] ; then

    # Must be an active controller
    is_active_controller
    if [ "$?" = "0" ] ; then
        exit 0
    fi

    echo    "${hostname}: Distributed Cloud ..: ${LOGFILE}"

    is_subcloud
    if [ "$?" = "1" ] ; then
        # Subcloud
        echo "Distributed Cloud Role: Subcloud" >> ${LOGFILE}

        delimiter ${LOGFILE} "Address Pool of System Controller"
        # Prints the column names of the table
        system addrpool-list --nowrap | head -3 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}
        # Prints the System Controller's address pool
        system addrpool-list --nowrap | grep "system-controller-subnet" 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    else
        # System Controller
        echo "Distributed Cloud Role: System Controller" >> ${LOGFILE}

        delimiter ${LOGFILE} "dcmanager alarm summary"
        dcmanager alarm summary 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "dcmanager subcloud list"
        dcmanager subcloud list 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "dcmanager subcloud-group list"
        dcmanager subcloud-group list 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        # copy the /opt/dc/ansible dir but exclude any iso files
        rsync -a --exclude '*.iso' /opt/dc/ansible ${extradir}

        delimiter ${LOGFILE} "find /opt/dc-vault -ls"
        find /opt/dc-vault -ls 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    fi

fi

exit 0
28
tools/collector/debian-scripts/collect_disk.sh
Normal file
@ -0,0 +1,28 @@
#! /bin/bash
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


# Loads Up Utilities and Commands Variables

source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="disk"
LOGFILE="${extradir}/${SERVICE}.info"

###############################################################################
# Disk Info
###############################################################################

echo "${hostname}: Disk Info .: ${LOGFILE}"

for device in $(lsblk -l -o NAME,TYPE,TRAN | grep -v usb | grep -e disk | cut -d ' ' -f1); do
    delimiter ${LOGFILE} "smartctl -a ${device}"
    smartctl -a "/dev/${device}" >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

exit 0
43
tools/collector/debian-scripts/collect_fm.sh
Normal file
@ -0,0 +1,43 @@
#! /bin/bash
#
# SPDX-License-Identifier: Apache-2.0
#


# Loads Up Utilities and Commands Variables

source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="alarms"
LOGFILE="${extradir}/${SERVICE}.info"

function is_service_active {
    active=`sm-query service management-ip | grep "enabled-active"`
    if [ -z "$active" ] ; then
        return 0
    else
        return 1
    fi
}

###############################################################################
# Only Controller
###############################################################################
if [ "$nodetype" = "controller" ] ; then

    is_service_active
    if [ "$?" = "0" ] ; then
        exit 0
    fi

    echo "${hostname}: System Alarm List .: ${LOGFILE}"

    # These go into the SERVICE.info file
    delimiter ${LOGFILE} "fm alarm-list"
    fm alarm-list 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}
    delimiter ${LOGFILE} "fm event-list --nopaging"
    fm event-list --nopaging 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}
fi

exit 0
488
tools/collector/debian-scripts/collect_host
Executable file
@ -0,0 +1,488 @@
#! /bin/bash
########################################################################
#
# Copyright (c) 2016-2021 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
########################################################################

# make these platform.conf variables global.
# values are loaded in source_openrc_if_needed.
export nodetype=""
export subfunction=""
export system_type=""
export security_profile=""
export sdn_enabled=""
export region_config=""
export vswitch_type=""
export system_mode=""
export sw_version=""

# assume this is not the active controller until learned
export ACTIVE=false

#
# Import commands, variables and convenience functions available to
# all collectors ; common and user defined.
#
source /usr/local/sbin/collect_utils
source_openrc_if_needed

#
# parse input parameters
#
COLLECT_NAME="${1}"
DEBUG=${8}
INVENTORY=${9}
set_debug_mode ${DEBUG}

# Calling parms
#
# 1 = collect name
# 2 = start date option
# 3 = start date
# 4 = "any" (ignored - no longer used ; kept to support upgrades/downgrades)
# 5 = end date option
# 6 = end date
# 7 = "any" (ignored - no longer used ; kept to support upgrades/downgrades)
# 8 = debug mode
# 9 = inventory
logger -t ${COLLECT_TAG} "${0} ${1} ${2} ${3} ${4} ${5} ${6} ${7} ${8} ${9}"

# parse out the start date/time if it is present
STARTDATE_RANGE=false
STARTDATE="any"
if [ "${2}" == "${STARTDATE_OPTION}" ] ; then
    if [ "${3}" != "any" -a ${#3} -gt 7 ] ; then
        STARTDATE_RANGE=true
        STARTDATE="${3}"
    fi
fi

# parse out the end date/time if it is present
ENDDATE_RANGE=false
ENDDATE="any"
if [ "${5}" == "${ENDDATE_OPTION}" ] ; then
    if [ "${6}" != "any" -a ${#6} -gt 7 ] ; then
        ENDDATE_RANGE=true
        ENDDATE="${6}"
    fi
fi

COLLECT_BASE_DIR="/scratch"
EXTRA="var/extra"
hostname="${HOSTNAME}"
COLLECT_NAME_DIR="${COLLECT_BASE_DIR}/${COLLECT_NAME}"
EXTRA_DIR="${COLLECT_NAME_DIR}/${EXTRA}"
TARBALL="${COLLECT_NAME_DIR}.tgz"
COLLECT_PATH="/etc/collect.d"
RUN_EXCLUDE="/etc/collect/run.exclude"
ETC_EXCLUDE="/etc/collect/etc.exclude"
VAR_LOG_EXCLUDE="/etc/collect/varlog.exclude"
COLLECT_INCLUDE="/var/run /etc /root"
FLIGHT_RECORDER_PATH="var/lib/sm/"
FLIGHT_RECORDER_FILE="sm.eru.v1"
VAR_LOG_INCLUDE_LIST="/tmp/${COLLECT_NAME}.lst"
COLLECT_DIR_USAGE_CMD="df -h ${COLLECT_BASE_DIR}"
COLLECT_DATE="/usr/local/sbin/collect_date"
COLLECT_SYSINV="${COLLECT_PATH}/collect_sysinv"

function log_space()
{
    local msg=${1}

    space="`${COLLECT_DIR_USAGE_CMD}`"
    space1=`echo "${space}" | grep -v Filesystem`
    ilog "${COLLECT_BASE_DIR} ${msg} ${space1}"
}

space_precheck ${HOSTNAME} ${COLLECT_BASE_DIR}

CURR_DIR=`pwd`
mkdir -p ${COLLECT_NAME_DIR}
cd ${COLLECT_NAME_DIR}

# create dump target extra-stuff directory
mkdir -p ${EXTRA_DIR}

RETVAL=0

# Remove any previous collect error log.
# Start this collect with an empty file.
#
# stderr is directed to this log during the collect process.
# By searching this log after collect_host is run we can find
# errors that occurred during collect.
# The only real error that we care about right now is the
#
# "No space left on device" error
#
rm -f ${COLLECT_ERROR_LOG}
touch ${COLLECT_ERROR_LOG}
chmod 644 ${COLLECT_ERROR_LOG}
echo "`date '+%F %T'` :${COLLECT_NAME_DIR}" > ${COLLECT_ERROR_LOG}

ilog "creating local collect tarball ${COLLECT_NAME_DIR}.tgz"

################################################################################
# Run collect scripts to check system status
################################################################################
function collect_parts()
{
    if [ -d ${COLLECT_PATH} ]; then
        for i in ${COLLECT_PATH}/*; do
            if [ -f $i ]; then
                if [ ${i} = ${COLLECT_SYSINV} ]; then
                    $i ${COLLECT_NAME_DIR} ${EXTRA_DIR} ${hostname} ${INVENTORY}
                else
                    $i ${COLLECT_NAME_DIR} ${EXTRA_DIR} ${hostname}
                fi
            fi
        done
    fi
}


function collect_extra()
{
    # dump process lists
    LOGFILE="${EXTRA_DIR}/process.info"
    echo "${hostname}: Process Info ......: ${LOGFILE}"

    delimiter ${LOGFILE} "ps -e -H -o ..."
    ${PROCESS_DETAIL_CMD} >> ${LOGFILE}

    # Collect process and thread info (tree view)
    delimiter ${LOGFILE} "pstree --arguments --ascii --long --show-pids"
    pstree --arguments --ascii --long --show-pids >> ${LOGFILE}

    # Collect process, thread and scheduling info (worker subfunction only)
    # (also gets process 'affinity' which is useful on workers)
    which ps-sched.sh >/dev/null 2>&1
    if [ $? -eq 0 ]; then
        delimiter ${LOGFILE} "ps-sched.sh"
        ps-sched.sh >> ${LOGFILE}
    fi

    # Collect process, thread and scheduling, and elapsed time
    # This has everything that ps-sched.sh does, except for cpu affinity mask;
    # adds: stime,etime,time,wchan,tty.
    delimiter ${LOGFILE} "ps -eL -o pid,lwp,ppid,state,class,nice,rtprio,priority,psr,stime,etime,time,wchan:16,tty,comm,command"
    ps -eL -o pid,lwp,ppid,state,class,nice,rtprio,priority,psr,stime,etime,time,wchan:16,tty,comm,command >> ${LOGFILE}

    # Collect per kubernetes container name, QoS, and cpusets per numa node
    delimiter ${LOGFILE} "kube-cpusets"
    kube-cpusets >> ${LOGFILE}

    # Various host attributes
    LOGFILE="${EXTRA_DIR}/host.info"
    echo "${hostname}: Host Info .........: ${LOGFILE}"

    # CGCS build info
    delimiter ${LOGFILE} "${BUILD_INFO_CMD}"
    ${BUILD_INFO_CMD} >> ${LOGFILE}

    delimiter ${LOGFILE} "uptime"
    uptime >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/cmdline"
    cat /proc/cmdline >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/version"
    cat /proc/version >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "lscpu"
    lscpu >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "lscpu -e"
    lscpu -e >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/cpuinfo"
    cat /proc/cpuinfo >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /sys/devices/system/cpu/isolated"
    cat /sys/devices/system/cpu/isolated >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "ip addr show"
    ip addr show >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "lspci -nn"
    lspci -nn >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "find /sys/kernel/iommu_groups/ -type l"
    find /sys/kernel/iommu_groups/ -type l >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # networking totals
    delimiter ${LOGFILE} "cat /proc/net/dev"
    cat /proc/net/dev >> ${LOGFILE}

    delimiter ${LOGFILE} "dmidecode"
    dmidecode >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # summary of scheduler tunable settings
    delimiter ${LOGFILE} "cat /proc/sched_debug | head -15"
    cat /proc/sched_debug | head -15 >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    if [ "${SKIP_MASK}" = "true" ]; then
        delimiter ${LOGFILE} "facter (excluding ssh info)"
        facter | grep -iv '^ssh' >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    else
        delimiter ${LOGFILE} "facter"
        facter >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    fi

    if [[ "$nodetype" == "worker" || "$subfunction" == *"worker"* ]] ; then
        delimiter ${LOGFILE} "topology"
        topology >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    fi

    LOGFILE="${EXTRA_DIR}/memory.info"
    echo "${hostname}: Memory Info .......: ${LOGFILE}"

    delimiter ${LOGFILE} "cat /proc/meminfo"
    cat /proc/meminfo >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /sys/devices/system/node/node?/meminfo"
    cat /sys/devices/system/node/node?/meminfo >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/slabinfo"
    log_slabinfo ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "ps -e -o ppid,pid,nlwp,rss:10,vsz:10,cmd --sort=-rss"
    ps -e -o ppid,pid,nlwp,rss:10,vsz:10,cmd --sort=-rss >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # list open files
    delimiter ${LOGFILE} "lsof -lwX"
    lsof -lwX >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # hugepages numa mapping
    delimiter ${LOGFILE} "grep huge /proc/*/numa_maps"
    grep -e " huge " /proc/*/numa_maps >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # rootfs and tmpfs usage
    delimiter ${LOGFILE} "df -h -H -T --local -t rootfs -t tmpfs"
    df -h -H -T --local -t rootfs -t tmpfs >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    LOGFILE="${EXTRA_DIR}/filesystem.info"
    echo "${hostname}: Filesystem Info ...: ${LOGFILE}"

    # rootfs and tmpfs usage
    delimiter ${LOGFILE} "df -h -H -T --local -t rootfs -t tmpfs"
    df -h -H -T --local -t rootfs -t tmpfs >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # disk space usage
    delimiter ${LOGFILE} "df -h -H -T --local -t ext2 -t ext3 -t ext4 -t xfs --total"
    df -h -H -T --local -t ext2 -t ext3 -t ext4 -t xfs --total >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # disk inodes usage
    delimiter ${LOGFILE} "df -h -H -T --local -i -t ext2 -t ext3 -t ext4 -t xfs --total"
    df -h -H -T --local -i -t ext2 -t ext3 -t ext4 -t xfs --total >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # disks by-path values
    delimiter ${LOGFILE} "ls -lR /dev/disk"
    ls -lR /dev/disk >> ${LOGFILE}

    # disk summary (requires sudo/root)
    delimiter ${LOGFILE} "fdisk -l"
    fdisk -l >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/scsi/scsi"
    cat /proc/scsi/scsi >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Controller specific stuff
    if [ "$nodetype" = "controller" ] ; then

        delimiter ${LOGFILE} "cat /proc/drbd"
        cat /proc/drbd >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

        delimiter ${LOGFILE} "/sbin/drbdadm dump"
        /sbin/drbdadm dump >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    fi

    # LVM summary
    delimiter ${LOGFILE} "/usr/sbin/vgs --version ; /usr/sbin/pvs --version ; /usr/sbin/lvs --version"
    /usr/sbin/vgs --version >> ${LOGFILE}
    /usr/sbin/pvs --version >> ${LOGFILE}
    /usr/sbin/lvs --version >> ${LOGFILE}

    delimiter ${LOGFILE} "/usr/sbin/vgs --all --options all"
    /usr/sbin/vgs --all --options all >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "/usr/sbin/pvs --all --options all"
    /usr/sbin/pvs --all --options all >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "/usr/sbin/lvs --all --options all"
    /usr/sbin/lvs --all --options all >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # iSCSI Information
    LOGFILE="${EXTRA_DIR}/iscsi.info"
    echo "${hostname}: iSCSI Information ......: ${LOGFILE}"

    if [ "$nodetype" = "controller" ] ; then
        # Controller - LIO exported initiators summary
        delimiter ${LOGFILE} "targetcli ls"
        targetcli ls >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

        # Controller - LIO sessions
        delimiter ${LOGFILE} "targetcli sessions detail"
        targetcli sessions detail >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    elif [[ "$nodetype" == "worker" || "$subfunction" == *"worker"* ]] ; then
        # Worker - iSCSI initiator information
        collect_dir=${EXTRA_DIR}/iscsi_initiator_info
        mkdir -p ${collect_dir}
        cp -rf /run/iscsi-cache/nodes/* ${collect_dir}
        find ${collect_dir} -type d -exec chmod 750 {} \;

        # Worker - iSCSI initiator active sessions
        delimiter ${LOGFILE} "iscsiadm -m session"
        iscsiadm -m session >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

        # Worker - iSCSI udev created nodes
        delimiter ${LOGFILE} "ls -la /dev/disk/by-path | grep \"iqn\""
        ls -la /dev/disk/by-path | grep "iqn" >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    fi

    LOGFILE="${EXTRA_DIR}/history.info"
    echo "${hostname}: Bash History ......: ${LOGFILE}"

    # history
    delimiter ${LOGFILE} "cat /home/sysadmin/.bash_history"
    cat /home/sysadmin/.bash_history >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    LOGFILE="${EXTRA_DIR}/interrupt.info"
    echo "${hostname}: Interrupt Info ....: ${LOGFILE}"

    # interrupts
    delimiter ${LOGFILE} "cat /proc/interrupts"
    cat /proc/interrupts >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/softirqs"
    cat /proc/softirqs >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Controller specific stuff
    if [ "$nodetype" = "controller" ] ; then
        netstat -pan > ${EXTRA_DIR}/netstat.info
    fi

    LOGFILE="${EXTRA_DIR}/blockdev.info"
    echo "${hostname}: Block Devices Info : ${LOGFILE}"

    # Collect block devices - show all sda and cinder devices, and size
    delimiter ${LOGFILE} "lsblk"
    lsblk >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Collect block device topology - show devices and which io-scheduler
    delimiter ${LOGFILE} "lsblk --topology"
    lsblk --topology >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Collect SCSI devices - show devices and cinder attaches, etc
    delimiter ${LOGFILE} "lsblk --scsi"
    lsblk --scsi >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
}

log_space "before collect ......:"

collect_extra
collect_parts

#
# handle collect collect-after and collect-range and then
# in elif clause collect-before
#
VAR_LOG="/var/log"
if [ -e /www/var/log ]; then
    VAR_LOG="$VAR_LOG /www/var/log"
fi

rm -f ${VAR_LOG_INCLUDE_LIST}

if [ "${STARTDATE_RANGE}" == true ] ; then
    if [ "${ENDDATE_RANGE}" == false ] ; then
        ilog "collecting $VAR_LOG files containing logs after ${STARTDATE}"
        ${COLLECT_DATE} ${STARTDATE} ${ENDDATE} ${VAR_LOG_INCLUDE_LIST} ${DEBUG} ""
    else
        ilog "collecting $VAR_LOG files containing logs between ${STARTDATE} and ${ENDDATE}"
        ${COLLECT_DATE} ${STARTDATE} ${ENDDATE} ${VAR_LOG_INCLUDE_LIST} ${DEBUG} ""
    fi
elif [ "${ENDDATE_RANGE}" == true ] ; then
    STARTDATE="20130101"
    ilog "collecting $VAR_LOG files containing logs before ${ENDDATE}"
    ${COLLECT_DATE} ${STARTDATE} ${ENDDATE} ${VAR_LOG_INCLUDE_LIST} ${DEBUG} ""
else
    ilog "collecting all of $VAR_LOG"
    find $VAR_LOG ! -empty > ${VAR_LOG_INCLUDE_LIST}
fi

# Add VM console.log
for i in /var/lib/nova/instances/*/console.log; do
    if [ -e "$i" ]; then
        tmp=`dirname $i`
        mkdir -p ${COLLECT_NAME_DIR}/$tmp
        cp $i ${COLLECT_NAME_DIR}/$tmp
    fi
done

log_space "before first tar ....:"

(cd ${COLLECT_NAME_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${TAR_CMD} ${COLLECT_NAME_DIR}/${COLLECT_NAME}.tar -T ${VAR_LOG_INCLUDE_LIST} -X ${RUN_EXCLUDE} -X ${ETC_EXCLUDE} -X ${VAR_LOG_EXCLUDE} ${COLLECT_INCLUDE} 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG} )

log_space "after first tar .....:"

(cd ${COLLECT_NAME_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${UNTAR_CMD} ${COLLECT_NAME_DIR}/${COLLECT_NAME}.tar 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG} )

log_space "after first untar ...:"

rm -f ${COLLECT_NAME_DIR}/${COLLECT_NAME}.tar

log_space "after delete tar ....:"

if [ "${SKIP_MASK}" != "true" ]; then
    # Run password masking before final tar
    dlog "running /usr/local/sbin/collect_mask_passwords ${COLLECT_NAME_DIR} ${EXTRA_DIR}"
    /usr/local/sbin/collect_mask_passwords ${COLLECT_NAME_DIR} ${EXTRA_DIR}
    log_space "after passwd masking :"
fi

(cd ${COLLECT_BASE_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${TAR_ZIP_CMD} ${COLLECT_NAME_DIR}.tgz ${COLLECT_NAME} 2>/dev/null 1>/dev/null )

log_space "after first tarball .:"

mkdir -p ${COLLECT_NAME_DIR}/${FLIGHT_RECORDER_PATH}

(cd /${FLIGHT_RECORDER_PATH} ; ${TAR_ZIP_CMD} ${COLLECT_NAME_DIR}/${FLIGHT_RECORDER_PATH}/${FLIGHT_RECORDER_FILE}.tgz ./${FLIGHT_RECORDER_FILE} 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG})

# Pull in an updated user.log which contains the most recent collect logs
# ... be sure to exclude any out of space logs
tail -30 /var/log/user.log | grep "COLLECT:" | grep -v "${FAIL_OUT_OF_SPACE_STR}" >> ${COLLECT_ERROR_LOG}
cp -a ${COLLECT_LOG} ${COLLECT_LOG}.last
cp -a ${COLLECT_ERROR_LOG} ${COLLECT_LOG}
cp -a ${COLLECT_LOG} ${COLLECT_NAME_DIR}/var/log

log_space "with flight data ....:"

(cd ${COLLECT_BASE_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${TAR_ZIP_CMD} ${COLLECT_NAME_DIR}.tgz ${COLLECT_NAME} 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG} )

log_space "after collect .......:"

rm -rf ${COLLECT_NAME_DIR}
rm -f ${VAR_LOG_INCLUDE_LIST}

log_space "after cleanup .......:"

# Check for collect errors
# Only out of space error is enough to fail this host's collect
collect_errors ${HOSTNAME}
RC=${?}

rm -f ${COLLECT_ERROR_LOG}

if [ ${RC} -ne 0 ] ; then
    rm -f ${COLLECT_NAME_DIR}.tgz
    ilog "${FAIL_OUT_OF_SPACE_STR} ${COLLECT_BASE_DIR}"
else
    ilog "collect of ${COLLECT_NAME_DIR}.tgz succeeded"
    echo "${collect_done}"
fi
59
tools/collector/debian-scripts/collect_ima.sh
Executable file
@ -0,0 +1,59 @@
#! /bin/bash
#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

function is_extended_profile {
    if [ ! -n "${security_profile}" ] || [ "${security_profile}" != "extended" ]; then
        return 0
    else
        return 1
    fi
}

SERVICE="ima"
LOGFILE="${extradir}/${SERVICE}.info"

###############################################################################
# All Node Types
###############################################################################

is_extended_profile
if [ "$?" = "0" ] ; then
    exit 0
fi

echo "${hostname}: IMA Info ..........: ${LOGFILE}"

delimiter ${LOGFILE} "IMA Kernel Modules"
lsmod | grep ima >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

delimiter ${LOGFILE} "Auditd status"
service auditd status >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
ps -aux | grep audit >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

mkdir -p ${extradir}/integrity 2>>${COLLECT_ERROR_LOG}

delimiter ${LOGFILE} "IMA Runtime Measurement and Violations cache"
if [ -d "/sys/kernel/security/ima" ]; then
    ls /sys/kernel/security/ima >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    cp -rf /sys/kernel/security/ima ${extradir}/integrity 2>>${COLLECT_ERROR_LOG}
else
    echo "ERROR: IMA Securityfs directory does not exist!" >> ${LOGFILE}
fi

cp -rf /etc/modprobe.d/ima.conf ${extradir}/integrity 2>>${COLLECT_ERROR_LOG}
cp -rf /etc/modprobe.d/integrity.conf ${extradir}/integrity 2>>${COLLECT_ERROR_LOG}
cp -rf /etc/ima.policy ${extradir}/integrity 2>>${COLLECT_ERROR_LOG}

# make sure all these collected files are world readable
chmod -R 755 ${extradir}/integrity

exit 0
34
tools/collector/debian-scripts/collect_interfaces.sh
Normal file
@ -0,0 +1,34 @@
#! /bin/bash
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


# Loads Up Utilities and Commands Variables

source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="interface"
LOGFILE="${extradir}/${SERVICE}.info"

###############################################################################
# Interface Info
###############################################################################

echo "${hostname}: Interface Info .: ${LOGFILE}"

delimiter ${LOGFILE} "ip link"
ip link >> ${LOGFILE}

for i in $(ls /sys/class/net/); do
    delimiter ${LOGFILE} "ethtool -i ${i}"
    ethtool -i ${i} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "ethtool -S ${i} | grep -v ': 0'"
    ethtool -S ${i} | grep -v ": 0" >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

exit 0
61
tools/collector/debian-scripts/collect_mariadb.sh
Executable file
@ -0,0 +1,61 @@
#! /bin/bash
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Gather containerized MariaDB information from active controller.

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="mariadb"
DB_DIR="${extradir}/${SERVICE}"
LOGFILE="${extradir}/${SERVICE}.info"
echo "${hostname}: MariaDB Info .....: ${LOGFILE}"

function is_service_active {
    active=$(sm-query service postgres | grep "enabled-active")
    if [ -z "${active}" ] ; then
        return 0
    else
        return 1
    fi
}

if [ "${nodetype}" = "controller" ] ; then
    is_service_active
    if [ "$?" = "0" ] ; then
        exit 0
    fi

    # MariaDB databases
    delimiter ${LOGFILE} "MariaDB databases:"
    mariadb-cli --command 'show databases' >> ${LOGFILE}

    # MariaDB database sizes
    delimiter ${LOGFILE} "MariaDB database sizes:"
    mariadb-cli --command '
        SELECT table_schema AS "database",
               ROUND(SUM(DATA_LENGTH + INDEX_LENGTH)/1024/1024, 3) AS "Size (MiB)",
               SUM(TABLE_ROWS) AS "rowCount"
        FROM information_schema.TABLES
        GROUP BY table_schema' >> ${LOGFILE}

    delimiter ${LOGFILE} "MariaDB database table sizes:"
    mariadb-cli --command '
        SELECT
            table_schema AS "database", TABLE_NAME AS "table",
            ROUND((DATA_LENGTH + INDEX_LENGTH)/1024/1024, 6) AS "Size (MiB)",
            TABLE_ROWS AS "rowCount"
        FROM information_schema.TABLES
        ORDER BY table_schema, TABLE_NAME' >> ${LOGFILE}

    # MariaDB dump all databases
    delimiter ${LOGFILE} "Dumping MariaDB databases: ${DB_DIR}"
    mkdir -p ${DB_DIR}
    (cd ${DB_DIR}; mariadb-cli --dump --exclude keystone,ceilometer)
fi

exit 0
138
tools/collector/debian-scripts/collect_mask_passwords
Normal file
@ -0,0 +1,138 @@
|
|||||||
|
#! /bin/bash
#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

COLLECT_NAME_DIR=$1
EXTRA_DIR=$2

# Strip the passwords from assorted config files
for conffile in \
    ${COLLECT_NAME_DIR}/etc/aodh/aodh.conf \
    ${COLLECT_NAME_DIR}/etc/barbican/barbican.conf \
    ${COLLECT_NAME_DIR}/etc/ceilometer/ceilometer.conf \
    ${COLLECT_NAME_DIR}/etc/cinder/cinder.conf \
    ${COLLECT_NAME_DIR}/etc/fm/fm.conf \
    ${COLLECT_NAME_DIR}/etc/glance/glance-api.conf \
    ${COLLECT_NAME_DIR}/etc/glance/glance-registry.conf \
    ${COLLECT_NAME_DIR}/etc/heat/heat.conf \
    ${COLLECT_NAME_DIR}/etc/ironic/ironic.conf \
    ${COLLECT_NAME_DIR}/etc/keystone/keystone.conf \
    ${COLLECT_NAME_DIR}/etc/magnum/magnum.conf \
    ${COLLECT_NAME_DIR}/etc/murano/murano.conf \
    ${COLLECT_NAME_DIR}/etc/neutron/metadata_agent.ini \
    ${COLLECT_NAME_DIR}/etc/neutron/neutron.conf \
    ${COLLECT_NAME_DIR}/etc/nfv/nfv_plugins/nfvi_plugins/config.ini \
    ${COLLECT_NAME_DIR}/etc/nova/nova.conf \
    ${COLLECT_NAME_DIR}/etc/nslcd.conf \
    ${COLLECT_NAME_DIR}/etc/openldap/slapd.conf.backup \
    ${COLLECT_NAME_DIR}/etc/openstack-dashboard/local_settings \
    ${COLLECT_NAME_DIR}/etc/panko/panko.conf \
    ${COLLECT_NAME_DIR}/etc/patching/patching.conf \
    ${COLLECT_NAME_DIR}/etc/proxy/nova-api-proxy.conf \
    ${COLLECT_NAME_DIR}/etc/rabbitmq/murano-rabbitmq.config \
    ${COLLECT_NAME_DIR}/etc/rabbitmq/rabbitmq.config \
    ${COLLECT_NAME_DIR}/etc/sysinv/api-paste.ini \
    ${COLLECT_NAME_DIR}/etc/sysinv/sysinv.conf \
    ${COLLECT_NAME_DIR}/var/extra/platform/sysinv/*/sysinv.conf.default \
    ${COLLECT_NAME_DIR}/etc/mtc.ini
do
    if [ ! -f $conffile ]; then
        continue
    fi

    sed -i -r 's/^(admin_password) *=.*/\1 = xxxxxx/;
               s/^(auth_encryption_key) *=.*/\1 = xxxxxx/;
               s/^(bindpw) .*/\1 xxxxxx/;
               s/^(rootpw) .*/\1 xxxxxx/;
               s/^(connection) *=.*/\1 = xxxxxx/;
               s/^( *credentials) *=.*/\1 = xxxxxx/;
               s/^(metadata_proxy_shared_secret) *=.*/\1 = xxxxxx/;
               s/^(password) *=.*/\1 = xxxxxx/;
               s/^(rabbit_password) *=.*/\1 = xxxxxx/;
               s/^(sql_connection) *=.*/\1 = xxxxxx/;
               s/^(stack_domain_admin_password) *=.*/\1 = xxxxxx/;
               s/^(transport_url) *=.*/\1 = xxxxxx/;
               s/^(SECRET_KEY) *=.*/\1 = xxxxxx/;
               s/^(keystone_auth_pw) *=.*/\1 = xxxxxx/;
               s/\{default_pass, <<\".*\">>\}/\{default_pass, <<\"xxxxxx\">>\}/' $conffile
done

find ${COLLECT_NAME_DIR} -name server-cert.pem | xargs --no-run-if-empty rm -f
rm -rf ${COLLECT_NAME_DIR}/var/extra/platform/config/*/ssh_config
rm -f ${COLLECT_NAME_DIR}/var/extra/platform/puppet/*/hieradata/secure*.yaml
rm -f ${COLLECT_NAME_DIR}/etc/puppet/cache/hieradata/secure*.yaml

# dir /etc/kubernetes/pki was etc.excluded
if [ -d "/etc/kubernetes/pki" ] ; then
    # grab the public certificates if /etc/kubernetes/pki exists
    mkdir -p ${COLLECT_NAME_DIR}/etc/kubernetes/pki
    cp -a /etc/kubernetes/pki/*.crt ${COLLECT_NAME_DIR}/etc/kubernetes/pki 2>/dev/null 1>/dev/null
fi

# Mask user passwords in sysinv db dump
if [ -f ${COLLECT_NAME_DIR}/var/extra/database/sysinv.db.sql.txt ]; then
    sed -i -r '/COPY i_user/, /^--/ s/^(([^\t]*\t){10})[^\t]*(\t.*)/\1xxxxxx\3/;
               /COPY i_community/, /^--/ s/^(([^\t]*\t){5})[^\t]*(\t.*)/\1xxxxxx\3/;
               /COPY i_trap_destination/, /^--/ s/^(([^\t]*\t){6})[^\t]*(\t.*)/\1xxxxxx\3/;
               s/(identity\t[^\t]*\tpassword\t)[^\t]*/\1xxxxxx/' \
        ${COLLECT_NAME_DIR}/var/extra/database/sysinv.db.sql.txt
fi

# Mask passwords in host profiles
grep -rl '\"name\": \"password\"' ${COLLECT_NAME_DIR}/var/extra/platform/sysinv/ \
    | xargs --no-run-if-empty perl -i -e '
        $prev="";
        while (<>)
        {
            if (/\"name\": \"password\"/)
            {
                $prev =~ s/\"value\": \".*\"/\"value\": \"xxxxxx\"/;
            }
            print $prev;
            $prev=$_;
        }
        print $prev;'

# Cleanup snmp
sed -i -r 's/(rocommunity[^ ]*).*/\1 xxxxxx/' ${COLLECT_NAME_DIR}/var/extra/platform/config/*/snmp/*
sed -i -r 's/(trap2sink *[^ ]*).*/\1 xxxxxx/' ${COLLECT_NAME_DIR}/var/extra/platform/config/*/snmp/*

# Mask passwords in bash.log and history logs
USER_HISTORY_FILES=$(find ${COLLECT_NAME_DIR} -type f -name .bash_history 2>/dev/null)
sed -i -r 's/(snmp-comm-(delete|show)) *((\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*) *){1,}/\1 xxxxxx/;
           s/(snmp.*) *(--community|-c) *(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 \2 xxxxxx/;
           s/(-password)=(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1=xxxxxx/;
           s/(-password) (\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 xxxxxx/g;
           s/(password)'\'': (\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1'\':' xxxxxx/g;
           s/(password):(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)'\''/\1:xxxxxx'\''/g;
           s/(openstack.*) *(--password) *(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 \2 xxxxxx/;
           s/(ldapmodifyuser.*userPassword *)(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 xxxxxx/' \
    ${USER_HISTORY_FILES} \
    ${COLLECT_NAME_DIR}/var/extra/history.info \
    ${COLLECT_NAME_DIR}/var/log/bash.log \
    ${COLLECT_NAME_DIR}/var/log/auth.log \
    ${COLLECT_NAME_DIR}/var/log/user.log \
    ${COLLECT_NAME_DIR}/var/log/ldapscripts.log

for f in ${COLLECT_NAME_DIR}/var/log/bash.log.*.gz \
         ${COLLECT_NAME_DIR}/var/log/auth.log.*.gz \
         ${COLLECT_NAME_DIR}/var/log/user.log.*.gz \
         ${COLLECT_NAME_DIR}/var/log/ldapscripts.log.*.gz
do
    zgrep -qE 'snmp|password' $f || continue
    gunzip $f
    unzipped=${f%%.gz}
    sed -i -r 's/(snmp-comm-(delete|show)) *((\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*) *){1,}/\1 xxxxxx/;
               s/(snmp.*) *(--community|-c) *(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 \2 xxxxxx/;
               s/(-password)=(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1=xxxxxx/;
               s/(-password) (\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 xxxxxx/g;
               s/(password)'\'': (\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1'\':' xxxxxx/g;
               s/(password):(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)'\''/\1:xxxxxx'\''/g;
               s/(openstack.*) *(--password) *(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 \2 xxxxxx/;
               s/(ldapmodifyuser.*userPassword *)(\"[^\"]*\"|'\''[^'"'"']*'"'"'|[^ ]*)/\1 xxxxxx/' $unzipped
    gzip $unzipped
done
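The masking above hinges on one sed idiom: capture the key name, keep it via a backreference, and replace only the value. A minimal sketch of that idiom, run against a hypothetical sample line (GNU sed `-r` extended regex assumed):

```shell
# Sample input is invented for illustration; the capture group (\1)
# preserves the key while the value after '=' is replaced with xxxxxx.
masked=$(printf 'admin_password = s3cret\n' \
    | sed -r 's/^(admin_password) *=.*/\1 = xxxxxx/')
echo "$masked"   # -> admin_password = xxxxxx
```

The same shape repeats for every key in the script; only the captured key name and the separator (`=` vs. a space for `bindpw`/`rootpw`) change.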
58	tools/collector/debian-scripts/collect_networking.sh	Executable file
@@ -0,0 +1,58 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables

source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="networking"
LOGFILE="${extradir}/${SERVICE}.info"
echo "${hostname}: Networking Info ...: ${LOGFILE}"

###############################################################################
# All nodes
###############################################################################
declare -a CMDS=("ip -s link"
                 "ip -4 -s addr"
                 "ip -6 -s addr"
                 "ip -4 -s neigh"
                 "ip -6 -s neigh"
                 "ip -4 rule"
                 "ip -6 rule"
                 "ip -4 route"
                 "ip -6 route"
)

for CMD in "${CMDS[@]}" ; do
    delimiter ${LOGFILE} "${CMD}"
    ${CMD} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

CMD="iptables-save"
delimiter ${LOGFILE} "${CMD}"
${CMD} > ${extradir}/iptables.dump 2>>${COLLECT_ERROR_LOG}

CMD="ip6tables-save"
delimiter ${LOGFILE} "${CMD}"
${CMD} > ${extradir}/ip6tables.dump 2>>${COLLECT_ERROR_LOG}

###############################################################################
# Only Worker
###############################################################################
if [[ "$nodetype" = "worker" || "$subfunction" == *"worker"* ]] ; then
    NAMESPACES=($(ip netns))
    for NS in ${NAMESPACES[@]}; do
        delimiter ${LOGFILE} "${NS}"
        for CMD in "${CMDS[@]}" ; do
            ip netns exec ${NS} ${CMD}
        done
    done >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
fi

exit 0
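Several of these collectors share the same pattern: a `CMDS` array, a `delimiter` header per command, and output appended to one log file. A self-contained sketch of that loop, with a stand-in `delimiter` (the real one is defined in collect_utils) and harmless sample commands:

```shell
# Stand-in for the delimiter helper from collect_utils (assumption:
# it writes a header line separating each command's output).
delimiter() { echo "--- ${2} ---" >> "${1}"; }

LOG=$(mktemp)
declare -a CMDS=("echo one" "echo two")
for CMD in "${CMDS[@]}" ; do
    delimiter "${LOG}" "${CMD}"
    ${CMD} >> "${LOG}"
done
cat "${LOG}"
rm -f "${LOG}"
```

Each command's output lands under its own header, so a reader of the `.info` file can tell which command produced which lines.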
44	tools/collector/debian-scripts/collect_nfv_vim.sh	Normal file
@@ -0,0 +1,44 @@
#! /bin/bash
#
# Copyright (c) 2013-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

LOGFILE="${extradir}/nfv-vim.info"
echo "${hostname}: NFV-Vim Info ......: ${LOGFILE}"

function is_service_active {
    active=`sm-query service vim | grep "enabled-active"`
    if [ -z "$active" ] ; then
        return 0
    else
        return 1
    fi
}

###############################################################################
# Only Controller
###############################################################################

if [ "$nodetype" = "controller" ] ; then
    is_service_active
    if [ "$?" = "0" ] ; then
        exit 0
    fi

    # Assumes that database_dir is unique in /etc/nfv/vim/config.ini
    DATABASE_DIR=$(awk -F "=" '/database_dir/ {print $2}' /etc/nfv/vim/config.ini)

    SQLITE_DUMP="/usr/bin/sqlite3 ${DATABASE_DIR}/vim_db_v1 .dump"

    delimiter ${LOGFILE} "dump database"
    timeout 30 ${SQLITE_DUMP} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
fi

exit 0
154	tools/collector/debian-scripts/collect_openstack.sh	Executable file
@@ -0,0 +1,154 @@
#! /bin/bash
#
# Copyright (c) 2013-2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

# Environment for kubectl
export KUBECONFIG=/etc/kubernetes/admin.conf

SERVICE="openstack"
LOGFILE="${extradir}/${SERVICE}.info"
echo "${hostname}: Openstack Info ....: ${LOGFILE}"

function is_service_active {
    active=$(sm-query service rabbit-fs | grep "enabled-active")
    if [ -z "${active}" ] ; then
        return 0
    else
        return 1
    fi
}

function is_openstack_node {
    local PASS=0
    local FAIL=1
    # NOTE: hostname changes during first configuration
    local this_node=$(cat /proc/sys/kernel/hostname)

    labels=$(kubectl get node ${this_node} \
                --no-headers --show-labels 2>/dev/null | awk '{print $NF}')
    if [[ $labels =~ openstack-control-plane=enabled ]]; then
        return ${PASS}
    else
        return ${FAIL}
    fi
}

function openstack_credentials {
    # Setup openstack admin tenant credentials using environment variables
    unset OS_SERVICE_TOKEN
    export OS_ENDPOINT_TYPE=internalURL
    export CINDER_ENDPOINT_TYPE=internalURL
    export OS_USERNAME=admin
    export OS_PASSWORD=$(TERM=linux /opt/platform/.keyring/*/.CREDENTIAL 2>/dev/null)
    export OS_AUTH_TYPE=password
    export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
    export OS_PROJECT_NAME=admin
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_IDENTITY_API_VERSION=3
    export OS_REGION_NAME=RegionOne
    export OS_INTERFACE=internal
}

function openstack_commands {
    declare -a CMDS=()
    CMDS+=("openstack project list --long")
    CMDS+=("openstack user list --long")
    CMDS+=("openstack service list --long")
    CMDS+=("openstack router list --long")
    CMDS+=("openstack network list --long")
    CMDS+=("openstack subnet list --long")
    CMDS+=("openstack image list --long")
    CMDS+=("openstack volume list --all-projects --long")
    CMDS+=("openstack availability zone list --long")
    CMDS+=("openstack server group list --all-projects --long")
    CMDS+=('openstack server list --all-projects --long -c ID -c Name -c Status -c "Task State" -c "Power State" -c Networks -c "Image Name" -c "Image ID" -c "Flavor Name" -c "Flavor ID" -c "Availability Zone" -c Host -c Properties')
    CMDS+=("openstack stack list --long --all-projects")
    CMDS+=("openstack security group list --all-projects")
    CMDS+=("openstack security group rule list --all-projects --long")
    CMDS+=("openstack keypair list")
    CMDS+=("openstack configuration show")
    CMDS+=("openstack quota list --compute")
    CMDS+=("openstack quota list --volume")
    CMDS+=("openstack quota list --network")
    CMDS+=("openstack host list")
    CMDS+=("openstack hypervisor list --long")
    CMDS+=("openstack hypervisor stats show")
    HOSTS=( $(openstack hypervisor list -f value -c "Hypervisor Hostname" 2>/dev/null) )
    for host in "${HOSTS[@]}" ; do
        CMDS+=("openstack hypervisor show -f yaml ${host}")
    done

    # nova commands
    CMDS+=("nova service-list")

    for CMD in "${CMDS[@]}" ; do
        delimiter ${LOGFILE} "${CMD}"
        eval ${CMD} 2>>${COLLECT_ERROR_LOG} >>${LOGFILE}
        echo >>${LOGFILE}
    done
}

function rabbitmq_usage_stats {
    # RabbitMQ usage stats
    MQ_STATUS="rabbitmqctl status"
    delimiter ${LOGFILE} "${MQ_STATUS} | grep -e '{memory' -A30"
    ${MQ_STATUS} 2>/dev/null | grep -e '{memory' -A30 >> ${LOGFILE}
    echo >>${LOGFILE}

    delimiter ${LOGFILE} "RabbitMQ Queue Info"
    num_queues=$(rabbitmqctl list_queues | wc -l); ((num_queues-=2))
    num_bindings=$(rabbitmqctl list_bindings | wc -l); ((num_bindings-=2))
    num_exchanges=$(rabbitmqctl list_exchanges | wc -l); ((num_exchanges-=2))
    num_connections=$(rabbitmqctl list_connections | wc -l); ((num_connections-=2))
    num_channels=$(rabbitmqctl list_channels | wc -l); ((num_channels-=2))
    arr=($(rabbitmqctl list_queues messages consumers memory | \
        awk '/^[0-9]/ {a+=$1; b+=$2; c+=$3} END {print a, b, c}'))
    messages=${arr[0]}; consumers=${arr[1]}; memory=${arr[2]}
    printf "%6s %8s %9s %11s %8s %8s %9s %10s\n" "queues" "bindings" "exchanges" "connections" "channels" "messages" "consumers" "memory" >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    printf "%6d %8d %9d %11d %8d %8d %9d %10d\n" $num_queues $num_bindings $num_exchanges $num_connections $num_channels $messages $consumers $memory >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
}

###############################################################################
# Only Controller
###############################################################################
if [ "$nodetype" = "controller" ] ; then

    is_service_active
    if [ "$?" = "0" ] ; then
        exit 0
    fi

    # host rabbitmq usage
    rabbitmq_usage_stats

    # Check for openstack label on this node
    if ! is_openstack_node; then
        exit 0
    fi

    # Run as subshell so we don't contaminate environment
    (openstack_credentials; openstack_commands)

    # TODO(jgauld): Should also get containerized rabbitmq usage,
    # need wrapper script rabbitmq-cli
fi

###############################################################################
# collect does not retrieve /etc/keystone dir
# Additional logic included to copy /etc/keystone directory
###############################################################################

mkdir -p ${extradir}/../../etc/
cp -R /etc/keystone/ ${extradir}/../../etc
chmod -R 755 ${extradir}/../../etc/keystone

exit 0
35	tools/collector/debian-scripts/collect_ovs.sh	Normal file
@@ -0,0 +1,35 @@
#! /bin/bash
########################################################################
#
# Copyright (c) 2018 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
########################################################################

# Loads Up Utilities and Commands Variables

source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="ovs"
LOGFILE="${extradir}/${SERVICE}.info"

###############################################################################
# Only Worker Nodes
###############################################################################
if [[ "$nodetype" == "worker" || "$subfunction" == *"worker"* ]] ; then

    if [[ "$vswitch_type" == *ovs* ]]; then
        echo "${hostname}: OVS Info ..........: ${LOGFILE}"

        delimiter ${LOGFILE} "ovsdb-client dump"
        ovsdb-client dump >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

        delimiter ${LOGFILE} "ovs-vsctl show"
        ovs-vsctl --timeout 10 show >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    fi
fi

exit 0
29	tools/collector/debian-scripts/collect_parms	Normal file
@@ -0,0 +1,29 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

#echo "defaults: $1-$2-$3-$4"

if [ -z ${1} ] ; then
    basedir=/scratch
else
    basedir=$1
fi

if [ -z ${2} ] ; then
    extradir=$basedir/var/extra
else
    extradir=$2
fi

if [ -z ${3} ] ; then
    hostname=$HOSTNAME
else
    hostname=$3
fi

mkdir -p ${extradir}
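Each of the three if/else blocks in collect_parms just falls back to a default when a positional parameter is empty. The same defaulting can be expressed with bash `${n:-default}` parameter expansion; a sketch (not a drop-in replacement, since the sourced script sets these as globals):

```shell
# ${1:-/scratch} yields $1 when it is set and non-empty, otherwise the
# default — the same behavior as the [ -z ${1} ] branches above.
basedir=${1:-/scratch}
extradir=${2:-$basedir/var/extra}
hostname=${3:-$HOSTNAME}
echo "$basedir $extradir"
```

Note that `:-` (rather than `-`) treats an empty argument like a missing one, matching the `[ -z ${1} ]` test.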
46	tools/collector/debian-scripts/collect_patching.sh	Executable file
@@ -0,0 +1,46 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="patching"
LOGFILE="${extradir}/${SERVICE}.info"
echo "${hostname}: Patching Info .....: ${LOGFILE}"

###############################################################################
# All nodes
###############################################################################
# FIXME: Debian doesn't support smart channel
#delimiter ${LOGFILE} "smart channel --show"
#smart channel --show 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

###############################################################################
# Only Controller
###############################################################################
if [ "$nodetype" = "controller" ] ; then

    delimiter ${LOGFILE} "sw-patch query"
    sw-patch query 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "sw-patch query-hosts"
    sw-patch query-hosts 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "sw-patch query-hosts --debug"
    sw-patch query-hosts --debug 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "find /opt/patching"
    find /opt/patching 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "find /var/www/pages/updates"
    find /var/www/pages/updates 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

fi

exit 0
117	tools/collector/debian-scripts/collect_psqldb.sh	Executable file
@@ -0,0 +1,117 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

# postgres database commands
PSQL_CMD="sudo -u postgres psql --pset pager=off -q"
PG_DUMP_CMD="sudo -u postgres pg_dump"

SERVICE="database"
DB_DIR="${extradir}/database"
LOGFILE="${extradir}/database.info"
echo "${hostname}: Database Info .....: ${LOGFILE}"

function is_service_active {
    active=`sm-query service postgres | grep "enabled-active"`
    if [ -z "$active" ] ; then
        return 0
    else
        return 1
    fi
}

###############################################################################
# All node types
###############################################################################
mkdir -p ${DB_DIR}

function log_database {
    db_list=( $(${PSQL_CMD} -t -c "SELECT datname FROM pg_database WHERE datistemplate = false;") )
    for db in "${db_list[@]}"; do
        echo "postgres database: ${db}"
        ${PSQL_CMD} -d ${db} -c "
            SELECT
                table_schema,
                table_name,
                pg_size_pretty(table_size) AS table_size,
                pg_size_pretty(indexes_size) AS indexes_size,
                pg_size_pretty(total_size) AS total_size,
                live_tuples,
                dead_tuples
            FROM (
                SELECT
                    table_schema,
                    table_name,
                    pg_table_size(table_name) AS table_size,
                    pg_indexes_size(table_name) AS indexes_size,
                    pg_total_relation_size(table_name) AS total_size,
                    pg_stat_get_live_tuples(table_name::regclass) AS live_tuples,
                    pg_stat_get_dead_tuples(table_name::regclass) AS dead_tuples
                FROM (
                    SELECT
                        table_schema,
                        table_name
                    FROM information_schema.tables
                    WHERE table_schema='public'
                    AND table_type='BASE TABLE'
                ) AS all_tables
                ORDER BY total_size DESC
            ) AS pretty_sizes;
        "
    done >> ${1}
}

DB_EXT=db.sql.txt
function database_dump {
    mkdir -p ${DB_DIR}
    db_list=( $(${PSQL_CMD} -t -c "SELECT datname FROM pg_database WHERE datistemplate = false;") )
    for DB in "${db_list[@]}"; do
        if [ "$DB" != "keystone" -a "$DB" != "ceilometer" ] ; then
            echo "${hostname}: Dumping Database ..: ${DB_DIR}/$DB.$DB_EXT"
            (cd ${DB_DIR} ; sudo -u postgres pg_dump $DB > $DB.$DB_EXT)
        fi
    done
}

###############################################################################
# Only Controller
###############################################################################

if [ "$nodetype" = "controller" ] ; then
    is_service_active
    if [ "$?" = "0" ] ; then
        exit 0
    fi

    # postgres DB sizes
    delimiter ${LOGFILE} "formatted ${PSQL_CMD} -c"
    ${PSQL_CMD} -c "
        SELECT
            pg_database.datname,
            pg_database_size(pg_database.datname),
            pg_size_pretty(pg_database_size(pg_database.datname))
        FROM pg_database
        ORDER BY pg_database_size DESC;
    " >> ${LOGFILE}

    # Number of postgres connections
    delimiter ${LOGFILE} "ps -C postgres -o cmd="
    ps -C postgres -o cmd= >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "call to log_database"
    log_database ${LOGFILE}

    database_dump
fi

exit 0
26	tools/collector/debian-scripts/collect_sm.sh	Normal file
@@ -0,0 +1,26 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="sm"
LOGFILE="${extradir}/sm.info"
echo "${hostname}: Service Management : ${LOGFILE}"

###############################################################################
# Only Controller
###############################################################################

if [ "$nodetype" = "controller" ] ; then
    kill -SIGUSR1 $(</var/run/sm.pid)
    sm-troubleshoot 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}
fi

exit 0
118	tools/collector/debian-scripts/collect_sysinv.sh	Executable file
@@ -0,0 +1,118 @@
#! /bin/bash
#
# Copyright (c) 2013-2021 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="inventory"
LOGFILE="${extradir}/${SERVICE}.info"
RPMLOG="${extradir}/rpm.info"
INVENTORY=${4}

function is_service_active {
    active=`sm-query service management-ip | grep "enabled-active"`
    if [ -z "$active" ] ; then
        return 0
    else
        return 1
    fi
}

function collect_inventory {
    is_service_active
    if [ "$?" = "0" ] ; then
        exit 0
    fi
    echo "${hostname}: System Inventory ..: ${LOGFILE}"

    HOSTNAMES=$(system host-list --nowrap | grep '[0-9]' | cut -d '|' -f 3 | tr -d ' ')
    if [[ -z ${HOSTNAMES} || ${HOSTNAMES} != *"controller"* ]]; then
        echo "Failed to get system host-list" > $LOGFILE
        exit 0
    fi

    # These go into the SERVICE.info file
    delimiter ${LOGFILE} "system show"
    system show 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "system host-list"
    system host-list 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "system datanetwork-list"
    system datanetwork-list 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "system service-list"
    system service-list 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    # delimiter ${LOGFILE} "vm-topology"
    # timeout 60 vm-topology --show all 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    delimiter ${LOGFILE} "system network-list"
    system network-list 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

    for host in ${HOSTNAMES}; do
        delimiter ${LOGFILE} "system host-show ${host}"
        system host-show ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-port-list ${host}"
        system host-port-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-if-list ${host}"
        system host-if-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system interface-network-list ${host}"
        system interface-network-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-ethernet-port-list ${host}"
        system host-ethernet-port-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-cpu-list ${host}"
        system host-cpu-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-memory-list ${host}"
        system host-memory-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-label-list ${host}"
        system host-label-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-disk-list ${host}"
        system host-disk-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-stor-list ${host}"
        system host-stor-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-lvg-list ${host}"
        system host-lvg-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}

        delimiter ${LOGFILE} "system host-pv-list ${host}"
        system host-pv-list ${host} 2>>${COLLECT_ERROR_LOG} >> ${LOGFILE}
    done
}

###############################################################################
# Only Controller
###############################################################################
if [ "$nodetype" = "controller" ] ; then

    echo "${hostname}: Software Config ...: ${RPMLOG}"
    # These go into the rpm.info file
    delimiter ${RPMLOG} "dpkg -l"
    dpkg -l >> ${RPMLOG}

    if [ "${INVENTORY}" = true ] ; then
        collect_inventory
    fi

    # copy /opt/platform to extra dir while filtering out the
    # iso and lost+found dirs
    rsync -a --exclude 'iso' --exclude 'lost+found' /opt/platform ${extradir}
fi

exit 0
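This script hard-codes `dpkg -l` because the Debian variant of collect replaces the CentOS `rpm -qa` calls. A hedged sketch of how a single script could pick the right package-listing command at runtime (the `pkg_list_cmd` helper is hypothetical, not part of collect):

```shell
# Hypothetical helper: prefer dpkg when present (Debian), otherwise
# fall back to rpm (CentOS). Mirrors the rpm -qa -> dpkg -l switch.
pkg_list_cmd() {
    if command -v dpkg >/dev/null 2>&1; then
        echo "dpkg -l"
    else
        echo "rpm -qa"
    fi
}
pkg_list_cmd
```

Collect instead keeps separate debian-scripts, which avoids runtime branching at the cost of maintaining two script trees.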
82	tools/collector/debian-scripts/collect_tc.sh	Executable file
@@ -0,0 +1,82 @@
#! /bin/bash
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="tc"
LOGFILE="${extradir}/tc.info"
echo "${hostname}: Traffic Controls . : ${LOGFILE}"

###############################################################################
# Interface Info
###############################################################################
delimiter ${LOGFILE} "cat /etc/network/interfaces"
if [ -f /etc/network/interfaces ]; then
    cat /etc/network/interfaces >> ${LOGFILE}
else
    echo "/etc/network/interfaces NOT FOUND" >> ${LOGFILE}
fi

delimiter ${LOGFILE} "ip link"
ip link >> ${LOGFILE}

for i in $(ip link | grep mtu | grep eth | awk '{print $2}' | sed 's#:##g'); do

    delimiter ${LOGFILE} "ethtool ${i}"
    ethtool ${i} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /sys/class/net/${i}/speed"
    cat /sys/class/net/${i}/speed >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "ethtool -S ${i}"
    ethtool -S ${i} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

###############################################################################
# TC Configuration Script (/usr/local/bin/tc_setup.sh)
###############################################################################
delimiter ${LOGFILE} "cat /usr/local/bin/tc_setup.sh"
if [ -f /usr/local/bin/tc_setup.sh ]; then
    cat /usr/local/bin/tc_setup.sh >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
else
    echo "/usr/local/bin/tc_setup.sh NOT FOUND" >> ${LOGFILE}
fi

###############################################################################
# TC Configuration
###############################################################################
delimiter ${LOGFILE} "tc qdisc show"
tc qdisc show >> ${LOGFILE}

for i in $(ip link | grep htb | awk '{print $2}' | sed 's#:##g'); do

    delimiter ${LOGFILE} "tc class show dev ${i}"
    tc class show dev ${i} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "tc filter show dev ${i}"
    tc filter show dev ${i} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

###############################################################################
# TC Statistics
###############################################################################
delimiter ${LOGFILE} "tc -s qdisc show"
tc -s qdisc show >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

for i in $(ip link | grep htb | awk '{print $2}' | sed 's#:##g'); do

    delimiter ${LOGFILE} "tc -s class show dev ${i}"
    tc -s class show dev ${i} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "tc -s filter show dev ${i}"
    tc -s filter show dev ${i} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

exit 0
318  tools/collector/debian-scripts/collect_utils  Executable file
@@ -0,0 +1,318 @@
#! /bin/bash
#
# Copyright (c) 2013-2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

##########################################################################################

DEBUG=false

# Fail Codes
PASS=0
FAIL=1
RETRY=2

FAIL_NODETYPE=3

FAIL_TIMEOUT=10
FAIL_TIMEOUT1=11
FAIL_TIMEOUT2=12
FAIL_TIMEOUT3=13
FAIL_TIMEOUT4=14
FAIL_TIMEOUT5=15
FAIL_TIMEOUT6=16
FAIL_TIMEOUT7=17
FAIL_TIMEOUT8=18
FAIL_TIMEOUT9=19

FAIL_SUBCLOUD_TIMEOUT=20

FAIL_PASSWORD=30
FAIL_PERMISSION=31
FAIL_CLEANUP=32
FAIL_UNREACHABLE=33
FAIL_HOSTNAME=34
FAIL_INACTIVE=35
FAIL_PERMISSION_SKIP=36
FAIL_OUT_OF_SPACE=37
FAIL_INSUFFICIENT_SPACE=38
FAIL_INTERNAL=39
FAIL_NO_TARDIR=40
FAIL_NO_TARBALLS=41
FAIL_NO_FILE_SPECIFIED=42
FAIL_FILE_NOT_FOUND=43
FAIL_FILE_EMPTY=44
FAIL_PASSWORD_PROMPT=45
FAIL_MISSING_PARAMETER=46
FAIL_DATE_FORMAT=47
FAIL_NO_HOSTS=48
FAIL_FILE_COPY=49
FAIL_SUBCLOUD=50
FAIL_CONTINUE=51
FAIL_SUBCLOUDNAME=52
FAIL_NO_SUBCLOUDS=53
FAIL_NOT_SYSTEMCONTROLLER=54

# Warnings are above 200
WARN_WARNING=200
WARN_HOSTNAME=201
WARN_SUBCLOUD=202

COLLECT_ERROR="Error:"
COLLECT_DEBUG="Debug:"
COLLECT_WARN="Warning:"

# Failure Strings
FAIL_NOT_ENOUGH_SPACE_STR="Not enough /scratch filesystem space"
FAIL_OUT_OF_SPACE_STR="No space left on device"
FAIL_TAR_OUT_OF_SPACE_STR="tar: Error is not recoverable"
FAIL_INSUFFICIENT_SPACE_STR="Not enough space on device"
FAIL_UNREACHABLE_STR="Unreachable"

FAIL_TIMEOUT_STR="operation timeout"
FAIL_SUBCLOUD_TIMEOUT_STR="subcloud collect timeout"

FAIL_NO_FILE_SPECIFIED_STR="no file specified"
FAIL_FILE_NOT_FOUND_STR="no such file or directory"
FAIL_FILE_EMPTY_STR="file is empty"
FAIL_PASSWORD_PROMPT_STR="password for"

FAIL_DATE_FORMAT_STR="date format"
FAIL_INACTIVE_STR="not active"
FAIL_NO_HOSTS_STR="empty host list"
FAIL_NO_SUBCLOUDS_STR="empty subcloud list"
FAIL_MISSING_PARAMETER_STR="missing parameter"
FAIL_FILE_COPY_STR="failed to copy"
FAIL_CONTINUE_STR="cannot continue"

# The minimum amount of % free space on /scratch to allow collect to proceed
MIN_PERCENT_SPACE_REQUIRED=75

# Subcloud collect stops when avail scratch drops below this threshold.
# Use collect -sc --continue to tell collect to continue collecting subclouds
# from where it left off.
declare -i COLLECT_BASE_DIR_FULL_THRESHOLD=2147484 # 2GiB in K blocks rounded up

# Log file path/names
COLLECT_LOG=/var/log/collect.log
COLLECT_ERROR_LOG=/tmp/collect_error.log
HOST_COLLECT_ERROR_LOG="/tmp/host_collect_error.log"

DCROLE_SYSTEMCONTROLLER="systemcontroller"
DCROLE_SUBCLOUD="subcloud"

function source_openrc_if_needed
{
    # get the node and subfunction types
    nodetype=""
    subfunction=""
    PLATFORM_CONF=/etc/platform/platform.conf
    if [ -e ${PLATFORM_CONF} ] ; then
        source ${PLATFORM_CONF}
    fi

    if [ "${nodetype}" != "controller" -a "${nodetype}" != "worker" -a "${nodetype}" != "storage" ] ; then
        logger -t ${COLLECT_TAG} "could not identify nodetype ($nodetype)"
        exit $FAIL_NODETYPE
    fi

    ACTIVE=false
    if [ "$nodetype" == "controller" ] ; then
        # get local host activity state
        OPENRC="/etc/platform/openrc"
        if [ -e "${OPENRC}" ] ; then
            OS_PASSWORD=""
            source ${OPENRC} 2>/dev/null 1>/dev/null
            if [ "${OS_PASSWORD}" != "" ] ; then
                ACTIVE=true
            fi
        fi
    fi
}
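The nodetype check above works because platform.conf is a plain `key=value` file that can be sourced directly. A minimal sketch of that pattern, using a temporary file as a hypothetical stand-in for the real /etc/platform/platform.conf:

```shell
# Hypothetical stand-in for /etc/platform/platform.conf;
# sourcing it populates nodetype just as the function expects.
conf=$(mktemp)
echo 'nodetype=controller' > "${conf}"
nodetype=""
. "${conf}"
rm -f "${conf}"
echo "${nodetype}"
```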

# Setup an expect command completion file.
# This is used to force serialization of expect
# sequences and highlight command completion
collect_done="collect done"
cmd_done_sig="expect done"
cmd_done_file="/usr/local/sbin/expect_done"

# Compression Commands
TAR_ZIP_CMD="tar -cvzf"
TAR_UZIP_CMD="tar -xvzf"
TAR_CMD="tar -cvhf"
TAR_CMD_APPEND="tar -rvhf"
UNTAR_CMD="tar -xvf"
ZIP_CMD="gzip"
NICE_CMD="/usr/bin/nice -n19"
IONICE_CMD="/usr/bin/ionice -c2 -n7"
COLLECT_TAG="COLLECT"

STARTDATE_OPTION="--start-date"
ENDDATE_OPTION="--end-date"

PROCESS_DETAIL_CMD="ps -e -H -o ruser,tid,pid,ppid,flags,stat,policy,rtprio,nice,priority,rss:10,vsz:10,sz:10,psr,stime,tty,cputime,wchan:14,cmd"
BUILD_INFO_CMD="cat /etc/build.info"

################################################################################
# Log Debug, Info or Error log message to syslog
################################################################################
function log
{
    logger -t ${COLLECT_TAG} "$@"
}

function ilog
{
    echo "$@"
    logger -t ${COLLECT_TAG} "$@"
}

function elog
{
    echo "${COLLECT_ERROR} $@"
    logger -t ${COLLECT_TAG} "${COLLECT_ERROR} $@"
}

function wlog
{
    echo "${COLLECT_WARN} $@"
    logger -t ${COLLECT_TAG} "${COLLECT_WARN} $@"
}

function set_debug_mode()
{
    DEBUG=${1}
}

function dlog()
{
    if [ "$DEBUG" == true ] ; then
        logger -t ${COLLECT_TAG} "${COLLECT_DEBUG} $@"
        echo "$(date) ${COLLECT_DEBUG} $@"
    fi
}

function delimiter()
{
    echo "--------------------------------------------------------------------" >> ${1} 2>>${COLLECT_ERROR_LOG}
    echo "`date` : ${myhostname} : ${2}" >> ${1} 2>>${COLLECT_ERROR_LOG}
    echo "--------------------------------------------------------------------" >> ${1} 2>>${COLLECT_ERROR_LOG}
}

function log_slabinfo()
{
    PAGE_SIZE=$(getconf PAGE_SIZE)
    cat /proc/slabinfo | awk -v page_size_B=${PAGE_SIZE} '
        BEGIN {page_KiB = page_size_B/1024; TOT_KiB = 0;}
        (NF == 17) {
            gsub(/[<>]/, "");
            printf("%-22s %11s %8s %8s %10s %12s %1s %5s %10s %12s %1s %12s %9s %11s %8s\n",
                   $2, $3, $4, $5, $6, $7, $8, $10, $11, $12, $13, $15, $16, $17, "KiB");
        }
        (NF == 16) {
            num_objs=$3; obj_per_slab=$5; pages_per_slab=$6;
            KiB = (obj_per_slab > 0) ? page_KiB*num_objs/obj_per_slab*pages_per_slab : 0;
            TOT_KiB += KiB;
            printf("%-22s %11d %8d %8d %10d %12d %1s %5d %10d %12d %1s %12d %9d %11d %8d\n",
                   $1, $2, $3, $4, $5, $6, $7, $9, $10, $11, $12, $14, $15, $16, KiB);
        }
        END {
            printf("%-22s %11s %8s %8s %10s %12s %1s %5s %10s %12s %1s %12s %9s %11s %8d\n",
                   "TOTAL", "-", "-", "-", "-", "-", ":", "-", "-", "-", ":", "-", "-", "-", TOT_KiB);
        }
    ' >> ${1} 2>>${COLLECT_ERROR_LOG}
}

###########################################################################
#
# Name       : collect_errors
#
# Description: search COLLECT_ERROR_LOG for "No space left on device" logs
#              Return 0 if no such logs are found.
#              Return 1 if such logs are found
#
# Assumptions: Caller should assume a non-zero return as an indication of
#              a corrupt or incomplete collect log
#
# Create logs and screen echos that record the error for the user.
#
# May look for other errors in the future
#
###########################################################################

listOfOutOfSpaceErrors=(
    "${FAIL_OUT_OF_SPACE_STR}"
    "${FAIL_TAR_OUT_OF_SPACE_STR}"
    "${FAIL_INSUFFICIENT_SPACE_STR}"
)

function collect_errors()
{
    local host=${1}
    local RC=0

    if [ -e "${COLLECT_ERROR_LOG}" ] ; then

        ## now loop through known space related error strings
        index=0
        while [ "x${listOfOutOfSpaceErrors[index]}" != "x" ] ; do
            grep -q "${listOfOutOfSpaceErrors[index]}" ${COLLECT_ERROR_LOG}
            if [ "$?" == "0" ] ; then

                string="failed to collect from ${host} (reason:${FAIL_OUT_OF_SPACE}:${FAIL_OUT_OF_SPACE_STR})"

                # /var/log/user.log it
                logger -t ${COLLECT_TAG} "${string}"

                # logs that show up in the foreground
                echo "${string}"
                echo "Increase available space in ${host}:${COLLECT_BASE_DIR} and retry operation."

                # return error code
                RC=1
                break
            fi
            index=$(($index+1))
        done
    fi
    return ${RC}
}
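The scan that collect_errors performs reduces to a grep loop over known strings. An illustrative standalone sketch, where the temp file and strings stand in for COLLECT_ERROR_LOG and listOfOutOfSpaceErrors:

```shell
# Illustrative only: the log file and error strings are local
# stand-ins for COLLECT_ERROR_LOG and listOfOutOfSpaceErrors.
errlog=$(mktemp)
echo "tar: Error is not recoverable: exiting now" > "${errlog}"
found=no
for s in "No space left on device" "tar: Error is not recoverable"; do
    # grep -q matches silently; first hit ends the scan
    if grep -q "${s}" "${errlog}"; then
        found=yes
        break
    fi
done
rm -f "${errlog}"
echo "${found}"
```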

############################################################################
#
# Name       : space_precheck
#
# Description:
#
############################################################################

function space_precheck()
{
    HOSTNAME=${1}
    COLLECT_BASE_DIR=${2}
    COLLECT_DIR_PCENT_CMD="df --output=pcent ${COLLECT_BASE_DIR}"

    space="`${COLLECT_DIR_PCENT_CMD}`"
    space1=`echo "${space}" | grep -v Use`
    size=`echo ${space1} | cut -f 1 -d '%'`
    if [ ${size} -ge 0 -a ${size} -le 100 ] ; then
        if [ ${size} -ge ${MIN_PERCENT_SPACE_REQUIRED} ] ; then
            ilog "${COLLECT_BASE_DIR} is ${size}% full"
            echo "${FAIL_INSUFFICIENT_SPACE_STR}"
            wlog "${HOSTNAME}:${COLLECT_BASE_DIR} does not have enough available space to perform collect"
            wlog "${HOSTNAME}:${COLLECT_BASE_DIR} must be below ${MIN_PERCENT_SPACE_REQUIRED}% to perform collect"
            wlog "Increase available space in ${HOSTNAME}:${COLLECT_BASE_DIR} and retry operation."
            exit ${FAIL_INSUFFICIENT_SPACE}
        fi
    else
        wlog "unable to parse available space from '${COLLECT_DIR_PCENT_CMD}' output"
    fi
}
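The percent-used parsing inside space_precheck can be exercised on its own. A minimal sketch, where '/tmp' is just an example mount point standing in for COLLECT_BASE_DIR (GNU df prints a "Use%" header line, hence the grep -v Use):

```shell
# Example mount point; the real function is passed COLLECT_BASE_DIR.
space=$(df --output=pcent /tmp)
space1=$(echo "${space}" | grep -v Use)
size=$(echo ${space1} | cut -f 1 -d '%')
echo "${size}"
```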

41  tools/collector/debian-scripts/etc.exclude  Normal file
@@ -0,0 +1,41 @@
/etc/postgresql
/etc/alternatives
/etc/terminfo
/etc/tempest
/etc/security
/etc/yum
/etc/collect
/etc/collect.d
/etc/logrotate.d
/etc/logrotate*
/etc/keystone
/etc/pam.d
/etc/environment
/etc/sudoers.d
/etc/sudoers
/etc/passwd
/etc/passwd-
/etc/shadow
/etc/shadow-
/etc/gshadow
/etc/gshadow-
/etc/group
/etc/group-
/etc/ssh
/etc/X11
/etc/bluetooth
/etc/chatscripts
/etc/cron*
/etc/rc5.d
/etc/rc4.d
/etc/rc1.d
/etc/rc2.d
/etc/bash_completion.d
/etc/pm
/etc/systemd/system/*.mount
/etc/systemd/system/*.socket
/etc/systemd/system/lvm2-lvmetad.service
/etc/systemd/system/ctrl-alt-del.target
/etc/ssl
/etc/mtc/tmp
/etc/kubernetes/pki
1  tools/collector/debian-scripts/expect_done  Executable file
@@ -0,0 +1 @@
expect done

232  tools/collector/debian-scripts/mariadb-cli.sh  Executable file
@@ -0,0 +1,232 @@
#!/bin/bash

# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# This script is a wrapper to the containerized mariadb-server mysql client.
# This provides access to MariaDB databases.
#
# There are three modes of operation:
# - no command specified gives an interactive mysql shell
# - command specified executes a single mysql command
# - dump option to dump database contents to sql text file
#
set -euo pipefail

# Define minimal path
PATH=/bin:/usr/bin:/usr/local/bin

# Environment for kubectl
export KUBECONFIG=/etc/kubernetes/admin.conf

# Process input options
SCRIPT=$(basename $0)
OPTS=$(getopt -o dh --long debug,help,command:,database:,exclude:,dump -n ${SCRIPT} -- "$@")
if [ $? != 0 ]; then
    echo "Failed parsing options." >&2
    exit 1
fi
eval set -- "$OPTS"

DEBUG=false
HELP=false
DUMP=false
COMMAND=""
DATABASE=""
EXCLUDE=""
while true
do
    case "$1" in
        -d | --debug ) DEBUG=true; shift ;;
        -h | --help ) HELP=true; shift ;;
        --command )
            COMMAND="$2"
            shift 2
            ;;
        --database )
            DATABASE="$2"
            shift 2
            ;;
        --exclude )
            EXCLUDE="$2"
            shift 2
            ;;
        --dump )
            DUMP=true
            shift
            ;;
        -- )
            shift
            break
            ;;
        * )
            break
            ;;
    esac
done
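The getopt/eval/case pattern above can be seen end to end on a fixed argument list. A minimal sketch; the option names mirror the script's, but the argument values here are illustrative:

```shell
# util-linux getopt normalizes the arguments, eval set -- reloads them,
# and the case loop consumes them until the '--' sentinel.
OPTS=$(getopt -o d --long debug,command: -n example -- --debug --command 'show databases')
eval set -- "$OPTS"
DBG=false
CMD=""
while true; do
    case "$1" in
        -d | --debug ) DBG=true; shift ;;
        --command ) CMD="$2"; shift 2 ;;
        -- ) shift; break ;;
        * ) break ;;
    esac
done
echo "DBG=${DBG} CMD=${CMD}"
```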

# Treat remaining arguments as commands + options
shift $((OPTIND-1))
OTHERARGS="$@"

if [ ${HELP} == 'true' ]; then
    echo "Usage: ${SCRIPT} [-d|--debug] [-h|--help] [--database <db>] [--exclude <db,...>] [--command <cmd>] [--dump]"
    echo "Options:"
    echo " -d | --debug        : display debug information"
    echo " -h | --help         : this help"
    echo " --database <db>     : connect to database db"
    echo " --exclude <db1,...> : list of databases to exclude"
    echo " --command <cmd>     : execute mysql command cmd"
    echo " --dump              : dump database(s) to sql file in current directory"
    echo
    echo "Command option examples:"
    echo
    echo "Interactive mysql shell:"
    echo "  mariadb-cli"
    echo "  mariadb-cli --database nova"
    echo "  mariadb-cli --command 'show databases'"
    echo "  mariadb-cli --database nova --command 'select * from compute_nodes'"
    echo
    echo "Dump MariaDB databases to sql file:"
    echo "  mariadb-cli --dump"
    echo "  mariadb-cli --dump --database nova"
    echo "  mariadb-cli --dump --exclude keystone"
    exit 0
fi

# Logger setup
LOG_FACILITY=user
LOG_PRIORITY=info
function LOG {
    logger -t "${0##*/}[$$]" -p ${LOG_FACILITY}.${LOG_PRIORITY} "$@"
    echo "${0##*/}[$$]" "$@"
}
function ERROR {
    MSG="ERROR"
    LOG "${MSG} $@"
}

function is_openstack_node {
    local PASS=0
    local FAIL=1
    # NOTE: hostname changes during first configuration
    local this_node=$(cat /proc/sys/kernel/hostname)

    labels=$(kubectl get node ${this_node} \
        --no-headers --show-labels 2>/dev/null | awk '{print $NF}')
    if [[ $labels =~ openstack-control-plane=enabled ]]; then
        return ${PASS}
    else
        return ${FAIL}
    fi
}
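The label test in is_openstack_node is a substring match against the node's comma-separated label list. A sketch of that check on a fabricated label string; on a real node the value comes from `kubectl get node --show-labels`:

```shell
# Fabricated example label list; grep -q mirrors the =~ match used above.
labels="kubernetes.io/hostname=controller-0,openstack-control-plane=enabled"
if echo "${labels}" | grep -q "openstack-control-plane=enabled"; then
    on_openstack=yes
else
    on_openstack=no
fi
echo "${on_openstack}"
```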

# Selected options
if [ ${DEBUG} == 'true' ]; then
    LOG "Options: DUMP=${DUMP} OTHERARGS: ${OTHERARGS}"
    if [ ! -z "${DATABASE}" ]; then
        LOG "Options: DATABASE:${DATABASE}"
    fi
    if [ ! -z "${EXCLUDE}" ]; then
        LOG "Options: EXCLUDE:${EXCLUDE}"
    fi
    if [ ! -z "${COMMAND}" ]; then
        LOG "Options: COMMAND:${COMMAND}"
    fi
fi

# Check for openstack label on this node
if ! is_openstack_node; then
    ERROR "This node is not configured for openstack."
    exit 1
fi

# Determine running mariadb pods
MARIADB_PODS=( $(kubectl get pods -n openstack \
    --selector=application=mariadb,component=server \
    --field-selector status.phase=Running \
    --output=jsonpath={.items..metadata.name}) )
if [ ${DEBUG} == 'true' ]; then
    LOG "Found mariadb-server pods: ${MARIADB_PODS[@]}"
fi

# Get first available mariadb pod with container we can exec
DBPOD=""
for POD in "${MARIADB_PODS[@]}"
do
    kubectl exec -it -n openstack ${POD} -c mariadb -- pwd 1>/dev/null 2>/dev/null
    RC=$?
    if [ ${RC} -eq 0 ]; then
        DBPOD=${POD}
        break
    fi
done
if [ -z "${DBPOD}" ]; then
    ERROR "Could not find mariadb-server pod."
    exit 1
fi
if [ ${DEBUG} == 'true' ]; then
    LOG "Found mariadb-server pod: ${DBPOD}"
fi

EVAL='eval env 1>/dev/null'
DBOPTS='--password=$MYSQL_DBADMIN_PASSWORD --user=$MYSQL_DBADMIN_USERNAME'

if [ ${DUMP} == 'true' ]; then
    # Dump database contents to sql text file
    DB_EXT=sql

    DATABASES=()
    if [ ! -z "${DATABASE}" ]; then
        DATABASES+=( $DATABASE )
    else
        # Get list of databases
        MYSQL_CMD="${EVAL}; mysql ${DBOPTS} -e 'show databases' -sN --disable-pager"
        if [ ${DEBUG} == 'true' ]; then
            LOG "MYSQL_CMD: ${MYSQL_CMD}"
        fi

        # Suppress error: line from stdout, e.g.,
        # error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1
        # Exclude databases: mysql, information_schema, performance_schema
        # Remove linefeed control character.
        DATABASES=( $(kubectl exec -it -n openstack ${DBPOD} -c mariadb -- bash -c "${MYSQL_CMD}" | \
            grep -v -e error: -e mysql -e information_schema -e performance_schema | tr -d '\r') )
    fi

    for dbname in "${DATABASES[@]}"
    do
        re=\\b"${dbname}"\\b
        if [[ "${EXCLUDE}" =~ ${re} ]]; then
            LOG "excluding: ${dbname}"
            continue
        fi

        # NOTE: --skip-opt will show an INSERT for each record
        DUMP_CMD="${EVAL}; mysqldump ${DBOPTS} --skip-opt --skip-comments --skip-set-charset ${dbname}"
        dbfile=${dbname}.${DB_EXT}
        LOG "Dump database: ${dbname} to file: ${dbfile}"
        if [ ${DEBUG} == 'true' ]; then
            LOG "DUMP_CMD: ${DUMP_CMD}"
        fi
        kubectl exec -it -n openstack ${DBPOD} -c mariadb -- bash -c "${DUMP_CMD}" > ${dbfile}
    done

else
    # Interactive mariadb mysql client
    LOG "Interactive MariaDB mysql shell"
    MYSQL_CMD="${EVAL}; mysql ${DBOPTS} ${DATABASE}"
    if [ ! -z "${COMMAND}" ]; then
        MYSQL_CMD="${MYSQL_CMD} -e '${COMMAND}'"
    fi

    if [ ${DEBUG} == 'true' ]; then
        LOG "MYSQL_CMD: ${MYSQL_CMD}"
    fi
    kubectl exec -it -n openstack ${DBPOD} -c mariadb -- bash -c "${MYSQL_CMD}"
fi

exit 0
14  tools/collector/debian-scripts/run.exclude  Normal file
@@ -0,0 +1,14 @@
/var/run/sanlock/sanlock.sock
/var/run/tgtd.ipc_abstract_namespace.0
/var/run/wdmd/wdmd.sock
/var/run/acpid.socket
/var/run/rpcbind.sock
/var/run/libvirt/libvirt-sock-ro
/var/run/libvirt/libvirt-sock
/var/run/dbus/system_bus_socket
/var/run/named-chroot
/var/run/avahi-daemon
/var/run/neutron/metadata_proxy
/var/run/.vswitch
/var/run/containerd
/var/run/nvidia
1  tools/collector/debian-scripts/varlog.exclude  Normal file
@@ -0,0 +1 @@
/var/log/crash

@@ -1,7 +1,7 @@
 ---
 debname: collector
 debver: 1.0-1
-src_path: scripts
+src_path: debian-scripts
 revision:
 dist: $STX_DIST
 PKG_GITREVCOUNT: true