Upload logs to Google Cloud Storage
===================================
Before using this role, create at least one bucket and set up appropriate access controls or lifecycle events. This role will not automatically create buckets (though it will configure CORS policies).
This role requires the ``google-cloud-storage`` Python package to be installed in the Ansible environment on the Zuul executor. It uses Google Cloud Application Default Credentials.
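For illustration, a post-run playbook on the executor might invoke this role roughly as follows. This is a minimal sketch: the ``zuul_log_bucket`` variable name and the bucket name ``example-zuul-logs`` are assumptions used for illustration, not values taken from this document.

.. code-block:: yaml

   # Sketch of a base job's post-run playbook (runs on the executor).
   - hosts: localhost
     roles:
       - role: upload-logs-gcs
         vars:
           # Assumed variable name; the bucket must already exist (see above).
           zuul_log_bucket: example-zuul-logs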
Role Variables
--------------
This role will not create buckets which do not already exist. If partitioning is not enabled, this is the name of the bucket which will be used. If partitioning is enabled, then this will be used as the prefix for the bucket name which will be separated from the partition name by an underscore. For example, "logs_42" would be the bucket name for partition 42.
Note that you will want to set this to a value that uniquely identifies your Zuul installation.
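As a sketch of the partitioned naming scheme described above (the variable names here are illustrative assumptions, not names defined by this documentation):

.. code-block:: yaml

   # With partitioning enabled, the bucket prefix and partition number are
   # joined with an underscore, e.g. prefix "logs" + partition 42 -> "logs_42".
   vars:
     bucket_prefix: logs
     partition_number: 42
     effective_bucket_name: "{{ bucket_prefix }}_{{ partition_number }}"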
This log upload role normally uses Google Cloud Application Default Credentials; however, it can also operate in a mode where it uses a credential file written by gcp-authdaemon: https://opendev.org/zuul/gcp-authdaemon

To use this mode of operation, supply the path to the credentials file previously written by gcp-authdaemon. Also supply :zuul:rolevar:`upload-logs-gcs.zuul_log_project`.
When using :zuul:rolevar:`upload-logs-gcs.zuul_log_credentials_file`, the name of the Google Cloud project of the log container must also be supplied.
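As a sketch, the two variables for the gcp-authdaemon mode might be supplied together like this (the file path and project name are placeholders, not values from this document):

.. code-block:: yaml

   vars:
     # Path to a credentials file previously written by gcp-authdaemon (placeholder path).
     zuul_log_credentials_file: /var/lib/zuul/gcp-authdaemon/credentials.json
     # Google Cloud project that owns the log bucket (placeholder name).
     zuul_log_project: my-gcp-project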