enhance docs on env vars

Felix Kunde 2022-04-13 14:42:36 +02:00
parent 602e914028
commit 4b833c2436
4 changed files with 64 additions and 22 deletions

docs/administrator.md

@@ -601,15 +601,38 @@ spec:
## Custom Pod Environment Variables
It is possible to configure a ConfigMap as well as a Secret which are used by
the Postgres pods as an additional provider for environment variables. One use
case is a customized Spilo image configured by extra environment variables.
Another case could be to provide custom cloud provider or backup settings.
The operator will assign a set of environment variables to the database pods
that cannot be overridden to guarantee core functionality. Only variables with
'WAL_' and 'LOG_' prefixes can be customized, to allow backup and log shipping
to be specified differently. There are three ways to specify extra environment
variables (or override existing ones) for database pods:
In general the Operator will give preference to the globally configured
variables, to not have the custom ones interfere with core functionality.
Variables with the 'WAL_' and 'LOG_' prefix can be overwritten though, to
allow backup and log shipping to be specified differently.
* [Via ConfigMap](#via-configmap)
* [Via Secret](#via-secret)
* [Via Postgres Cluster Manifest](#via-postgres-cluster-manifest)
The first two options must be referenced from the operator configuration,
making them global settings for all Postgres clusters the operator watches.
One use case is a customized Spilo image that must be configured by extra
environment variables. Another case could be to provide custom cloud
provider or backup settings.
The last option allows for specifying environment variables individually for
every cluster via the `env` section in the manifest, for example, if you use
individual backup locations for each of your clusters, or if you want to disable
WAL archiving for a certain cluster by setting `WAL_S3_BUCKET`, `WAL_GS_BUCKET`
or `AZURE_STORAGE_ACCOUNT` to an empty string.
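For illustration, a minimal manifest sketch of such a per-cluster override,
assuming a cluster named `acid-minimal-cluster` and trimmed to the relevant
part of the spec:

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  env:
  # an empty value disables WAL archiving for this cluster only
  - name: WAL_S3_BUCKET
    value: ""
```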
The operator will give precedence to environment variables in the following
order:
1. Assigned by the operator
2. Clone section (with WAL settings from operator config when `s3_wal_path` is empty)
3. Standby section
4. `env` section in cluster manifest
5. Pod environment secret via operator config
6. Pod environment config map via operator config
7. WAL and logical backup settings from operator config
### Via ConfigMap
@@ -706,7 +729,7 @@ data:
The key-value pairs of the Secret are all accessible as environment variables
to the Postgres StatefulSet/pods.
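As a hedged sketch, such a Secret could look like this (the name and key are
illustrative; it must be referenced via `pod_environment_secret` in the
operator configuration, and Secret values must be base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-pod-secrets
data:
  # base64-encoded value of "my-custom-value"
  MY_CUSTOM_VAR: bXktY3VzdG9tLXZhbHVl
```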
### For individual cluster
### Via Postgres Cluster Manifest
It is possible to define environment variables directly in the Postgres cluster
manifest to configure it individually. The variables must be listed under the
@@ -951,6 +974,10 @@ When the `AWS_REGION` is set, `AWS_ENDPOINT` and `WALE_S3_ENDPOINT` are
generated automatically. `WALG_S3_PREFIX` is identical to `WALE_S3_PREFIX`.
`SCOPE` is the Postgres cluster name.
:warning: If both `AWS_REGION` and `AWS_ENDPOINT` or `WALE_S3_ENDPOINT` are
defined, backups with WAL-E will fail. You can fix it by switching to WAL-G
with `USE_WALG_BACKUP: "true"`.
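A sketch of that switch via the pod environment ConfigMap (the ConfigMap name
is an assumption; it must be referenced via `pod_environment_configmap` in the
operator configuration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
data:
  # use WAL-G instead of WAL-E for base backups
  USE_WALG_BACKUP: "true"
```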
### Google Cloud Platform setup
To configure the operator on GCP, these prerequisites are needed:

docs/reference/operator_parameters.md

@@ -645,7 +645,10 @@ yet officially supported.
empty.
* **aws_region**
AWS region used to store EBS volumes. The default is `eu-central-1`.
AWS region used to store EBS volumes. The default is `eu-central-1`. Note
that this option is not meant for specifying the AWS region for backup and
restore, since it can differ from the EBS region. You have to define
`AWS_REGION` as a [custom environment variable](https://github.com/zalando/postgres-operator/blob/master/docs/administrator.md#custom-pod-environment-variables),
as shown in the sketch after this list.
* **additional_secret_mount**
Additional Secret (aws or gcp credentials) to mount in the pod.
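For illustration, a sketch of overriding the backup/restore region through the
pod environment ConfigMap (the ConfigMap name and region are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
data:
  # backup/restore region, independent of the EBS region configured above
  AWS_REGION: eu-west-1
```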

docs/user.md

@@ -766,15 +766,16 @@ spec:
uid: "efd12e58-5786-11e8-b5a7-06148230260c"
cluster: "acid-minimal-cluster"
timestamp: "2017-12-19T12:40:33+01:00"
s3_wal_path: "s3://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
Here `cluster` is the name of the source cluster that is going to be cloned. A new
cluster will be cloned from S3, using the latest backup before the `timestamp`.
Note that a time zone is required for `timestamp` in the format of `+00:00` which
is UTC. You can specify the `s3_wal_path` of the source cluster or let the
operator try to find it based on the configured `wal_[s3|gs]_bucket` and the
specified `uid`. You can find the UID of the source cluster in its metadata:
Note that a time zone is required for `timestamp` in the format of `+00:00`
which is UTC.
The operator will try to find the WAL location based on the configured
`wal_[s3|gs]_bucket` or `wal_az_storage_account` and the specified `uid`.
You can find the UID of the source cluster in its metadata:
```yaml
apiVersion: acid.zalan.do/v1
@@ -784,6 +785,14 @@ metadata:
uid: efd12e58-5786-11e8-b5a7-06148230260c
```
If your source cluster uses a WAL location different from the global
configuration, you can specify the full path under `s3_wal_path`. For
[Google Cloud Platform](administrator.md#google-cloud-platform-setup)
or [Azure](administrator.md#azure-setup)
it can only be set globally with [custom pod environment variables](administrator.md#custom-pod-environment-variables)
or locally in the Postgres manifest's [`env`](administrator.md#via-postgres-cluster-manifest) section.
For non-AWS S3, the following settings can be set to support cloning from other S3
implementations:
@@ -793,6 +802,7 @@ spec:
uid: "efd12e58-5786-11e8-b5a7-06148230260c"
cluster: "acid-minimal-cluster"
timestamp: "2017-12-19T12:40:33+01:00"
s3_wal_path: "s3://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
s3_endpoint: https://s3.acme.org
s3_access_key_id: 0123456789abcdef0123456789abcdef
s3_secret_access_key: 0123456789abcdef0123456789abcdef
@@ -864,9 +874,8 @@ the PostgreSQL version between source and target cluster has to be the same.
To start a cluster as standby, add the following `standby` section in the YAML
file. You can stream changes from archived WAL files (AWS S3 or Google Cloud
Storage) or from a remote primary where you specify the host address and port.
If you leave out the port, Patroni will use `"5432"`. Only one option can be
specified in the manifest:
Storage) or from a remote primary. Only one option can be specified in the
manifest:
```yaml
spec:
@@ -874,12 +883,19 @@ spec:
s3_wal_path: "s3://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
For GCS, you have to define `STANDBY_GOOGLE_APPLICATION_CREDENTIALS` as a
[custom pod environment variable](administrator.md#custom-pod-environment-variables).
It is not set from the config to allow for overriding.
```yaml
spec:
standby:
gs_wal_path: "gs://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
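One hedged way to provide that variable is the pod environment ConfigMap
(name and value are illustrative assumptions; the value here assumes a GCP
service account key file mounted into the pod):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
data:
  # path to a mounted service account key file (illustrative)
  STANDBY_GOOGLE_APPLICATION_CREDENTIALS: "/var/secrets/google/key.json"
```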
For a remote primary, you specify the host address and optionally the port.
If you leave out the port, Patroni will use `"5432"`.
```yaml
spec:
standby:

pkg/cluster/k8sres.go

@@ -933,9 +933,6 @@ func (c *Cluster) generateSpiloPodEnvVars(
envVars = appendEnvVars(envVars, opConfigEnvVars...)
//sort.Slice(envVars,
// func(i, j int) bool { return envVars[i].Name < envVars[j].Name })
return envVars
}
@@ -1863,7 +1860,6 @@ func (c *Cluster) generateCloneEnvironment(description *acidv1.CloneDescription)
result = append(result, v1.EnvVar{Name: "CLONE_AZURE_STORAGE_ACCOUNT", Value: c.OpConfig.WALAZStorageAccount})
} else {
c.logger.Error("cannot figure out S3 or GS bucket or AZ storage account. All are empty in config.")
return result
}
// append suffix because WAL location name is not the whole path