refactor spilo env var generation (#1848)

* refactor spilo env generation
* enhance docs on env vars
* add unit test for appendEnvVar
Felix Kunde 2022-04-14 11:47:33 +02:00 committed by GitHub
parent 483bf624ee
commit eecd13169c
GPG Key ID: 4AEE18F83AFDEB23
7 changed files with 1231 additions and 910 deletions


@ -601,15 +601,39 @@ spec:
## Custom Pod Environment Variables

The operator will assign a set of environment variables to the database pods
that cannot be overridden to guarantee core functionality. Only variables with
'WAL_' and 'LOG_' prefixes can be customized to allow for backup and log
shipping to be specified differently. There are three ways to specify extra
environment variables (or override existing ones) for database pods:

* [Via ConfigMap](#via-configmap)
* [Via Secret](#via-secret)
* [Via Postgres Cluster Manifest](#via-postgres-cluster-manifest)

The first two options must be referenced from the operator configuration,
making them global settings for all Postgres clusters the operator watches.
One use case is a customized Spilo image that must be configured by extra
environment variables. Another case could be to provide custom cloud
provider or backup settings.

The last option allows for specifying environment variables individual to
every cluster via the `env` section in the manifest, for example, if you use
individual backup locations for each of your clusters, or if you want to
disable WAL archiving for a certain cluster by setting `WAL_S3_BUCKET`,
`WAL_GS_BUCKET` or `AZURE_STORAGE_ACCOUNT` to an empty string.
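For illustration, a per-cluster override via the manifest's `env` section might look like this (the cluster and bucket names are hypothetical):

```yaml
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-test-cluster   # hypothetical cluster name
spec:
  env:
  - name: WAL_S3_BUCKET
    value: my-team-backups  # hypothetical; overrides the operator-wide bucket
  - name: WAL_GS_BUCKET
    value: ""               # empty string disables WAL shipping to GCS for this cluster
```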
The operator will give precedence to environment variables in the following
order (e.g. a variable defined in 4. overrides a variable with the same name
in 5.):

1. Assigned by the operator
2. Clone section (with WAL settings from operator config when `s3_wal_path` is empty)
3. Standby section
4. `env` section in cluster manifest
5. Pod environment secret via operator config
6. Pod environment config map via operator config
7. WAL and logical backup settings from operator config
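This precedence order falls out of a first-wins merge: higher-priority sources are appended first, and a variable is skipped if its name is already present. A minimal standalone sketch of that merge (the `EnvVar` type here is a stand-in for Kubernetes' `v1.EnvVar`, and the values are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// EnvVar is a minimal stand-in for Kubernetes' v1.EnvVar.
type EnvVar struct {
	Name  string
	Value string
}

// appendEnvVars mirrors the operator's first-wins merge: a variable is only
// appended if its (upper-cased) name is not already present, so sources
// merged earlier take precedence over sources merged later.
func appendEnvVars(envs []EnvVar, extra ...EnvVar) []EnvVar {
	for _, env := range extra {
		env.Name = strings.ToUpper(env.Name)
		present := false
		for _, e := range envs {
			if e.Name == env.Name {
				present = true
				break
			}
		}
		if !present {
			envs = append(envs, env)
		}
	}
	return envs
}

func main() {
	// merge order: operator-assigned, then manifest `env`, then operator config
	vars := []EnvVar{{Name: "SCOPE", Value: "acid-minimal-cluster"}}
	vars = appendEnvVars(vars, EnvVar{Name: "WAL_S3_BUCKET", Value: "per-cluster-bucket"}) // manifest env
	vars = appendEnvVars(vars, EnvVar{Name: "WAL_S3_BUCKET", Value: "global-bucket"})      // operator config, ignored
	for _, v := range vars {
		fmt.Printf("%s=%s\n", v.Name, v.Value)
	}
}
```

Running this prints `WAL_S3_BUCKET=per-cluster-bucket`: the manifest value survives because it was merged before the operator-config value.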
### Via ConfigMap
@ -706,7 +730,7 @@ data:
The key-value pairs of the Secret are all accessible as environment variables
to the Postgres StatefulSet/pods.
### Via Postgres Cluster Manifest
It is possible to define environment variables directly in the Postgres cluster
manifest to configure it individually. The variables must be listed under the
@ -951,6 +975,10 @@ When the `AWS_REGION` is set, `AWS_ENDPOINT` and `WALE_S3_ENDPOINT` are
generated automatically. `WALG_S3_PREFIX` is identical to `WALE_S3_PREFIX`.
`SCOPE` is the Postgres cluster name.
:warning: If both `AWS_REGION` and `AWS_ENDPOINT` or `WALE_S3_ENDPOINT` are
defined, backups with WAL-E will fail. You can fix it by switching to WAL-G
with `USE_WALG_BACKUP: "true"`.
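If you hit this, the switch can be made globally through the pod environment ConfigMap described above. A sketch of such a ConfigMap (the name and namespace are illustrative and must match the `pod_environment_configmap` operator option):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-env-overrides   # must match pod_environment_configmap in the operator config
  namespace: default
data:
  USE_WALG_BACKUP: "true"   # use WAL-G instead of WAL-E for base backups
```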
### Google Cloud Platform setup

To configure the operator on GCP, these prerequisites are needed:


@ -645,7 +645,10 @@ yet officially supported.
empty.

* **aws_region**
  AWS region used to store EBS volumes. The default is `eu-central-1`. Note,
  this option is not meant for specifying the AWS region for backups and
  restore, since it can be separate from the EBS region. You have to define
  AWS_REGION as a [custom environment variable](../administrator.md#custom-pod-environment-variables).

* **additional_secret_mount**
  Additional Secret (aws or gcp credentials) to mount in the pod.


@ -766,15 +766,15 @@ spec:
    uid: "efd12e58-5786-11e8-b5a7-06148230260c"
    cluster: "acid-minimal-cluster"
    timestamp: "2017-12-19T12:40:33+01:00"
```

Here `cluster` is a name of a source cluster that is going to be cloned. A new
cluster will be cloned from S3, using the latest backup before the `timestamp`.
Note, a time zone is required for `timestamp` in the format of `+00:00` (UTC).

The operator will try to find the WAL location based on the configured
`wal_[s3|gs]_bucket` or `wal_az_storage_account` and the specified `uid`.
You can find the UID of the source cluster in its metadata:
```yaml
apiVersion: acid.zalan.do/v1

@ -784,6 +784,14 @@ metadata:

  uid: efd12e58-5786-11e8-b5a7-06148230260c
```
If your source cluster uses a WAL location different from the global
configuration, you can specify the full path under `s3_wal_path`. For
[Google Cloud Platform](administrator.md#google-cloud-platform-setup)
or [Azure](administrator.md#azure-setup)
it can only be set globally with [custom Pod environment variables](administrator.md#custom-pod-environment-variables)
or locally in the Postgres manifest's [`env`](administrator.md#via-postgres-cluster-manifest) section.
For non-AWS S3, the following settings can be set to support cloning from other
S3 implementations:
@ -793,6 +801,7 @@ spec:
    uid: "efd12e58-5786-11e8-b5a7-06148230260c"
    cluster: "acid-minimal-cluster"
    timestamp: "2017-12-19T12:40:33+01:00"
    s3_wal_path: "s3://custom/path/to/bucket"
    s3_endpoint: https://s3.acme.org
    s3_access_key_id: 0123456789abcdef0123456789abcdef
    s3_secret_access_key: 0123456789abcdef0123456789abcdef
@ -864,9 +873,8 @@ the PostgreSQL version between source and target cluster has to be the same.
To start a cluster as standby, add the following `standby` section in the YAML
file. You can stream changes from archived WAL files (AWS S3 or Google Cloud
Storage) or from a remote primary. Only one option can be specified in the
manifest:
@ -874,12 +882,19 @@ spec:

```yaml
spec:
  standby:
    s3_wal_path: "s3://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
For GCS, you have to define STANDBY_GOOGLE_APPLICATION_CREDENTIALS as a
[custom pod environment variable](administrator.md#custom-pod-environment-variables).
It is not set from the config to allow for overriding.
```yaml
spec:
  standby:
    gs_wal_path: "gs://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
For a remote primary, you specify the host address and optionally the port.
If you leave out the port, Patroni will use `"5432"`.
```yaml
spec:
  standby:


@ -1318,7 +1318,7 @@ func (c *Cluster) initAdditionalOwnerRoles() {
	}
}

if len(memberOf) > 0 {
	namespace := c.Namespace
	additionalOwnerPgUser := spec.PgUser{
		Origin: spec.RoleOriginSpilo,


@ -26,6 +26,8 @@ import (
const (
	superUserName       = "postgres"
	replicationUserName = "standby"
	exampleSpiloConfig  = `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_connections":"100","max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`
	spiloConfigDiff     = `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`
)

var logger = logrus.New().WithField("test", "cluster")
@ -957,7 +959,7 @@ func TestCompareEnv(t *testing.T) {
		},
		{
			Name:  "SPILO_CONFIGURATION",
			Value: exampleSpiloConfig,
		},
	},
	ExpectedResult: true,

@ -978,7 +980,7 @@ func TestCompareEnv(t *testing.T) {

		},
		{
			Name:  "SPILO_CONFIGURATION",
			Value: spiloConfigDiff,
		},
	},
	ExpectedResult: true,

@ -999,7 +1001,7 @@ func TestCompareEnv(t *testing.T) {

		},
		{
			Name:  "SPILO_CONFIGURATION",
			Value: exampleSpiloConfig,
		},
	},
	ExpectedResult: false,

@ -1024,7 +1026,7 @@ func TestCompareEnv(t *testing.T) {

		},
		{
			Name:  "SPILO_CONFIGURATION",
			Value: exampleSpiloConfig,
		},
	},
	ExpectedResult: false,

@ -1041,7 +1043,7 @@ func TestCompareEnv(t *testing.T) {

		},
		{
			Name:  "SPILO_CONFIGURATION",
			Value: exampleSpiloConfig,
		},
	},
	ExpectedResult: false,


@ -766,9 +766,10 @@ func (c *Cluster) generateSpiloPodEnvVars(
	uid types.UID,
	spiloConfiguration string,
	cloneDescription *acidv1.CloneDescription,
	standbyDescription *acidv1.StandbyDescription) []v1.EnvVar {

	// hard-coded set of environment variables we need
	// to guarantee core functionality of the operator
	envVars := []v1.EnvVar{
		{
			Name: "SCOPE",
@ -875,59 +876,75 @@ func (c *Cluster) generateSpiloPodEnvVars(
	envVars = append(envVars, c.generateCloneEnvironment(cloneDescription)...)
}

if standbyDescription != nil {
	envVars = append(envVars, c.generateStandbyEnvironment(standbyDescription)...)
}

// fetch cluster-specific variables that will override all subsequent global variables
if len(c.Spec.Env) > 0 {
	envVars = appendEnvVars(envVars, c.Spec.Env...)
}

// fetch variables from custom environment Secret
// that will override all subsequent global variables
secretEnvVarsList, err := c.getPodEnvironmentSecretVariables()
if err != nil {
	c.logger.Warningf("%v", err)
}
envVars = appendEnvVars(envVars, secretEnvVarsList...)

// fetch variables from custom environment ConfigMap
// that will override all subsequent global variables
configMapEnvVarsList, err := c.getPodEnvironmentConfigMapVariables()
if err != nil {
	c.logger.Warningf("%v", err)
}
envVars = appendEnvVars(envVars, configMapEnvVarsList...)

// global variables derived from operator configuration
opConfigEnvVars := make([]v1.EnvVar, 0)

if c.OpConfig.WALES3Bucket != "" {
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.OpConfig.WALGSBucket != "" {
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_GS_BUCKET", Value: c.OpConfig.WALGSBucket})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.OpConfig.WALAZStorageAccount != "" {
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "AZURE_STORAGE_ACCOUNT", Value: c.OpConfig.WALAZStorageAccount})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.OpConfig.GCPCredentials != "" {
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.GCPCredentials})
}

if c.OpConfig.LogS3Bucket != "" {
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "LOG_S3_BUCKET", Value: c.OpConfig.LogS3Bucket})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
	opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_PREFIX", Value: ""})
}

envVars = appendEnvVars(envVars, opConfigEnvVars...)

return envVars
}

func appendEnvVars(envs []v1.EnvVar, appEnv ...v1.EnvVar) []v1.EnvVar {
	collectedEnvs := envs
	for _, env := range appEnv {
		env.Name = strings.ToUpper(env.Name)
		if !isEnvVarPresent(collectedEnvs, env.Name) {
			collectedEnvs = append(collectedEnvs, env)
		}
	}
	return collectedEnvs
}
func isEnvVarPresent(envs []v1.EnvVar, key string) bool {
@ -963,9 +980,11 @@ func (c *Cluster) getPodEnvironmentConfigMapVariables() ([]v1.EnvVar, error) {
		return nil, fmt.Errorf("could not read PodEnvironmentConfigMap: %v", err)
	}
}

for k, v := range cm.Data {
	configMapPodEnvVarsList = append(configMapPodEnvVarsList, v1.EnvVar{Name: k, Value: v})
}
sort.Slice(configMapPodEnvVarsList, func(i, j int) bool { return configMapPodEnvVarsList[i].Name < configMapPodEnvVarsList[j].Name })

return configMapPodEnvVarsList, nil
}
@ -1015,6 +1034,7 @@ func (c *Cluster) getPodEnvironmentSecretVariables() ([]v1.EnvVar, error) {
	}})
}

sort.Slice(secretPodEnvVarsList, func(i, j int) bool { return secretPodEnvVarsList[i].Name < secretPodEnvVarsList[j].Name })

return secretPodEnvVarsList, nil
}
@ -1104,23 +1124,6 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
	initContainers = spec.InitContainers
}

// backward compatible check for InitContainers
if spec.InitContainersOld != nil {
	msg := "manifest parameter init_containers is deprecated."
@ -1153,9 +1156,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
	c.Postgresql.GetUID(),
	spiloConfiguration,
	spec.Clone,
	spec.StandbyCluster)

// pickup the docker image for the spilo container
effectiveDockerImage := util.Coalesce(spec.DockerImage, c.OpConfig.DockerImage)
@ -1297,7 +1298,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
sidecarContainers, conflicts := mergeContainers(clusterSpecificSidecars, c.Config.OpConfig.SidecarContainers, globalSidecarContainersByDockerImage, scalyrSidecars)
for containerName := range conflicts {
	c.logger.Warningf("a sidecar is specified twice. Ignoring sidecar %q in favor of %q with a higher precedence",
		containerName, containerName)
}
@ -1819,6 +1820,7 @@ func (c *Cluster) generateCloneEnvironment(description *acidv1.CloneDescription)
cluster := description.ClusterName
result = append(result, v1.EnvVar{Name: "CLONE_SCOPE", Value: cluster})

if description.EndTimestamp == "" {
	c.logger.Infof("cloning with basebackup from %s", cluster)
	// cloning with basebackup, make a connection string to the cluster to clone from
	host, port := c.getClusterServiceConnectionParameters(cluster)
	// TODO: make some/all of those constants
@ -1840,67 +1842,47 @@ func (c *Cluster) generateCloneEnvironment(description *acidv1.CloneDescription)
		},
	})
} else {
	c.logger.Info("cloning from WAL location")
	if description.S3WalPath == "" {
		c.logger.Info("no S3 WAL path defined - taking value from global config", description.S3WalPath)

		if c.OpConfig.WALES3Bucket != "" {
			c.logger.Debugf("found WALES3Bucket %s - will set CLONE_WAL_S3_BUCKET", c.OpConfig.WALES3Bucket)
			result = append(result, v1.EnvVar{Name: "CLONE_WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
		} else if c.OpConfig.WALGSBucket != "" {
			c.logger.Debugf("found WALGSBucket %s - will set CLONE_WAL_GS_BUCKET", c.OpConfig.WALGSBucket)
			result = append(result, v1.EnvVar{Name: "CLONE_WAL_GS_BUCKET", Value: c.OpConfig.WALGSBucket})
			if c.OpConfig.GCPCredentials != "" {
				result = append(result, v1.EnvVar{Name: "CLONE_GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.GCPCredentials})
			}
		} else if c.OpConfig.WALAZStorageAccount != "" {
			c.logger.Debugf("found WALAZStorageAccount %s - will set CLONE_AZURE_STORAGE_ACCOUNT", c.OpConfig.WALAZStorageAccount)
			result = append(result, v1.EnvVar{Name: "CLONE_AZURE_STORAGE_ACCOUNT", Value: c.OpConfig.WALAZStorageAccount})
		} else {
			c.logger.Error("cannot figure out S3 or GS bucket or AZ storage account. All options are empty in the config.")
		}

		// append suffix because WAL location name is not the whole path
		result = append(result, v1.EnvVar{Name: "CLONE_WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(description.UID)})
	} else {
		c.logger.Debugf("use S3WalPath %s from the manifest", description.S3WalPath)

		envs := []v1.EnvVar{
			{
				Name:  "CLONE_WAL_BUCKET_SCOPE_SUFFIX",
				Value: "",
			},
		}
		result = append(result, envs...)
		result = append(result, v1.EnvVar{
			Name:  "CLONE_WALE_S3_PREFIX",
			Value: description.S3WalPath,
		})
	}

	result = append(result, v1.EnvVar{Name: "CLONE_METHOD", Value: "CLONE_WITH_WALE"})
	result = append(result, v1.EnvVar{Name: "CLONE_TARGET_TIME", Value: description.EndTimestamp})
	result = append(result, v1.EnvVar{Name: "CLONE_WAL_BUCKET_SCOPE_PREFIX", Value: ""})

	if description.S3Endpoint != "" {
		result = append(result, v1.EnvVar{Name: "CLONE_AWS_ENDPOINT", Value: description.S3Endpoint})
@ -1933,7 +1915,7 @@ func (c *Cluster) generateStandbyEnvironment(description *acidv1.StandbyDescript
result := make([]v1.EnvVar, 0)

if description.StandbyHost != "" {
	c.logger.Info("standby cluster streaming from remote primary")
	result = append(result, v1.EnvVar{
		Name:  "STANDBY_HOST",
		Value: description.StandbyHost,

@ -1945,30 +1927,20 @@ func (c *Cluster) generateStandbyEnvironment(description *acidv1.StandbyDescript

		})
	}
} else {
	c.logger.Info("standby cluster streaming from WAL location")
	if description.S3WalPath != "" {
		result = append(result, v1.EnvVar{
			Name:  "STANDBY_WALE_S3_PREFIX",
			Value: description.S3WalPath,
		})
	} else if description.GSWalPath != "" {
		result = append(result, v1.EnvVar{
			Name:  "STANDBY_WALE_GS_PREFIX",
			Value: description.GSWalPath,
		})
	} else {
		c.logger.Error("no WAL path specified in standby section")
		return result
	}

	result = append(result, v1.EnvVar{Name: "STANDBY_METHOD", Value: "STANDBY_WITH_WALE"})

File diff suppressed because it is too large.