refactor spilo env var generation (#1848)

* refactor spilo env generation
* enhance docs on env vars
* add unit test for appendEnvVar
Felix Kunde 2022-04-14 11:47:33 +02:00 committed by GitHub
parent 483bf624ee
commit eecd13169c
7 changed files with 1231 additions and 910 deletions


@@ -601,15 +601,39 @@ spec:
## Custom Pod Environment Variables
It is possible to configure a ConfigMap as well as a Secret which are used by
the Postgres pods as an additional provider for environment variables. One use
case is a customized Spilo image configured by extra environment variables.
Another case could be to provide custom cloud provider or backup settings.
The operator will assign a set of environment variables to the database pods
that cannot be overridden to guarantee core functionality. Only variables with
'WAL_' and 'LOG_' prefixes can be customized to allow for backup and log
shipping to be specified differently. There are three ways to specify extra
environment variables (or override existing ones) for database pods:
In general the Operator will give preference to the globally configured
variables, to not have the custom ones interfere with core functionality.
Variables with the 'WAL_' and 'LOG_' prefix can be overwritten though, to
allow backup and log shipping to be specified differently.
* [Via ConfigMap](#via-configmap)
* [Via Secret](#via-secret)
* [Via Postgres Cluster Manifest](#via-postgres-cluster-manifest)
The first two options must be referenced from the operator configuration,
making them global settings for all Postgres clusters the operator watches.
One use case is a customized Spilo image that must be configured by extra
environment variables. Another case could be to provide custom cloud
provider or backup settings.
The last option allows for specifying environment variables individually for
every cluster via the `env` section in the manifest, for example, to use
individual backup locations for each of your clusters, or to disable WAL
archiving for a certain cluster by setting `WAL_S3_BUCKET`, `WAL_GS_BUCKET`
or `AZURE_STORAGE_ACCOUNT` to an empty string.
The operator will give precedence to environment variables in the following
order (e.g. a variable defined in 4. overrides a variable with the same name
in 5.); a short sketch after the list illustrates this:
1. Assigned by the operator
2. Clone section (with WAL settings from operator config when `s3_wal_path` is empty)
3. Standby section
4. `env` section in cluster manifest
5. Pod environment secret via operator config
6. Pod environment config map via operator config
7. WAL and logical backup settings from operator config
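To illustrate the precedence, a minimal sketch (names and values assumed): if
both the pod environment ConfigMap (6.) and the cluster manifest (4.) define
`WAL_S3_BUCKET`, the value from the manifest wins for that cluster.

```yaml
# pod environment ConfigMap referenced in the operator configuration (6.)
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-env-overrides
data:
  WAL_S3_BUCKET: "global-backup-bucket"
---
# cluster manifest (4.) - its env entry takes precedence for this cluster
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  env:
  - name: WAL_S3_BUCKET
    value: "cluster-specific-bucket"
```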
### Via ConfigMap
@@ -706,7 +730,7 @@ data:
The key-value pairs of the Secret are all accessible as environment variables
to the Postgres StatefulSet/pods.
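A minimal sketch of such a Secret (name and keys assumed):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-pod-secrets
  namespace: default
# stringData avoids manual base64 encoding; Kubernetes encodes it on creation
stringData:
  AWS_ACCESS_KEY_ID: "<key-id>"
  AWS_SECRET_ACCESS_KEY: "<secret-key>"
```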
### For individual cluster
### Via Postgres Cluster Manifest
It is possible to define environment variables directly in the Postgres cluster
manifest to configure it individually. The variables must be listed under the
`env` section in the cluster manifest.
@@ -951,6 +975,10 @@ When the `AWS_REGION` is set, `AWS_ENDPOINT` and `WALE_S3_ENDPOINT` are
generated automatically. `WALG_S3_PREFIX` is identical to `WALE_S3_PREFIX`.
`SCOPE` is the Postgres cluster name.
:warning: If both `AWS_REGION` and `AWS_ENDPOINT` or `WALE_S3_ENDPOINT` are
defined, backups with WAL-E will fail. You can fix this by switching to WAL-G
with `USE_WALG_BACKUP: "true"`.
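For example, the switch to WAL-G could be made through the pod environment
ConfigMap; a sketch (ConfigMap name assumed, and `USE_WALG_RESTORE` included
on the assumption that restores should use WAL-G as well):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
data:
  USE_WALG_BACKUP: "true"
  USE_WALG_RESTORE: "true"
```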
### Google Cloud Platform setup
To configure the operator on GCP, the following prerequisites are needed:


@@ -645,7 +645,10 @@ yet officially supported.
empty.
* **aws_region**
AWS region used to store EBS volumes. The default is `eu-central-1`.
AWS region used to store EBS volumes. The default is `eu-central-1`. Note,
this option is not meant for specifying the AWS region for backup and
restore, since it can differ from the EBS region. You have to define
`AWS_REGION` as a [custom environment variable](../administrator.md#custom-pod-environment-variables).
* **additional_secret_mount**
Additional Secret (aws or gcp credentials) to mount in the pod.


@@ -766,15 +766,15 @@ spec:
uid: "efd12e58-5786-11e8-b5a7-06148230260c"
cluster: "acid-minimal-cluster"
timestamp: "2017-12-19T12:40:33+01:00"
s3_wal_path: "s3://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
Here `cluster` is the name of the source cluster that is going to be cloned. A
new cluster will be cloned from S3, using the latest backup before the `timestamp`.
Note, that a time zone is required for `timestamp` in the format of +00:00 which
is UTC. You can specify the `s3_wal_path` of the source cluster or let the
operator try to find it based on the configured `wal_[s3|gs]_bucket` and the
specified `uid`. You can find the UID of the source cluster in its metadata:
Note, a time zone is required for `timestamp` in the format of `+00:00` (UTC).
The operator will try to find the WAL location based on the configured
`wal_[s3|gs]_bucket` or `wal_az_storage_account` and the specified `uid`.
You can find the UID of the source cluster in its metadata:
@@ -784,6 +784,14 @@ metadata:
```yaml
apiVersion: acid.zalan.do/v1
metadata:
  uid: efd12e58-5786-11e8-b5a7-06148230260c
```
If your source cluster uses a WAL location different from the global
configuration you can specify the full path under `s3_wal_path`. For
[Google Cloud Platform](administrator.md#google-cloud-platform-setup)
or [Azure](administrator.md#azure-setup)
it can only be set globally with [custom Pod environment variables](administrator.md#custom-pod-environment-variables)
or locally in the Postgres manifest's [`env`](administrator.md#via-postgres-cluster-manifest) section.
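As a sketch, such a per-cluster override could use the `CLONE_` variables
generated in this commit's code (variable choice and value are assumptions,
not documented API):

```yaml
spec:
  env:
  # assumed example: point the clone at the source cluster's GCS bucket
  - name: CLONE_WAL_GS_BUCKET
    value: "<gcs-bucket-of-source-cluster>"
```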
For non-AWS S3, the following settings can be set to support cloning from other
S3 implementations:
@@ -793,6 +801,7 @@ spec:
uid: "efd12e58-5786-11e8-b5a7-06148230260c"
cluster: "acid-minimal-cluster"
timestamp: "2017-12-19T12:40:33+01:00"
s3_wal_path: "s3://custom/path/to/bucket"
s3_endpoint: https://s3.acme.org
s3_access_key_id: 0123456789abcdef0123456789abcdef
s3_secret_access_key: 0123456789abcdef0123456789abcdef
@@ -864,9 +873,8 @@ the PostgreSQL version between source and target cluster has to be the same.
To start a cluster as standby, add the following `standby` section in the YAML
file. You can stream changes from archived WAL files (AWS S3 or Google Cloud
Storage) or from a remote primary where you specify the host address and port.
If you leave out the port, Patroni will use `"5432"`. Only one option can be
specified in the manifest:
Storage) or from a remote primary. Only one option can be specified in the
manifest:
@@ -874,12 +882,19 @@ spec:
```yaml
spec:
  standby:
    s3_wal_path: "s3://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
For GCS, you have to define `STANDBY_GOOGLE_APPLICATION_CREDENTIALS` as a
[custom pod environment variable](administrator.md#custom-pod-environment-variables).
It is not set from the config to allow for overriding; a sketch follows the
example below.
```yaml
spec:
  standby:
    gs_wal_path: "gs://<bucketname>/spilo/<source_db_cluster>/<UID>/wal/<PGVERSION>"
```
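As a sketch, the credentials variable could be supplied through the pod
environment ConfigMap (name and key path assumed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
data:
  # assumed path to a mounted GCP service account key inside the pod
  STANDBY_GOOGLE_APPLICATION_CREDENTIALS: "/var/secrets/google/key.json"
```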
For a remote primary, you specify the host address and optionally the port.
If you leave out the port, Patroni will use `"5432"`.
```yaml
spec:
  standby:
    standby_host: "<host-address>"
    standby_port: "<port>"
```

@@ -1318,7 +1318,7 @@ func (c *Cluster) initAdditionalOwnerRoles() {
}
}
if len(memberOf) > 1 {
if len(memberOf) > 0 {
namespace := c.Namespace
additionalOwnerPgUser := spec.PgUser{
Origin: spec.RoleOriginSpilo,


@@ -26,6 +26,8 @@ import (
const (
superUserName = "postgres"
replicationUserName = "standby"
exampleSpiloConfig = `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_connections":"100","max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`
spiloConfigDiff = `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`
)
var logger = logrus.New().WithField("test", "cluster")
@@ -957,7 +959,7 @@ func TestCompareEnv(t *testing.T) {
},
{
Name: "SPILO_CONFIGURATION",
Value: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_connections":"100","max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`,
Value: exampleSpiloConfig,
},
},
ExpectedResult: true,
@@ -978,7 +980,7 @@ func TestCompareEnv(t *testing.T) {
},
{
Name: "SPILO_CONFIGURATION",
Value: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`,
Value: spiloConfigDiff,
},
},
ExpectedResult: true,
@@ -999,7 +1001,7 @@ func TestCompareEnv(t *testing.T) {
},
{
Name: "SPILO_CONFIGURATION",
Value: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`,
Value: exampleSpiloConfig,
},
},
ExpectedResult: false,
@@ -1024,7 +1026,7 @@ func TestCompareEnv(t *testing.T) {
},
{
Name: "SPILO_CONFIGURATION",
Value: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_connections":"100","max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`,
Value: exampleSpiloConfig,
},
},
ExpectedResult: false,
@@ -1041,7 +1043,7 @@ func TestCompareEnv(t *testing.T) {
},
{
Name: "SPILO_CONFIGURATION",
Value: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/12/bin","parameters":{"autovacuum_analyze_scale_factor":"0.1"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"users":{"test":{"password":"","options":["CREATEDB","NOLOGIN"]}},"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"postgresql":{"parameters":{"max_connections":"100","max_locks_per_transaction":"64","max_worker_processes":"4"}}}}}`,
Value: exampleSpiloConfig,
},
},
ExpectedResult: false,


@@ -766,9 +766,10 @@ func (c *Cluster) generateSpiloPodEnvVars(
uid types.UID,
spiloConfiguration string,
cloneDescription *acidv1.CloneDescription,
standbyDescription *acidv1.StandbyDescription,
customPodEnvVarsList []v1.EnvVar) []v1.EnvVar {
standbyDescription *acidv1.StandbyDescription) []v1.EnvVar {
// hard-coded set of environment variables we need
// to guarantee core functionality of the operator
envVars := []v1.EnvVar{
{
Name: "SCOPE",
@@ -875,59 +876,75 @@ func (c *Cluster) generateSpiloPodEnvVars(
envVars = append(envVars, c.generateCloneEnvironment(cloneDescription)...)
}
if c.Spec.StandbyCluster != nil {
if standbyDescription != nil {
envVars = append(envVars, c.generateStandbyEnvironment(standbyDescription)...)
}
// fetch cluster-specific variables that will override all subsequent global variables
if len(c.Spec.Env) > 0 {
envVars = appendEnvVars(envVars, c.Spec.Env...)
}
// add vars taken from pod_environment_configmap and pod_environment_secret first
// (to allow them to override the globals set in the operator config)
if len(customPodEnvVarsList) > 0 {
envVars = appendEnvVars(envVars, customPodEnvVarsList...)
// fetch variables from custom environment Secret
// that will override all subsequent global variables
secretEnvVarsList, err := c.getPodEnvironmentSecretVariables()
if err != nil {
c.logger.Warningf("%v", err)
}
envVars = appendEnvVars(envVars, secretEnvVarsList...)
// fetch variables from custom environment ConfigMap
// that will override all subsequent global variables
configMapEnvVarsList, err := c.getPodEnvironmentConfigMapVariables()
if err != nil {
c.logger.Warningf("%v", err)
}
envVars = appendEnvVars(envVars, configMapEnvVarsList...)
// global variables derived from operator configuration
opConfigEnvVars := make([]v1.EnvVar, 0)
if c.OpConfig.WALES3Bucket != "" {
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}
if c.OpConfig.WALGSBucket != "" {
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_GS_BUCKET", Value: c.OpConfig.WALGSBucket})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_GS_BUCKET", Value: c.OpConfig.WALGSBucket})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}
if c.OpConfig.WALAZStorageAccount != "" {
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "AZURE_STORAGE_ACCOUNT", Value: c.OpConfig.WALAZStorageAccount})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "AZURE_STORAGE_ACCOUNT", Value: c.OpConfig.WALAZStorageAccount})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}
if c.OpConfig.GCPCredentials != "" {
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.GCPCredentials})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.GCPCredentials})
}
if c.OpConfig.LogS3Bucket != "" {
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "LOG_S3_BUCKET", Value: c.OpConfig.LogS3Bucket})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = appendEnvVars(envVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_PREFIX", Value: ""})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "LOG_S3_BUCKET", Value: c.OpConfig.LogS3Bucket})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
opConfigEnvVars = append(opConfigEnvVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_PREFIX", Value: ""})
}
envVars = appendEnvVars(envVars, opConfigEnvVars...)
return envVars
}
func appendEnvVars(envs []v1.EnvVar, appEnv ...v1.EnvVar) []v1.EnvVar {
jenvs := envs
collectedEnvs := envs
for _, env := range appEnv {
if !isEnvVarPresent(jenvs, env.Name) {
jenvs = append(jenvs, env)
env.Name = strings.ToUpper(env.Name)
if !isEnvVarPresent(collectedEnvs, env.Name) {
collectedEnvs = append(collectedEnvs, env)
}
}
return jenvs
return collectedEnvs
}
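The unit test for `appendEnvVar` mentioned in the commit message is presumably
part of the suppressed test file diff below. A minimal sketch of such a test,
derived from the behavior above (names are upper-cased, duplicates are
skipped), could look like:

```go
package cluster

import (
	"testing"

	v1 "k8s.io/api/core/v1"
)

func TestAppendEnvVars(t *testing.T) {
	existing := []v1.EnvVar{{Name: "WAL_S3_BUCKET", Value: "global-bucket"}}

	// "wal_s3_bucket" is upper-cased before the duplicate check, so it
	// collides with the existing variable and must not be appended
	result := appendEnvVars(existing,
		v1.EnvVar{Name: "wal_s3_bucket", Value: "cluster-bucket"},
		v1.EnvVar{Name: "CLONE_WAL_S3_BUCKET", Value: "clone-bucket"},
	)

	if len(result) != 2 {
		t.Errorf("expected 2 env vars, got %d", len(result))
	}
	if result[0].Value != "global-bucket" {
		t.Errorf("existing WAL_S3_BUCKET should win, got %q", result[0].Value)
	}
	if result[1].Name != "CLONE_WAL_S3_BUCKET" {
		t.Errorf("expected CLONE_WAL_S3_BUCKET to be appended, got %q", result[1].Name)
	}
}
```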
func isEnvVarPresent(envs []v1.EnvVar, key string) bool {
@@ -963,9 +980,11 @@ func (c *Cluster) getPodEnvironmentConfigMapVariables() ([]v1.EnvVar, error) {
return nil, fmt.Errorf("could not read PodEnvironmentConfigMap: %v", err)
}
}
for k, v := range cm.Data {
configMapPodEnvVarsList = append(configMapPodEnvVarsList, v1.EnvVar{Name: k, Value: v})
}
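// sort alphabetically for a deterministic order, since Go randomizes map iteration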
sort.Slice(configMapPodEnvVarsList, func(i, j int) bool { return configMapPodEnvVarsList[i].Name < configMapPodEnvVarsList[j].Name })
return configMapPodEnvVarsList, nil
}
@@ -1015,6 +1034,7 @@ func (c *Cluster) getPodEnvironmentSecretVariables() ([]v1.EnvVar, error) {
}})
}
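// same as for the ConfigMap variables: sort for a deterministic order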
sort.Slice(secretPodEnvVarsList, func(i, j int) bool { return secretPodEnvVarsList[i].Name < secretPodEnvVarsList[j].Name })
return secretPodEnvVarsList, nil
}
@@ -1104,23 +1124,6 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
initContainers = spec.InitContainers
}
// fetch env vars from custom ConfigMap
configMapEnvVarsList, err := c.getPodEnvironmentConfigMapVariables()
if err != nil {
return nil, err
}
// fetch env vars from custom ConfigMap
secretEnvVarsList, err := c.getPodEnvironmentSecretVariables()
if err != nil {
return nil, err
}
// concat all custom pod env vars and sort them
customPodEnvVarsList := append(configMapEnvVarsList, secretEnvVarsList...)
sort.Slice(customPodEnvVarsList,
func(i, j int) bool { return customPodEnvVarsList[i].Name < customPodEnvVarsList[j].Name })
// backward compatible check for InitContainers
if spec.InitContainersOld != nil {
msg := "manifest parameter init_containers is deprecated."
@@ -1153,9 +1156,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
c.Postgresql.GetUID(),
spiloConfiguration,
spec.Clone,
spec.StandbyCluster,
customPodEnvVarsList,
)
spec.StandbyCluster)
// pick up the docker image for the spilo container
effectiveDockerImage := util.Coalesce(spec.DockerImage, c.OpConfig.DockerImage)
@@ -1297,7 +1298,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
sidecarContainers, conflicts := mergeContainers(clusterSpecificSidecars, c.Config.OpConfig.SidecarContainers, globalSidecarContainersByDockerImage, scalyrSidecars)
for containerName := range conflicts {
c.logger.Warningf("a sidecar is specified twice. Ignoring sidecar %q in favor of %q with high a precendence",
c.logger.Warningf("a sidecar is specified twice. Ignoring sidecar %q in favor of %q with high a precedence",
containerName, containerName)
}
@@ -1819,6 +1820,7 @@ func (c *Cluster) generateCloneEnvironment(description *acidv1.CloneDescription)
cluster := description.ClusterName
result = append(result, v1.EnvVar{Name: "CLONE_SCOPE", Value: cluster})
if description.EndTimestamp == "" {
c.logger.Infof("cloning with basebackup from %s", cluster)
// cloning with basebackup, make a connection string to the cluster to clone from
host, port := c.getClusterServiceConnectionParameters(cluster)
// TODO: make some/all of those constants
@@ -1840,67 +1842,47 @@ func (c *Cluster) generateCloneEnvironment(description *acidv1.CloneDescription)
},
})
} else {
// cloning with S3, find out the bucket to clone
msg := "clone from S3 bucket"
c.logger.Info(msg, description.S3WalPath)
c.logger.Info("cloning from WAL location")
if description.S3WalPath == "" {
msg := "figure out which S3 bucket to use from env"
c.logger.Info(msg, description.S3WalPath)
c.logger.Info("no S3 WAL path defined - taking value from global config", description.S3WalPath)
if c.OpConfig.WALES3Bucket != "" {
envs := []v1.EnvVar{
{
Name: "CLONE_WAL_S3_BUCKET",
Value: c.OpConfig.WALES3Bucket,
},
}
result = append(result, envs...)
c.logger.Debugf("found WALES3Bucket %s - will set CLONE_WAL_S3_BUCKET", c.OpConfig.WALES3Bucket)
result = append(result, v1.EnvVar{Name: "CLONE_WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
} else if c.OpConfig.WALGSBucket != "" {
envs := []v1.EnvVar{
{
Name: "CLONE_WAL_GS_BUCKET",
Value: c.OpConfig.WALGSBucket,
},
{
Name: "CLONE_GOOGLE_APPLICATION_CREDENTIALS",
Value: c.OpConfig.GCPCredentials,
},
c.logger.Debugf("found WALGSBucket %s - will set CLONE_WAL_GS_BUCKET", c.OpConfig.WALGSBucket)
result = append(result, v1.EnvVar{Name: "CLONE_WAL_GS_BUCKET", Value: c.OpConfig.WALGSBucket})
if c.OpConfig.GCPCredentials != "" {
result = append(result, v1.EnvVar{Name: "CLONE_GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.GCPCredentials})
}
result = append(result, envs...)
} else if c.OpConfig.WALAZStorageAccount != "" {
envs := []v1.EnvVar{
{
Name: "CLONE_AZURE_STORAGE_ACCOUNT",
Value: c.OpConfig.WALAZStorageAccount,
},
}
result = append(result, envs...)
c.logger.Debugf("found WALAZStorageAccount %s - will set CLONE_AZURE_STORAGE_ACCOUNT", c.OpConfig.WALAZStorageAccount)
result = append(result, v1.EnvVar{Name: "CLONE_AZURE_STORAGE_ACCOUNT", Value: c.OpConfig.WALAZStorageAccount})
} else {
c.logger.Error("Cannot figure out S3 or GS bucket. Both are empty.")
c.logger.Error("cannot figure out S3 or GS bucket or AZ storage account. All options are empty in the config.")
}
// append suffix because WAL location name is not the whole path
result = append(result, v1.EnvVar{Name: "CLONE_WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(description.UID)})
} else {
c.logger.Debugf("use S3WalPath %s from the manifest", description.S3WalPath)
envs := []v1.EnvVar{
{
Name: "CLONE_WALE_S3_PREFIX",
Value: description.S3WalPath,
},
{
Name: "CLONE_WAL_BUCKET_SCOPE_SUFFIX",
Value: getBucketScopeSuffix(description.UID),
Value: "",
},
}
result = append(result, envs...)
} else {
msg := "use custom parsed S3WalPath %s from the manifest"
c.logger.Warningf(msg, description.S3WalPath)
result = append(result, v1.EnvVar{
Name: "CLONE_WALE_S3_PREFIX",
Value: description.S3WalPath,
})
}
result = append(result, v1.EnvVar{Name: "CLONE_METHOD", Value: "CLONE_WITH_WALE"})
result = append(result, v1.EnvVar{Name: "CLONE_TARGET_TIME", Value: description.EndTimestamp})
result = append(result, v1.EnvVar{Name: "CLONE_WAL_BUCKET_SCOPE_PREFIX", Value: ""})
if description.S3Endpoint != "" {
result = append(result, v1.EnvVar{Name: "CLONE_AWS_ENDPOINT", Value: description.S3Endpoint})
@@ -1933,7 +1915,7 @@ func (c *Cluster) generateStandbyEnvironment(description *acidv1.StandbyDescript
result := make([]v1.EnvVar, 0)
if description.StandbyHost != "" {
// standby from remote primary
c.logger.Info("standby cluster streaming from remote primary")
result = append(result, v1.EnvVar{
Name: "STANDBY_HOST",
Value: description.StandbyHost,
@@ -1945,30 +1927,20 @@ func (c *Cluster) generateStandbyEnvironment(description *acidv1.StandbyDescript
})
}
} else {
c.logger.Info("standby cluster streaming from WAL location")
if description.S3WalPath != "" {
// standby with S3, find out the bucket to setup standby
msg := "Standby from S3 bucket using custom parsed S3WalPath from the manifest %s "
c.logger.Infof(msg, description.S3WalPath)
result = append(result, v1.EnvVar{
Name: "STANDBY_WALE_S3_PREFIX",
Value: description.S3WalPath,
})
} else if description.GSWalPath != "" {
msg := "Standby from GS bucket using custom parsed GSWalPath from the manifest %s "
c.logger.Infof(msg, description.GSWalPath)
envs := []v1.EnvVar{
{
Name: "STANDBY_WALE_GS_PREFIX",
Value: description.GSWalPath,
},
{
Name: "STANDBY_GOOGLE_APPLICATION_CREDENTIALS",
Value: c.OpConfig.GCPCredentials,
},
}
result = append(result, envs...)
result = append(result, v1.EnvVar{
Name: "STANDBY_WALE_GS_PREFIX",
Value: description.GSWalPath,
})
} else {
c.logger.Error("no WAL path specified in standby section")
return result
}
result = append(result, v1.EnvVar{Name: "STANDBY_METHOD", Value: "STANDBY_WITH_WALE"})

File diff suppressed because it is too large