Allow pod environment variables to also be sourced from a secret (#946)

* Extend operator configuration to allow for a pod_environment_secret just like pod_environment_configmap

* Add all keys from PodEnvironmentSecrets as ENV vars (using SecretKeyRef to protect the value)

* Apply envVars from pod_environment_configmap and pod_environment_secret before applying the global settings from the operator config. This allows them to be overridden by the user (via ConfigMap / Secret)

* Add ability to use a Secret for custom pod envVars (via pod_environment_secret) to the admin documentation

* Add pod_environment_secret to Helm chart values.yaml

* Add unit tests for PodEnvironmentConfigMap and PodEnvironmentSecret - highly inspired by @kupson and his very similar PR #481

* Added new parameter pod_environment_secret to operatorconfig CRD and configmap examples

* Add pod_environment_secret to the operatorconfiguration CRD

Co-authored-by: Christian Rohmann <christian.rohmann@inovex.de>
Christian Rohmann 2020-07-30 10:48:16 +02:00 committed by GitHub
parent 102a353649
commit ece341d516
13 changed files with 394 additions and 55 deletions
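The override behavior described in the commit message (custom ConfigMap/Secret variables appended before the operator's globals, duplicates deduplicated first-wins, with WAL_/LOG_ overrides logged at info level) can be sketched outside Kubernetes with plain structs. This is an illustrative model, not the operator's actual types: `envVar` and `dedupEnvVars` stand in for `v1.EnvVar` and the operator's `deduplicateEnvVars`.

```go
package main

import (
	"fmt"
	"strings"
)

// envVar is a local stand-in for Kubernetes' v1.EnvVar.
type envVar struct{ Name, Value string }

// dedupEnvVars keeps the first occurrence of each name. Because the
// custom ConfigMap/Secret variables are appended before the operator's
// global settings, the custom values win; expected WAL_/LOG_ overrides
// are only reported at info level, everything else is a warning.
func dedupEnvVars(input []envVar) []envVar {
	seen := map[string]bool{}
	result := []envVar{}
	for _, v := range input {
		if seen[v.Name] {
			if strings.HasPrefix(v.Name, "WAL_") || strings.HasPrefix(v.Name, "LOG_") {
				fmt.Printf("info: global %q overwritten by configmap/secret\n", v.Name)
			} else {
				fmt.Printf("warning: %q defined more than once, ignored\n", v.Name)
			}
			continue
		}
		seen[v.Name] = true
		result = append(result, v)
	}
	return result
}

func main() {
	// custom vars come first, operator globals second
	merged := dedupEnvVars([]envVar{
		{"WAL_S3_BUCKET", "my-bucket"}, // from pod_environment_configmap / secret
		{"WAL_S3_BUCKET", "global"},    // operator default, dropped with an info log
		{"PGVERSION", "12"},
	})
	for _, v := range merged {
		fmt.Println(v.Name, "=", v.Value)
	}
}
```

The design choice mirrored here: ordering plus first-wins deduplication gives user-supplied variables precedence without needing explicit per-variable merge rules.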


@@ -149,6 +149,8 @@ spec:
                 type: string
               pod_environment_configmap:
                 type: string
+              pod_environment_secret:
+                type: string
               pod_management_policy:
                 type: string
                 enum:


@@ -104,6 +104,8 @@ configKubernetes:
   pod_antiaffinity_topology_key: "kubernetes.io/hostname"
   # namespaced name of the ConfigMap with environment variables to populate on every pod
   # pod_environment_configmap: "default/my-custom-config"
+  # name of the Secret (in cluster namespace) with environment variables to populate on every pod
+  # pod_environment_secret: "my-custom-secret"
   # specify the pod management policy of stateful sets of Postgres clusters
   pod_management_policy: "ordered_ready"


@@ -95,6 +95,8 @@ configKubernetes:
   pod_antiaffinity_topology_key: "kubernetes.io/hostname"
   # namespaced name of the ConfigMap with environment variables to populate on every pod
   # pod_environment_configmap: "default/my-custom-config"
+  # name of the Secret (in cluster namespace) with environment variables to populate on every pod
+  # pod_environment_secret: "my-custom-secret"
   # specify the pod management policy of stateful sets of Postgres clusters
   pod_management_policy: "ordered_ready"


@@ -319,11 +319,18 @@ spec:
 ## Custom Pod Environment Variables
 
-It is possible to configure a ConfigMap which is used by the Postgres pods as
-an additional provider for environment variables. One use case is to customize
-the Spilo image and configure it with environment variables. The ConfigMap with
-the additional settings is referenced in the operator's main configuration.
+It is possible to configure a ConfigMap as well as a Secret which are used by the Postgres pods as
+an additional provider for environment variables. One use case is to customize
+the Spilo image and configure it with environment variables. Another case could be to provide custom
+cloud provider or backup settings.
+
+In general the Operator will give preference to the globally configured variables, to not have the custom
+ones interfere with core functionality. Variables with the 'WAL_' and 'LOG_' prefix can be overwritten though, to allow
+backup and logshipping to be specified differently.
+
+### Via ConfigMap
+
+The ConfigMap with the additional settings is referenced in the operator's main configuration.
 A namespace can be specified along with the name. If left out, the configured
 default namespace of your K8s client will be used and if the ConfigMap is not
 found there, the Postgres cluster's namespace is taken when different:
@@ -365,7 +372,54 @@ data:
   MY_CUSTOM_VAR: value
 ```
 
-This ConfigMap is then added as a source of environment variables to the
+The key-value pairs of the ConfigMap are then added as environment variables to the
+Postgres StatefulSet/pods.
+
+### Via Secret
+
+The Secret with the additional variables is referenced in the operator's main configuration.
+To protect the values of the secret from being exposed in the pod spec they are each referenced
+as SecretKeyRef.
+This does not allow the secret to be in a different namespace than the pods though.
+
+**postgres-operator ConfigMap**
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: postgres-operator
+data:
+  # referencing secret with custom environment variables
+  pod_environment_secret: postgres-pod-secrets
+```
+
+**OperatorConfiguration**
+
+```yaml
+apiVersion: "acid.zalan.do/v1"
+kind: OperatorConfiguration
+metadata:
+  name: postgresql-operator-configuration
+configuration:
+  kubernetes:
+    # referencing secret with custom environment variables
+    pod_environment_secret: postgres-pod-secrets
+```
+
+**referenced Secret `postgres-pod-secrets`**
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: postgres-pod-secrets
+  namespace: default
+data:
+  MY_CUSTOM_VAR: dmFsdWU=
+```
+
+The key-value pairs of the Secret are all accessible as environment variables to the
 Postgres StatefulSet/pods.
 
 ## Limiting the number of min and max instances in clusters
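The SecretKeyRef mechanism from the documentation above can be sketched with a self-contained model: only a reference to the Secret key, never the decoded value, lands in the pod spec. The struct types below are local stand-ins whose JSON field names mirror the core/v1 API shapes; `envVarsFromSecretKeys` is an illustrative helper, not the operator's function.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-ins for the Kubernetes EnvVar / SecretKeySelector shapes;
// the JSON tags mirror the core/v1 serialization.
type secretKeySelector struct {
	Name string `json:"name"`
	Key  string `json:"key"`
}

type envVarSource struct {
	SecretKeyRef *secretKeySelector `json:"secretKeyRef,omitempty"`
}

type envVar struct {
	Name      string        `json:"name"`
	ValueFrom *envVarSource `json:"valueFrom,omitempty"`
}

// envVarsFromSecretKeys builds one secretKeyRef-based env var per Secret key,
// so the pod spec only carries the reference and the kubelet resolves the
// value at container start.
func envVarsFromSecretKeys(secretName string, keys []string) []envVar {
	vars := make([]envVar, 0, len(keys))
	for _, k := range keys {
		vars = append(vars, envVar{
			Name: k,
			ValueFrom: &envVarSource{
				SecretKeyRef: &secretKeySelector{Name: secretName, Key: k},
			},
		})
	}
	return vars
}

func main() {
	vars := envVarsFromSecretKeys("postgres-pod-secrets", []string{"MY_CUSTOM_VAR"})
	out, _ := json.MarshalIndent(vars, "", "  ")
	// prints the env entry as it would appear in the pod spec:
	// a secretKeyRef, with no plaintext value
	fmt.Println(string(out))
}
```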


@@ -74,6 +74,7 @@ data:
   # pod_antiaffinity_topology_key: "kubernetes.io/hostname"
   pod_deletion_wait_timeout: 10m
   # pod_environment_configmap: "default/my-custom-config"
+  # pod_environment_secret: "my-custom-secret"
   pod_label_wait_timeout: 10m
   pod_management_policy: "ordered_ready"
   pod_role_label: spilo-role


@@ -145,6 +145,8 @@ spec:
                 type: string
               pod_environment_configmap:
                 type: string
+              pod_environment_secret:
+                type: string
               pod_management_policy:
                 type: string
                 enum:


@@ -49,6 +49,7 @@ configuration:
     pdb_name_format: "postgres-{cluster}-pdb"
     pod_antiaffinity_topology_key: "kubernetes.io/hostname"
     # pod_environment_configmap: "default/my-custom-config"
+    # pod_environment_secret: "my-custom-secret"
     pod_management_policy: "ordered_ready"
     # pod_priority_class_name: ""
     pod_role_label: spilo-role


@@ -942,6 +942,9 @@ var OperatorConfigCRDResourceValidation = apiextv1beta1.CustomResourceValidation
 				"pod_environment_configmap": {
 					Type: "string",
 				},
+				"pod_environment_secret": {
+					Type: "string",
+				},
 				"pod_management_policy": {
 					Type: "string",
 					Enum: []apiextv1beta1.JSON{


@@ -70,6 +70,7 @@ type KubernetesMetaConfiguration struct {
 	// TODO: use a proper toleration structure?
 	PodToleration           map[string]string   `json:"toleration,omitempty"`
 	PodEnvironmentConfigMap spec.NamespacedName `json:"pod_environment_configmap,omitempty"`
+	PodEnvironmentSecret    string              `json:"pod_environment_secret,omitempty"`
 	PodPriorityClassName    string              `json:"pod_priority_class_name,omitempty"`
 	MasterPodMoveTimeout    Duration            `json:"master_pod_move_timeout,omitempty"`
 	EnablePodAntiAffinity   bool                `json:"enable_pod_antiaffinity,omitempty"`


@@ -7,6 +7,7 @@ import (
 	"path"
 	"sort"
 	"strconv"
+	"strings"
 
 	"github.com/sirupsen/logrus"
@@ -20,7 +21,6 @@ import (
 	acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
 	"github.com/zalando/postgres-operator/pkg/spec"
-	pkgspec "github.com/zalando/postgres-operator/pkg/spec"
 	"github.com/zalando/postgres-operator/pkg/util"
 	"github.com/zalando/postgres-operator/pkg/util/config"
 	"github.com/zalando/postgres-operator/pkg/util/constants"
@@ -715,6 +715,30 @@ func (c *Cluster) generateSpiloPodEnvVars(uid types.UID, spiloConfiguration stri
 		envVars = append(envVars, v1.EnvVar{Name: "SPILO_CONFIGURATION", Value: spiloConfiguration})
 	}
 
+	if c.patroniUsesKubernetes() {
+		envVars = append(envVars, v1.EnvVar{Name: "DCS_ENABLE_KUBERNETES_API", Value: "true"})
+	} else {
+		envVars = append(envVars, v1.EnvVar{Name: "ETCD_HOST", Value: c.OpConfig.EtcdHost})
+	}
+
+	if c.patroniKubernetesUseConfigMaps() {
+		envVars = append(envVars, v1.EnvVar{Name: "KUBERNETES_USE_CONFIGMAPS", Value: "true"})
+	}
+
+	if cloneDescription.ClusterName != "" {
+		envVars = append(envVars, c.generateCloneEnvironment(cloneDescription)...)
+	}
+
+	if c.Spec.StandbyCluster != nil {
+		envVars = append(envVars, c.generateStandbyEnvironment(standbyDescription)...)
+	}
+
+	// add vars taken from pod_environment_configmap and pod_environment_secret first
+	// (to allow them to override the globals set in the operator config)
+	if len(customPodEnvVarsList) > 0 {
+		envVars = append(envVars, customPodEnvVarsList...)
+	}
+
 	if c.OpConfig.WALES3Bucket != "" {
 		envVars = append(envVars, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
 		envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
@@ -737,28 +761,6 @@ func (c *Cluster) generateSpiloPodEnvVars(uid types.UID, spiloConfiguration stri
 		envVars = append(envVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_PREFIX", Value: ""})
 	}
 
-	if c.patroniUsesKubernetes() {
-		envVars = append(envVars, v1.EnvVar{Name: "DCS_ENABLE_KUBERNETES_API", Value: "true"})
-	} else {
-		envVars = append(envVars, v1.EnvVar{Name: "ETCD_HOST", Value: c.OpConfig.EtcdHost})
-	}
-
-	if c.patroniKubernetesUseConfigMaps() {
-		envVars = append(envVars, v1.EnvVar{Name: "KUBERNETES_USE_CONFIGMAPS", Value: "true"})
-	}
-
-	if cloneDescription.ClusterName != "" {
-		envVars = append(envVars, c.generateCloneEnvironment(cloneDescription)...)
-	}
-
-	if c.Spec.StandbyCluster != nil {
-		envVars = append(envVars, c.generateStandbyEnvironment(standbyDescription)...)
-	}
-
-	if len(customPodEnvVarsList) > 0 {
-		envVars = append(envVars, customPodEnvVarsList...)
-	}
-
 	return envVars
 }
@@ -777,13 +779,81 @@ func deduplicateEnvVars(input []v1.EnvVar, containerName string, logger *logrus.
 			result = append(result, input[i])
 		} else if names[va.Name] == 1 {
 			names[va.Name]++
+
+			// Some variables (those to configure the WAL_ and LOG_ shipping) may be overridden, only log as info
+			if strings.HasPrefix(va.Name, "WAL_") || strings.HasPrefix(va.Name, "LOG_") {
+				logger.Infof("global variable %q has been overwritten by configmap/secret for container %q",
+					va.Name, containerName)
+			} else {
 			logger.Warningf("variable %q is defined in %q more than once, the subsequent definitions are ignored",
 				va.Name, containerName)
+			}
 		}
 	}
 
 	return result
 }
 
+// Return list of variables the pod received from the configured ConfigMap
+func (c *Cluster) getPodEnvironmentConfigMapVariables() ([]v1.EnvVar, error) {
+	configMapPodEnvVarsList := make([]v1.EnvVar, 0)
+
+	if c.OpConfig.PodEnvironmentConfigMap.Name == "" {
+		return configMapPodEnvVarsList, nil
+	}
+
+	cm, err := c.KubeClient.ConfigMaps(c.OpConfig.PodEnvironmentConfigMap.Namespace).Get(
+		context.TODO(),
+		c.OpConfig.PodEnvironmentConfigMap.Name,
+		metav1.GetOptions{})
+	if err != nil {
+		// if not found, try again using the cluster's namespace if it's different (old behavior)
+		if k8sutil.ResourceNotFound(err) && c.Namespace != c.OpConfig.PodEnvironmentConfigMap.Namespace {
+			cm, err = c.KubeClient.ConfigMaps(c.Namespace).Get(
+				context.TODO(),
+				c.OpConfig.PodEnvironmentConfigMap.Name,
+				metav1.GetOptions{})
+		}
+		if err != nil {
+			return nil, fmt.Errorf("could not read PodEnvironmentConfigMap: %v", err)
+		}
+	}
+
+	for k, v := range cm.Data {
+		configMapPodEnvVarsList = append(configMapPodEnvVarsList, v1.EnvVar{Name: k, Value: v})
+	}
+	return configMapPodEnvVarsList, nil
+}
+
+// Return list of variables the pod received from the configured Secret
+func (c *Cluster) getPodEnvironmentSecretVariables() ([]v1.EnvVar, error) {
+	secretPodEnvVarsList := make([]v1.EnvVar, 0)
+
+	if c.OpConfig.PodEnvironmentSecret == "" {
+		return secretPodEnvVarsList, nil
+	}
+
+	// the Secret lives in the cluster's namespace
+	secret, err := c.KubeClient.Secrets(c.Namespace).Get(
+		context.TODO(),
+		c.OpConfig.PodEnvironmentSecret,
+		metav1.GetOptions{})
+	if err != nil {
+		return nil, fmt.Errorf("could not read Secret PodEnvironmentSecretName: %v", err)
+	}
+
+	for k := range secret.Data {
+		secretPodEnvVarsList = append(secretPodEnvVarsList,
+			v1.EnvVar{Name: k, ValueFrom: &v1.EnvVarSource{
+				SecretKeyRef: &v1.SecretKeySelector{
+					LocalObjectReference: v1.LocalObjectReference{
+						Name: c.OpConfig.PodEnvironmentSecret,
+					},
+					Key: k,
+				},
+			}})
+	}
+
+	return secretPodEnvVarsList, nil
+}
+
 func getSidecarContainer(sidecar acidv1.Sidecar, index int, resources *v1.ResourceRequirements) *v1.Container {
 	name := sidecar.Name
 	if name == "" {
@@ -943,32 +1013,23 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
 		initContainers = spec.InitContainers
 	}
 
-	customPodEnvVarsList := make([]v1.EnvVar, 0)
-
-	if c.OpConfig.PodEnvironmentConfigMap != (pkgspec.NamespacedName{}) {
-		var cm *v1.ConfigMap
-		cm, err = c.KubeClient.ConfigMaps(c.OpConfig.PodEnvironmentConfigMap.Namespace).Get(
-			context.TODO(),
-			c.OpConfig.PodEnvironmentConfigMap.Name,
-			metav1.GetOptions{})
-		if err != nil {
-			// if not found, try again using the cluster's namespace if it's different (old behavior)
-			if k8sutil.ResourceNotFound(err) && c.Namespace != c.OpConfig.PodEnvironmentConfigMap.Namespace {
-				cm, err = c.KubeClient.ConfigMaps(c.Namespace).Get(
-					context.TODO(),
-					c.OpConfig.PodEnvironmentConfigMap.Name,
-					metav1.GetOptions{})
-			}
-			if err != nil {
-				return nil, fmt.Errorf("could not read PodEnvironmentConfigMap: %v", err)
-			}
-		}
-		for k, v := range cm.Data {
-			customPodEnvVarsList = append(customPodEnvVarsList, v1.EnvVar{Name: k, Value: v})
-		}
-		sort.Slice(customPodEnvVarsList,
-			func(i, j int) bool { return customPodEnvVarsList[i].Name < customPodEnvVarsList[j].Name })
-	}
+	// fetch env vars from custom ConfigMap
+	configMapEnvVarsList, err := c.getPodEnvironmentConfigMapVariables()
+	if err != nil {
+		return nil, err
+	}
+
+	// fetch env vars from custom Secret
+	secretEnvVarsList, err := c.getPodEnvironmentSecretVariables()
+	if err != nil {
+		return nil, err
+	}
+
+	// concat all custom pod env vars and sort them
+	customPodEnvVarsList := append(configMapEnvVarsList, secretEnvVarsList...)
+	sort.Slice(customPodEnvVarsList,
+		func(i, j int) bool { return customPodEnvVarsList[i].Name < customPodEnvVarsList[j].Name })
 
 	if spec.StandbyCluster != nil && spec.StandbyCluster.S3WalPath == "" {
 		return nil, fmt.Errorf("s3_wal_path is empty for standby cluster")
 	}


@@ -1,6 +1,7 @@
 package cluster
 
 import (
+	"context"
 	"errors"
 	"fmt"
 	"reflect"
@@ -10,6 +11,7 @@ import (
 	"github.com/stretchr/testify/assert"
 	acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
+	"github.com/zalando/postgres-operator/pkg/spec"
 	"github.com/zalando/postgres-operator/pkg/util"
 	"github.com/zalando/postgres-operator/pkg/util/config"
 	"github.com/zalando/postgres-operator/pkg/util/constants"
@@ -22,6 +24,7 @@ import (
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/apimachinery/pkg/util/intstr"
+	v1core "k8s.io/client-go/kubernetes/typed/core/v1"
 )
 
 // For testing purposes
@@ -713,6 +716,211 @@ func TestSecretVolume(t *testing.T) {
 	}
 }
 
+const (
+	testPodEnvironmentConfigMapName = "pod_env_cm"
+	testPodEnvironmentSecretName    = "pod_env_sc"
+)
+
+type mockSecret struct {
+	v1core.SecretInterface
+}
+
+type mockConfigMap struct {
+	v1core.ConfigMapInterface
+}
+
+func (c *mockSecret) Get(ctx context.Context, name string, options metav1.GetOptions) (*v1.Secret, error) {
+	if name != testPodEnvironmentSecretName {
+		return nil, fmt.Errorf("Secret PodEnvironmentSecret not found")
+	}
+	secret := &v1.Secret{}
+	secret.Name = testPodEnvironmentSecretName
+	secret.Data = map[string][]byte{
+		"minio_access_key": []byte("alpha"),
+		"minio_secret_key": []byte("beta"),
+	}
+	return secret, nil
+}
+
+func (c *mockConfigMap) Get(ctx context.Context, name string, options metav1.GetOptions) (*v1.ConfigMap, error) {
+	if name != testPodEnvironmentConfigMapName {
+		return nil, fmt.Errorf("NotFound")
+	}
+	configmap := &v1.ConfigMap{}
+	configmap.Name = testPodEnvironmentConfigMapName
+	configmap.Data = map[string]string{
+		"foo": "bar",
+	}
+	return configmap, nil
+}
+
+type MockSecretGetter struct {
+}
+
+type MockConfigMapsGetter struct {
+}
+
+func (c *MockSecretGetter) Secrets(namespace string) v1core.SecretInterface {
+	return &mockSecret{}
+}
+
+func (c *MockConfigMapsGetter) ConfigMaps(namespace string) v1core.ConfigMapInterface {
+	return &mockConfigMap{}
+}
+
+func newMockKubernetesClient() k8sutil.KubernetesClient {
+	return k8sutil.KubernetesClient{
+		SecretsGetter:    &MockSecretGetter{},
+		ConfigMapsGetter: &MockConfigMapsGetter{},
+	}
+}
+
+func newMockCluster(opConfig config.Config) *Cluster {
+	cluster := &Cluster{
+		Config:     Config{OpConfig: opConfig},
+		KubeClient: newMockKubernetesClient(),
+	}
+	return cluster
+}
+
+func TestPodEnvironmentConfigMapVariables(t *testing.T) {
+	testName := "TestPodEnvironmentConfigMapVariables"
+	tests := []struct {
+		subTest  string
+		opConfig config.Config
+		envVars  []v1.EnvVar
+		err      error
+	}{
+		{
+			subTest: "no PodEnvironmentConfigMap",
+			envVars: []v1.EnvVar{},
+		},
+		{
+			subTest: "missing PodEnvironmentConfigMap",
+			opConfig: config.Config{
+				Resources: config.Resources{
+					PodEnvironmentConfigMap: spec.NamespacedName{
+						Name: "idonotexist",
+					},
+				},
+			},
+			err: fmt.Errorf("could not read PodEnvironmentConfigMap: NotFound"),
+		},
+		{
+			subTest: "simple PodEnvironmentConfigMap",
+			opConfig: config.Config{
+				Resources: config.Resources{
+					PodEnvironmentConfigMap: spec.NamespacedName{
+						Name: testPodEnvironmentConfigMapName,
+					},
+				},
+			},
+			envVars: []v1.EnvVar{
+				{
+					Name:  "foo",
+					Value: "bar",
+				},
+			},
+		},
+	}
+	for _, tt := range tests {
+		c := newMockCluster(tt.opConfig)
+		vars, err := c.getPodEnvironmentConfigMapVariables()
+		if !reflect.DeepEqual(vars, tt.envVars) {
+			t.Errorf("%s %s: expected `%v` but got `%v`",
+				testName, tt.subTest, tt.envVars, vars)
+		}
+		if tt.err != nil {
+			if err.Error() != tt.err.Error() {
+				t.Errorf("%s %s: expected error `%v` but got `%v`",
+					testName, tt.subTest, tt.err, err)
+			}
+		} else {
+			if err != nil {
+				t.Errorf("%s %s: expected no error but got error: `%v`",
+					testName, tt.subTest, err)
+			}
+		}
+	}
+}
+
+// Test if the keys of an existing secret are properly referenced
+func TestPodEnvironmentSecretVariables(t *testing.T) {
+	testName := "TestPodEnvironmentSecretVariables"
+	tests := []struct {
+		subTest  string
+		opConfig config.Config
+		envVars  []v1.EnvVar
+		err      error
+	}{
+		{
+			subTest: "No PodEnvironmentSecret configured",
+			envVars: []v1.EnvVar{},
+		},
+		{
+			subTest: "Secret referenced by PodEnvironmentSecret does not exist",
+			opConfig: config.Config{
+				Resources: config.Resources{
+					PodEnvironmentSecret: "idonotexist",
+				},
+			},
+			err: fmt.Errorf("could not read Secret PodEnvironmentSecretName: Secret PodEnvironmentSecret not found"),
+		},
+		{
+			subTest: "Pod environment vars reference all keys from secret configured by PodEnvironmentSecret",
+			opConfig: config.Config{
+				Resources: config.Resources{
+					PodEnvironmentSecret: testPodEnvironmentSecretName,
+				},
+			},
+			envVars: []v1.EnvVar{
+				{
+					Name: "minio_access_key",
+					ValueFrom: &v1.EnvVarSource{
+						SecretKeyRef: &v1.SecretKeySelector{
+							LocalObjectReference: v1.LocalObjectReference{
+								Name: testPodEnvironmentSecretName,
+							},
+							Key: "minio_access_key",
+						},
+					},
+				},
+				{
+					Name: "minio_secret_key",
+					ValueFrom: &v1.EnvVarSource{
+						SecretKeyRef: &v1.SecretKeySelector{
+							LocalObjectReference: v1.LocalObjectReference{
+								Name: testPodEnvironmentSecretName,
+							},
+							Key: "minio_secret_key",
+						},
+					},
+				},
+			},
+		},
+	}
+	for _, tt := range tests {
+		c := newMockCluster(tt.opConfig)
+		vars, err := c.getPodEnvironmentSecretVariables()
+		if !reflect.DeepEqual(vars, tt.envVars) {
+			t.Errorf("%s %s: expected `%v` but got `%v`",
+				testName, tt.subTest, tt.envVars, vars)
+		}
+		if tt.err != nil {
+			if err.Error() != tt.err.Error() {
+				t.Errorf("%s %s: expected error `%v` but got `%v`",
+					testName, tt.subTest, tt.err, err)
+			}
+		} else {
+			if err != nil {
+				t.Errorf("%s %s: expected no error but got error: `%v`",
+					testName, tt.subTest, err)
+			}
+		}
+	}
+}
+
 func testResources(cluster *Cluster, podSpec *v1.PodTemplateSpec) error {
 	cpuReq := podSpec.Spec.Containers[0].Resources.Requests["cpu"]
 	if cpuReq.String() != cluster.OpConfig.ConnectionPooler.ConnectionPoolerDefaultCPURequest {


@@ -58,6 +58,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
 	result.PodServiceAccountDefinition = fromCRD.Kubernetes.PodServiceAccountDefinition
 	result.PodServiceAccountRoleBindingDefinition = fromCRD.Kubernetes.PodServiceAccountRoleBindingDefinition
 	result.PodEnvironmentConfigMap = fromCRD.Kubernetes.PodEnvironmentConfigMap
+	result.PodEnvironmentSecret = fromCRD.Kubernetes.PodEnvironmentSecret
 	result.PodTerminateGracePeriod = util.CoalesceDuration(time.Duration(fromCRD.Kubernetes.PodTerminateGracePeriod), "5m")
 	result.SpiloPrivileged = fromCRD.Kubernetes.SpiloPrivileged
 	result.SpiloFSGroup = fromCRD.Kubernetes.SpiloFSGroup


@@ -45,6 +45,7 @@ type Resources struct {
 	MinCPULimit             string              `name:"min_cpu_limit" default:"250m"`
 	MinMemoryLimit          string              `name:"min_memory_limit" default:"250Mi"`
 	PodEnvironmentConfigMap spec.NamespacedName `name:"pod_environment_configmap"`
+	PodEnvironmentSecret    string              `name:"pod_environment_secret"`
 	NodeReadinessLabel      map[string]string   `name:"node_readiness_label" default:""`
 	MaxInstances            int32               `name:"max_instances" default:"-1"`
 	MinInstances            int32               `name:"min_instances" default:"-1"`