Fully speced global sidecars (#890)

* implement fully speced global sidecars

* fix issue #924
Björn Fischer 2020-04-27 17:40:22 +02:00 committed by GitHub
parent f32c615a53
commit 168abfe37b
15 changed files with 462 additions and 140 deletions


@@ -84,6 +84,12 @@ spec:
type: object
additionalProperties:
type: string
sidecars:
type: array
nullable: true
items:
type: object
additionalProperties: true
workers:
type: integer
minimum: 1


@@ -507,6 +507,33 @@ A secret can be pre-provisioned in different ways:
* Automatically provisioned via a custom K8s controller like
  [kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller)
## Sidecars for Postgres clusters
A list of sidecars is added to each cluster created by the
operator. The default is an empty list.
```yaml
kind: OperatorConfiguration
configuration:
  sidecars:
  - image: image:123
    name: global-sidecar
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /custom-pgdata-mountpoint
      name: pgdata
  - ...
```
In addition to any environment variables you specify, the following environment variables are always passed to sidecars:
- `POD_NAME` - field reference to `metadata.name`
- `POD_NAMESPACE` - field reference to `metadata.namespace`
- `POSTGRES_USER` - the superuser that can be used to connect to the database
- `POSTGRES_PASSWORD` - the password for the superuser
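For illustration, a minimal sketch of a globally defined sidecar that relies on these injected variables; the sidecar name, image, port, and the `PG_HOST`/`PG_PORT` variables are placeholders, and the container is assumed to read `POSTGRES_USER` and `POSTGRES_PASSWORD` from its environment at startup:
```yaml
kind: OperatorConfiguration
configuration:
  sidecars:
  - name: metrics-sidecar                       # placeholder name
    image: registry.example.com/pg-metrics:0.1  # placeholder image
    env:
    # PG_HOST and PG_PORT are assumptions about this hypothetical image;
    # POSTGRES_USER and POSTGRES_PASSWORD are injected by the operator.
    - name: PG_HOST
      value: "localhost"
    - name: PG_PORT
      value: "5432"
    ports:
    - containerPort: 9187                       # placeholder metrics port
```
Because the sidecar runs in the same pod as Spilo, the database is reachable on `localhost`.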
## Setting up the Postgres Operator UI
Since the v1.2 release the Postgres Operator is shipped with a browser-based


@@ -93,9 +93,17 @@ Those are top-level keys, containing both leaf keys and groups.
repository](https://github.com/zalando/spilo).
* **sidecar_docker_images**
*deprecated*: use **sidecars** instead. A map of sidecar names to Docker images to
run with Spilo. In case of a name conflict with the definition in the cluster
manifest, the cluster-specific one is preferred.
* **sidecars**
a list of sidecars to run with Spilo, for any cluster (i.e. globally defined sidecars).
Each item in the list is of type
[Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core).
Globally defined sidecars can be overwritten by specifying a sidecar in the custom resource with
the same name. Note: This field is not part of the schema validation. If the container specification
is invalid, then the operator fails to create the statefulset.
* **enable_shm_volume**
Instruct operator to start any new database pod without limitations on shm
@@ -133,8 +141,9 @@ Those are top-level keys, containing both leaf keys and groups.
at the cost of overprovisioning memory and potential scheduling problems for
containers with high memory limits due to the lack of memory on Kubernetes
cluster nodes. This affects all containers created by the operator (Postgres,
Scalyr sidecar, and other sidecars except **sidecars** defined in the operator
configuration); to set resources for the operator's own container, change the
[operator deployment manually](../../manifests/postgres-operator.yaml#L20).
The default is `false`.
## Postgres users
@@ -206,12 +215,12 @@ configuration they are grouped under the `kubernetes` key.
Default is true.
* **enable_init_containers**
global option to allow for creating init containers in the cluster manifest to
run actions before Spilo is started. Default is true.
* **enable_sidecars**
global option to allow for creating sidecar containers in the cluster manifest
to run alongside Spilo on the same pod. Globally defined sidecars are always
enabled. Default is true.
* **secret_name_template**
a template for the name of the database user secrets generated by the


@@ -442,6 +442,8 @@ The PostgreSQL volume is shared with sidecars and is mounted at
specified but globally disabled in the configuration. The `enable_sidecars`
option must be set to `true`.
If you want to add a sidecar to every cluster managed by the operator, you can specify it in the [operator configuration](administrator.md#sidecars-for-postgres-clusters) instead.
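As an illustrative sketch of the precedence rule (cluster name, team, and image are placeholders): if the operator configuration defines a sidecar named `global-sidecar`, a cluster manifest entry with the same name replaces it for that cluster only.
```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-example-cluster                    # placeholder cluster name
spec:
  teamId: "acid"                                # placeholder team
  sidecars:
  - name: global-sidecar                        # same name as a globally defined sidecar
    image: registry.example.com/custom-sidecar:1.0  # placeholder image; this definition takes precedence
```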
## InitContainers Support
Each cluster can specify arbitrary init containers to run. These containers can


@@ -60,6 +60,12 @@ spec:
type: object
additionalProperties:
type: string
sidecars:
type: array
nullable: true
items:
type: object
additionalProperties: true
workers:
type: integer
minimum: 1


@@ -13,8 +13,11 @@ configuration:
resync_period: 30m
repair_period: 5m
# set_memory_request_to_limit: false
# sidecars:
# - image: image:123
# name: global-sidecar-1
# ports:
# - containerPort: 80
workers: 4
users:
replication_username: standby


@@ -797,6 +797,17 @@ var OperatorConfigCRDResourceValidation = apiextv1beta1.CustomResourceValidation
},
},
},
"sidecars": {
Type: "array",
Items: &apiextv1beta1.JSONSchemaPropsOrArray{
Schema: &apiextv1beta1.JSONSchemaProps{
Type: "object",
AdditionalProperties: &apiextv1beta1.JSONSchemaPropsOrBool{
Allows: true,
},
},
},
},
"workers": { "workers": {
Type: "integer", Type: "integer",
Minimum: &min1, Minimum: &min1,


@@ -8,6 +8,7 @@ import (
"time"
"github.com/zalando/postgres-operator/pkg/spec"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
) )
@ -192,7 +193,9 @@ type OperatorConfigurationData struct {
RepairPeriod Duration `json:"repair_period,omitempty"` RepairPeriod Duration `json:"repair_period,omitempty"`
SetMemoryRequestToLimit bool `json:"set_memory_request_to_limit,omitempty"` SetMemoryRequestToLimit bool `json:"set_memory_request_to_limit,omitempty"`
ShmVolume *bool `json:"enable_shm_volume,omitempty"` ShmVolume *bool `json:"enable_shm_volume,omitempty"`
Sidecars map[string]string `json:"sidecar_docker_images,omitempty"` // deprecated in favour of SidecarContainers
SidecarImages map[string]string `json:"sidecar_docker_images,omitempty"`
SidecarContainers []v1.Container `json:"sidecars,omitempty"`
PostgresUsersConfiguration PostgresUsersConfiguration `json:"users"`
Kubernetes KubernetesMetaConfiguration `json:"kubernetes"`
PostgresPodResources PostgresPodResourcesDefaults `json:"postgres_pod_resources"`


@@ -312,13 +312,20 @@ func (in *OperatorConfigurationData) DeepCopyInto(out *OperatorConfigurationData
*out = new(bool)
**out = **in
}
if in.SidecarImages != nil {
in, out := &in.SidecarImages, &out.SidecarImages
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.SidecarContainers != nil {
in, out := &in.SidecarContainers, &out.SidecarContainers
*out = make([]corev1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
out.PostgresUsersConfiguration = in.PostgresUsersConfiguration
in.Kubernetes.DeepCopyInto(&out.Kubernetes)
out.PostgresPodResources = in.PostgresPodResources


@@ -462,8 +462,7 @@ func generateContainer(
}
func generateSidecarContainers(sidecars []acidv1.Sidecar,
defaultResources acidv1.Resources, startIndex int, logger *logrus.Entry) ([]v1.Container, error) {
if len(sidecars) > 0 {
result := make([]v1.Container, 0)
@@ -482,7 +481,7 @@ func generateSidecarContainers(sidecars []acidv1.Sidecar,
return nil, err
}
sc := getSidecarContainer(sidecar, startIndex+index, resources)
result = append(result, *sc)
}
return result, nil
@@ -490,6 +489,55 @@ func generateSidecarContainers(sidecars []acidv1.Sidecar,
return nil, nil
}
// adds common fields to sidecars
func patchSidecarContainers(in []v1.Container, volumeMounts []v1.VolumeMount, superUserName string, credentialsSecretName string, logger *logrus.Entry) []v1.Container {
result := []v1.Container{}
for _, container := range in {
container.VolumeMounts = append(container.VolumeMounts, volumeMounts...)
env := []v1.EnvVar{
{
Name: "POD_NAME",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
APIVersion: "v1",
FieldPath: "metadata.name",
},
},
},
{
Name: "POD_NAMESPACE",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
APIVersion: "v1",
FieldPath: "metadata.namespace",
},
},
},
{
Name: "POSTGRES_USER",
Value: superUserName,
},
{
Name: "POSTGRES_PASSWORD",
ValueFrom: &v1.EnvVarSource{
SecretKeyRef: &v1.SecretKeySelector{
LocalObjectReference: v1.LocalObjectReference{
Name: credentialsSecretName,
},
Key: "password",
},
},
},
}
mergedEnv := append(container.Env, env...)
container.Env = deduplicateEnvVars(mergedEnv, container.Name, logger)
result = append(result, container)
}
return result
}
// Check whether or not we're requested to mount an shm volume,
// taking into account that PostgreSQL manifest has precedence.
func mountShmVolumeNeeded(opConfig config.Config, spec *acidv1.PostgresSpec) *bool {
@@ -724,58 +772,18 @@ func deduplicateEnvVars(input []v1.EnvVar, containerName string, logger *logrus.
return result
}
func getSidecarContainer(sidecar acidv1.Sidecar, index int, resources *v1.ResourceRequirements) *v1.Container {
name := sidecar.Name
if name == "" {
name = fmt.Sprintf("sidecar-%d", index)
}
return &v1.Container{
Name: name,
Image: sidecar.DockerImage,
ImagePullPolicy: v1.PullIfNotPresent,
Resources: *resources,
Env: sidecar.Env,
Ports: sidecar.Ports,
}
}
@@ -1065,37 +1073,63 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
c.OpConfig.Resources.SpiloPrivileged,
)
// generate container specs for sidecars specified in the cluster manifest
clusterSpecificSidecars := []v1.Container{}
if spec.Sidecars != nil && len(spec.Sidecars) > 0 {
// warn if sidecars are defined, but globally disabled (does not apply to globally defined sidecars)
if c.OpConfig.EnableSidecars != nil && !(*c.OpConfig.EnableSidecars) {
c.logger.Warningf("sidecars specified but disabled in configuration - next statefulset creation would fail")
}
if clusterSpecificSidecars, err = generateSidecarContainers(spec.Sidecars, defaultResources, 0, c.logger); err != nil {
return nil, fmt.Errorf("could not generate sidecar containers: %v", err)
}
}
// deprecated way of providing global sidecars
var globalSidecarContainersByDockerImage []v1.Container
var globalSidecarsByDockerImage []acidv1.Sidecar
for name, dockerImage := range c.OpConfig.SidecarImages {
globalSidecarsByDockerImage = append(globalSidecarsByDockerImage, acidv1.Sidecar{Name: name, DockerImage: dockerImage})
}
if globalSidecarContainersByDockerImage, err = generateSidecarContainers(globalSidecarsByDockerImage, defaultResources, len(clusterSpecificSidecars), c.logger); err != nil {
return nil, fmt.Errorf("could not generate sidecar containers: %v", err)
}
// make the resulting list reproducible
// c.OpConfig.SidecarImages is unsorted by Golang definition
// .Name is unique
sort.Slice(globalSidecarContainersByDockerImage, func(i, j int) bool {
return globalSidecarContainersByDockerImage[i].Name < globalSidecarContainersByDockerImage[j].Name
})
// generate scalyr sidecar container
var scalyrSidecars []v1.Container
if scalyrSidecar, err :=
generateScalyrSidecarSpec(c.Name,
c.OpConfig.ScalyrAPIKey,
c.OpConfig.ScalyrServerURL,
c.OpConfig.ScalyrImage,
c.OpConfig.ScalyrCPURequest,
c.OpConfig.ScalyrMemoryRequest,
c.OpConfig.ScalyrCPULimit,
c.OpConfig.ScalyrMemoryLimit,
defaultResources,
c.logger); err != nil {
return nil, fmt.Errorf("could not generate Scalyr sidecar: %v", err)
} else {
if scalyrSidecar != nil {
scalyrSidecars = append(scalyrSidecars, *scalyrSidecar)
}
}
sidecarContainers, conflicts := mergeContainers(clusterSpecificSidecars, c.Config.OpConfig.SidecarContainers, globalSidecarContainersByDockerImage, scalyrSidecars)
for _, containerName := range conflicts {
c.logger.Warningf("a sidecar is specified twice. Ignoring sidecar %q in favor of %q with a higher precedence",
containerName, containerName)
}
sidecarContainers = patchSidecarContainers(sidecarContainers, volumeMounts, c.OpConfig.SuperUsername, c.credentialSecretName(c.OpConfig.SuperUsername), c.logger)
tolerationSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration)
effectivePodPriorityClassName := util.Coalesce(spec.PodPriorityClassName, c.OpConfig.PodPriorityClassName)
@@ -1188,17 +1222,25 @@ func (c *Cluster) generatePodAnnotations(spec *acidv1.PostgresSpec) map[string]s
}
func generateScalyrSidecarSpec(clusterName, APIKey, serverURL, dockerImage string,
scalyrCPURequest string, scalyrMemoryRequest string, scalyrCPULimit string, scalyrMemoryLimit string,
defaultResources acidv1.Resources, logger *logrus.Entry) (*v1.Container, error) {
if APIKey == "" || dockerImage == "" {
if APIKey == "" && dockerImage != "" {
logger.Warning("Not running Scalyr sidecar: SCALYR_API_KEY must be defined")
}
return nil, nil
}
resourcesScalyrSidecar := makeResources(
scalyrCPURequest,
scalyrMemoryRequest,
scalyrCPULimit,
scalyrMemoryLimit,
)
resourceRequirementsScalyrSidecar, err := generateResourceRequirements(resourcesScalyrSidecar, defaultResources)
if err != nil {
return nil, fmt.Errorf("invalid resources for Scalyr sidecar: %v", err)
}
env := []v1.EnvVar{
{
Name: "SCALYR_API_KEY",
Value: APIKey,
@@ -1207,38 +1249,17 @@ func generateScalyrSidecarSpec(clusterName, APIKey, serverURL, dockerImage strin
Name: "SCALYR_SERVER_HOST",
Value: clusterName,
},
}
if serverURL != "" {
env = append(env, v1.EnvVar{Name: "SCALYR_SERVER_URL", Value: serverURL})
}
return &v1.Container{
Name: "scalyr-sidecar",
Image: dockerImage,
Env: env,
ImagePullPolicy: v1.PullIfNotPresent,
Resources: *resourceRequirementsScalyrSidecar,
}, nil
}
func (c *Cluster) getNumberOfInstances(spec *acidv1.PostgresSpec) int32 {


@@ -18,6 +18,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
policyv1beta1 "k8s.io/api/policy/v1beta1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr" "k8s.io/apimachinery/pkg/util/intstr"
) )
@ -1206,3 +1207,201 @@ func TestAdditionalVolume(t *testing.T) {
} }
} }
} }
// inject sidecars through all available mechanisms and check the resulting container specs
func TestSidecars(t *testing.T) {
var err error
var spec acidv1.PostgresSpec
var cluster *Cluster
generateKubernetesResources := func(cpuRequest string, cpuLimit string, memoryRequest string, memoryLimit string) v1.ResourceRequirements {
parsedCPURequest, err := resource.ParseQuantity(cpuRequest)
assert.NoError(t, err)
parsedCPULimit, err := resource.ParseQuantity(cpuLimit)
assert.NoError(t, err)
parsedMemoryRequest, err := resource.ParseQuantity(memoryRequest)
assert.NoError(t, err)
parsedMemoryLimit, err := resource.ParseQuantity(memoryLimit)
assert.NoError(t, err)
return v1.ResourceRequirements{
Requests: v1.ResourceList{
v1.ResourceCPU: parsedCPURequest,
v1.ResourceMemory: parsedMemoryRequest,
},
Limits: v1.ResourceList{
v1.ResourceCPU: parsedCPULimit,
v1.ResourceMemory: parsedMemoryLimit,
},
}
}
spec = acidv1.PostgresSpec{
TeamID: "myapp", NumberOfInstances: 1,
Resources: acidv1.Resources{
ResourceRequests: acidv1.ResourceDescription{CPU: "1", Memory: "10"},
ResourceLimits: acidv1.ResourceDescription{CPU: "1", Memory: "10"},
},
Volume: acidv1.Volume{
Size: "1G",
},
Sidecars: []acidv1.Sidecar{
acidv1.Sidecar{
Name: "cluster-specific-sidecar",
},
acidv1.Sidecar{
Name: "cluster-specific-sidecar-with-resources",
Resources: acidv1.Resources{
ResourceRequests: acidv1.ResourceDescription{CPU: "210m", Memory: "0.8Gi"},
ResourceLimits: acidv1.ResourceDescription{CPU: "510m", Memory: "1.4Gi"},
},
},
acidv1.Sidecar{
Name: "replace-sidecar",
DockerImage: "overwrite-image",
},
},
}
cluster = New(
Config{
OpConfig: config.Config{
PodManagementPolicy: "ordered_ready",
ProtectedRoles: []string{"admin"},
Auth: config.Auth{
SuperUsername: superUserName,
ReplicationUsername: replicationUserName,
},
Resources: config.Resources{
DefaultCPURequest: "200m",
DefaultCPULimit: "500m",
DefaultMemoryRequest: "0.7Gi",
DefaultMemoryLimit: "1.3Gi",
},
SidecarImages: map[string]string{
"deprecated-global-sidecar": "image:123",
},
SidecarContainers: []v1.Container{
v1.Container{
Name: "global-sidecar",
},
// will be replaced by a cluster specific sidecar with the same name
v1.Container{
Name: "replace-sidecar",
Image: "replaced-image",
},
},
Scalyr: config.Scalyr{
ScalyrAPIKey: "abc",
ScalyrImage: "scalyr-image",
ScalyrCPURequest: "220m",
ScalyrCPULimit: "520m",
ScalyrMemoryRequest: "0.9Gi",
// use default memory limit
},
},
}, k8sutil.KubernetesClient{}, acidv1.Postgresql{}, logger)
s, err := cluster.generateStatefulSet(&spec)
assert.NoError(t, err)
env := []v1.EnvVar{
{
Name: "POD_NAME",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
APIVersion: "v1",
FieldPath: "metadata.name",
},
},
},
{
Name: "POD_NAMESPACE",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
APIVersion: "v1",
FieldPath: "metadata.namespace",
},
},
},
{
Name: "POSTGRES_USER",
Value: superUserName,
},
{
Name: "POSTGRES_PASSWORD",
ValueFrom: &v1.EnvVarSource{
SecretKeyRef: &v1.SecretKeySelector{
LocalObjectReference: v1.LocalObjectReference{
Name: "",
},
Key: "password",
},
},
},
}
mounts := []v1.VolumeMount{
v1.VolumeMount{
Name: "pgdata",
MountPath: "/home/postgres/pgdata",
},
}
// deduplicated sidecars and Patroni
assert.Equal(t, 7, len(s.Spec.Template.Spec.Containers), "wrong number of containers")
// cluster specific sidecar
assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
Name: "cluster-specific-sidecar",
Env: env,
Resources: generateKubernetesResources("200m", "500m", "0.7Gi", "1.3Gi"),
ImagePullPolicy: v1.PullIfNotPresent,
VolumeMounts: mounts,
})
// container specific resources
expectedResources := generateKubernetesResources("210m", "510m", "0.8Gi", "1.4Gi")
assert.Equal(t, expectedResources.Requests[v1.ResourceCPU], s.Spec.Template.Spec.Containers[2].Resources.Requests[v1.ResourceCPU])
assert.Equal(t, expectedResources.Limits[v1.ResourceCPU], s.Spec.Template.Spec.Containers[2].Resources.Limits[v1.ResourceCPU])
assert.Equal(t, expectedResources.Requests[v1.ResourceMemory], s.Spec.Template.Spec.Containers[2].Resources.Requests[v1.ResourceMemory])
assert.Equal(t, expectedResources.Limits[v1.ResourceMemory], s.Spec.Template.Spec.Containers[2].Resources.Limits[v1.ResourceMemory])
// deprecated global sidecar
assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
Name: "deprecated-global-sidecar",
Image: "image:123",
Env: env,
Resources: generateKubernetesResources("200m", "500m", "0.7Gi", "1.3Gi"),
ImagePullPolicy: v1.PullIfNotPresent,
VolumeMounts: mounts,
})
// global sidecar
assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
Name: "global-sidecar",
Env: env,
VolumeMounts: mounts,
})
// replaced sidecar
assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
Name: "replace-sidecar",
Image: "overwrite-image",
Resources: generateKubernetesResources("200m", "500m", "0.7Gi", "1.3Gi"),
ImagePullPolicy: v1.PullIfNotPresent,
Env: env,
VolumeMounts: mounts,
})
// scalyr sidecar
// the order in env is important
scalyrEnv := append([]v1.EnvVar{v1.EnvVar{Name: "SCALYR_API_KEY", Value: "abc"}, v1.EnvVar{Name: "SCALYR_SERVER_HOST", Value: ""}}, env...)
assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
Name: "scalyr-sidecar",
Image: "scalyr-image",
Resources: generateKubernetesResources("220m", "520m", "0.9Gi", "1.3Gi"),
ImagePullPolicy: v1.PullIfNotPresent,
Env: scalyrEnv,
VolumeMounts: mounts,
})
}


@@ -530,3 +530,22 @@ func (c *Cluster) needConnectionPoolerWorker(spec *acidv1.PostgresSpec) bool {
func (c *Cluster) needConnectionPooler() bool {
return c.needConnectionPoolerWorker(&c.Spec)
}
// Earlier arguments take priority
func mergeContainers(containers ...[]v1.Container) ([]v1.Container, []string) {
containerNameTaken := map[string]bool{}
result := make([]v1.Container, 0)
conflicts := make([]string, 0)
for _, containerArray := range containers {
for _, container := range containerArray {
if _, taken := containerNameTaken[container.Name]; taken {
conflicts = append(conflicts, container.Name)
} else {
containerNameTaken[container.Name] = true
result = append(result, container)
}
}
}
return result, conflicts
}


@@ -178,6 +178,11 @@ func (c *Controller) warnOnDeprecatedOperatorParameters() {
c.logger.Warningf("Operator configuration parameter 'enable_load_balancer' is deprecated and takes no effect. " +
"Consider using the 'enable_master_load_balancer' or 'enable_replica_load_balancer' instead.")
}
if len(c.opConfig.SidecarImages) > 0 {
c.logger.Warningf("Operator configuration parameter 'sidecar_docker_images' is deprecated. " +
"Consider using 'sidecars' instead.")
}
}
func (c *Controller) initPodServiceAccount() {


@@ -44,7 +44,8 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
result.RepairPeriod = time.Duration(fromCRD.RepairPeriod)
result.SetMemoryRequestToLimit = fromCRD.SetMemoryRequestToLimit
result.ShmVolume = fromCRD.ShmVolume
result.SidecarImages = fromCRD.SidecarImages
result.SidecarContainers = fromCRD.SidecarContainers
// user config
result.SuperUsername = fromCRD.PostgresUsersConfiguration.SuperUsername


@@ -9,6 +9,7 @@ import (
"github.com/zalando/postgres-operator/pkg/spec"
"github.com/zalando/postgres-operator/pkg/util/constants"
v1 "k8s.io/api/core/v1"
)
// CRD describes CustomResourceDefinition specific configuration parameters
@@ -111,7 +112,9 @@ type Config struct {
KubernetesUseConfigMaps bool `name:"kubernetes_use_configmaps" default:"false"`
EtcdHost string `name:"etcd_host" default:""` // special values: the empty string "" means Patroni will use K8s as a DCS
DockerImage string `name:"docker_image" default:"registry.opensource.zalan.do/acid/spilo-12:1.6-p2"`
// deprecated in favour of SidecarContainers
SidecarImages map[string]string `name:"sidecar_docker_images"`
SidecarContainers []v1.Container `name:"sidecars"`
PodServiceAccountName string `name:"pod_service_account_name" default:"postgres-pod"`
// value of this string must be valid JSON or YAML; see initPodServiceAccount
PodServiceAccountDefinition string `name:"pod_service_account_definition" default:""`