Fully speced global sidecars (#890)

* implement fully speced global sidecars
* fix issue #924

Parent: f32c615a53
Commit: 168abfe37b
@@ -84,6 +84,12 @@ spec:
             type: object
             additionalProperties:
               type: string
+          sidecars:
+            type: array
+            nullable: true
+            items:
+              type: object
+              additionalProperties: true
           workers:
             type: integer
             minimum: 1
@@ -507,6 +507,33 @@ A secret can be pre-provisioned in different ways:
 * Automatically provisioned via a custom K8s controller like
   [kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller)
 
+## Sidecars for Postgres clusters
+
+A list of sidecars is added to each cluster created by the
+operator. The default is an empty list.
+
+```yaml
+kind: OperatorConfiguration
+configuration:
+  sidecars:
+    - image: image:123
+      name: global-sidecar
+      ports:
+      - containerPort: 80
+      volumeMounts:
+      - mountPath: /custom-pgdata-mountpoint
+        name: pgdata
+    - ...
+```
+
+In addition to any environment variables you specify, the following environment
+variables are always passed to sidecars:
+
+- `POD_NAME` - field reference to `metadata.name`
+- `POD_NAMESPACE` - field reference to `metadata.namespace`
+- `POSTGRES_USER` - the superuser that can be used to connect to the database
+- `POSTGRES_PASSWORD` - the password for the superuser
+
 ## Setting up the Postgres Operator UI
 
 Since the v1.2 release the Postgres Operator is shipped with a browser-based
@@ -93,9 +93,17 @@ Those are top-level keys, containing both leaf keys and groups.
   repository](https://github.com/zalando/spilo).
 
 * **sidecar_docker_images**
-  a map of sidecar names to Docker images to run with Spilo. In case of a name
-  conflict with the definition in the cluster manifest the cluster-specific one
-  is preferred.
+  *deprecated*: use **sidecars** instead. A map of sidecar names to Docker images to
+  run with Spilo. In case of a name conflict with the definition in the cluster
+  manifest the cluster-specific one is preferred.
+
+* **sidecars**
+  a list of sidecars to run with Spilo, for any cluster (i.e. globally defined sidecars).
+  Each item in the list is of type
+  [Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core).
+  Globally defined sidecars can be overwritten by specifying a sidecar in the custom
+  resource with the same name. Note: this field is not part of the schema validation.
+  If the container specification is invalid, the operator fails to create the statefulset.
 
 * **enable_shm_volume**
   Instruct operator to start any new database pod without limitations on shm
@@ -133,8 +141,9 @@ Those are top-level keys, containing both leaf keys and groups.
   at the cost of overprovisioning memory and potential scheduling problems for
   containers with high memory limits due to the lack of memory on Kubernetes
   cluster nodes. This affects all containers created by the operator (Postgres,
-  Scalyr sidecar, and other sidecars); to set resources for the operator's own
-  container, change the [operator deployment manually](../../manifests/postgres-operator.yaml#L20).
+  Scalyr sidecar, and other sidecars except **sidecars** defined in the operator
+  configuration); to set resources for the operator's own container, change the
+  [operator deployment manually](../../manifests/postgres-operator.yaml#L20).
   The default is `false`.
 
 ## Postgres users
@@ -206,12 +215,12 @@ configuration they are grouped under the `kubernetes` key.
   Default is true.
 
 * **enable_init_containers**
-  global option to allow for creating init containers to run actions before
-  Spilo is started. Default is true.
+  global option to allow for creating init containers in the cluster manifest to
+  run actions before Spilo is started. Default is true.
 
 * **enable_sidecars**
-  global option to allow for creating sidecar containers to run alongside Spilo
-  on the same pod. Default is true.
+  global option to allow for creating sidecar containers in the cluster manifest
+  to run alongside Spilo on the same pod. Globally defined sidecars are always enabled. Default is true.
 
 * **secret_name_template**
   a template for the name of the database user secrets generated by the
@@ -442,6 +442,8 @@ The PostgreSQL volume is shared with sidecars and is mounted at
 specified but globally disabled in the configuration. The `enable_sidecars`
 option must be set to `true`.
 
+If you want to add a sidecar to every cluster managed by the operator, you can specify it in the [operator configuration](administrator.md#sidecars-for-postgres-clusters) instead.
+
 ## InitContainers Support
 
 Each cluster can specify arbitrary init containers to run. These containers can
@@ -60,6 +60,12 @@ spec:
             type: object
             additionalProperties:
               type: string
+          sidecars:
+            type: array
+            nullable: true
+            items:
+              type: object
+              additionalProperties: true
           workers:
             type: integer
             minimum: 1
@@ -13,8 +13,11 @@ configuration:
   resync_period: 30m
   repair_period: 5m
   # set_memory_request_to_limit: false
-  # sidecar_docker_images:
-  #   example: "exampleimage:exampletag"
+  # sidecars:
+  # - image: image:123
+  #   name: global-sidecar-1
+  #   ports:
+  #   - containerPort: 80
   workers: 4
   users:
     replication_username: standby
@@ -797,6 +797,17 @@ var OperatorConfigCRDResourceValidation = apiextv1beta1.CustomResourceValidation
 						},
 					},
 				},
+				"sidecars": {
+					Type: "array",
+					Items: &apiextv1beta1.JSONSchemaPropsOrArray{
+						Schema: &apiextv1beta1.JSONSchemaProps{
+							Type: "object",
+							AdditionalProperties: &apiextv1beta1.JSONSchemaPropsOrBool{
+								Allows: true,
+							},
+						},
+					},
+				},
 				"workers": {
 					Type:    "integer",
 					Minimum: &min1,
@@ -8,6 +8,7 @@ import (
 	"time"
 
 	"github.com/zalando/postgres-operator/pkg/spec"
+	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )
@@ -192,7 +193,9 @@ type OperatorConfigurationData struct {
 	RepairPeriod               Duration                     `json:"repair_period,omitempty"`
 	SetMemoryRequestToLimit    bool                         `json:"set_memory_request_to_limit,omitempty"`
 	ShmVolume                  *bool                        `json:"enable_shm_volume,omitempty"`
-	Sidecars                   map[string]string            `json:"sidecar_docker_images,omitempty"`
+	// deprecated in favour of SidecarContainers
+	SidecarImages              map[string]string            `json:"sidecar_docker_images,omitempty"`
+	SidecarContainers          []v1.Container               `json:"sidecars,omitempty"`
 	PostgresUsersConfiguration PostgresUsersConfiguration   `json:"users"`
 	Kubernetes                 KubernetesMetaConfiguration  `json:"kubernetes"`
 	PostgresPodResources       PostgresPodResourcesDefaults `json:"postgres_pod_resources"`
@@ -312,13 +312,20 @@ func (in *OperatorConfigurationData) DeepCopyInto(out *OperatorConfigurationData
 		*out = new(bool)
 		**out = **in
 	}
-	if in.Sidecars != nil {
-		in, out := &in.Sidecars, &out.Sidecars
+	if in.SidecarImages != nil {
+		in, out := &in.SidecarImages, &out.SidecarImages
 		*out = make(map[string]string, len(*in))
 		for key, val := range *in {
 			(*out)[key] = val
 		}
 	}
+	if in.SidecarContainers != nil {
+		in, out := &in.SidecarContainers, &out.SidecarContainers
+		*out = make([]corev1.Container, len(*in))
+		for i := range *in {
+			(*in)[i].DeepCopyInto(&(*out)[i])
+		}
+	}
 	out.PostgresUsersConfiguration = in.PostgresUsersConfiguration
 	in.Kubernetes.DeepCopyInto(&out.Kubernetes)
 	out.PostgresPodResources = in.PostgresPodResources
@@ -462,8 +462,7 @@ func generateContainer(
 }
 
 func generateSidecarContainers(sidecars []acidv1.Sidecar,
-	volumeMounts []v1.VolumeMount, defaultResources acidv1.Resources,
-	superUserName string, credentialsSecretName string, logger *logrus.Entry) ([]v1.Container, error) {
+	defaultResources acidv1.Resources, startIndex int, logger *logrus.Entry) ([]v1.Container, error) {
 
 	if len(sidecars) > 0 {
 		result := make([]v1.Container, 0)
@@ -482,7 +481,7 @@ func generateSidecarContainers(sidecars []acidv1.Sidecar,
 				return nil, err
 			}
 
-			sc := getSidecarContainer(sidecar, index, volumeMounts, resources, superUserName, credentialsSecretName, logger)
+			sc := getSidecarContainer(sidecar, startIndex+index, resources)
 			result = append(result, *sc)
 		}
 		return result, nil
@@ -490,6 +489,55 @@ func generateSidecarContainers(sidecars []acidv1.Sidecar,
 	return nil, nil
 }
 
+// adds common fields to sidecars
+func patchSidecarContainers(in []v1.Container, volumeMounts []v1.VolumeMount, superUserName string, credentialsSecretName string, logger *logrus.Entry) []v1.Container {
+	result := []v1.Container{}
+
+	for _, container := range in {
+		container.VolumeMounts = append(container.VolumeMounts, volumeMounts...)
+		env := []v1.EnvVar{
+			{
+				Name: "POD_NAME",
+				ValueFrom: &v1.EnvVarSource{
+					FieldRef: &v1.ObjectFieldSelector{
+						APIVersion: "v1",
+						FieldPath:  "metadata.name",
+					},
+				},
+			},
+			{
+				Name: "POD_NAMESPACE",
+				ValueFrom: &v1.EnvVarSource{
+					FieldRef: &v1.ObjectFieldSelector{
+						APIVersion: "v1",
+						FieldPath:  "metadata.namespace",
+					},
+				},
+			},
+			{
+				Name:  "POSTGRES_USER",
+				Value: superUserName,
+			},
+			{
+				Name: "POSTGRES_PASSWORD",
+				ValueFrom: &v1.EnvVarSource{
+					SecretKeyRef: &v1.SecretKeySelector{
+						LocalObjectReference: v1.LocalObjectReference{
+							Name: credentialsSecretName,
+						},
+						Key: "password",
+					},
+				},
+			},
+		}
+		mergedEnv := append(container.Env, env...)
+		container.Env = deduplicateEnvVars(mergedEnv, container.Name, logger)
+		result = append(result, container)
+	}
+
+	return result
+}
+
 // Check whether or not we're requested to mount an shm volume,
 // taking into account that PostgreSQL manifest has precedence.
 func mountShmVolumeNeeded(opConfig config.Config, spec *acidv1.PostgresSpec) *bool {
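`patchSidecarContainers` appends the operator-injected variables after the container's own entries before calling `deduplicateEnvVars`, so duplicate names must be resolved somewhere. A standalone first-wins sketch (the `envVar` stand-in type and the first-occurrence-wins rule are assumptions for illustration, not the operator's exact helper):

```go
package main

import "fmt"

// envVar is a stand-in for v1.EnvVar with just the fields needed here.
type envVar struct{ Name, Value string }

// deduplicateEnvVars keeps the first occurrence of each variable name,
// so a value the container set itself survives a later injected duplicate.
func deduplicateEnvVars(in []envVar) []envVar {
	seen := map[string]bool{}
	out := make([]envVar, 0, len(in))
	for _, e := range in {
		if seen[e.Name] {
			continue // later duplicate loses
		}
		seen[e.Name] = true
		out = append(out, e)
	}
	return out
}

func main() {
	containerEnv := []envVar{{Name: "POSTGRES_USER", Value: "app"}}
	injected := []envVar{{Name: "POSTGRES_USER", Value: "postgres"}, {Name: "POD_NAME", Value: "pod-0"}}
	merged := deduplicateEnvVars(append(containerEnv, injected...))
	fmt.Println(len(merged), merged[0].Value) // → 2 app
}
```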
@@ -724,58 +772,18 @@ func deduplicateEnvVars(input []v1.EnvVar, containerName string, logger *logrus.
 	return result
 }
 
-func getSidecarContainer(sidecar acidv1.Sidecar, index int, volumeMounts []v1.VolumeMount,
-	resources *v1.ResourceRequirements, superUserName string, credentialsSecretName string, logger *logrus.Entry) *v1.Container {
+func getSidecarContainer(sidecar acidv1.Sidecar, index int, resources *v1.ResourceRequirements) *v1.Container {
 	name := sidecar.Name
 	if name == "" {
 		name = fmt.Sprintf("sidecar-%d", index)
 	}
 
-	env := []v1.EnvVar{
-		{
-			Name: "POD_NAME",
-			ValueFrom: &v1.EnvVarSource{
-				FieldRef: &v1.ObjectFieldSelector{
-					APIVersion: "v1",
-					FieldPath:  "metadata.name",
-				},
-			},
-		},
-		{
-			Name: "POD_NAMESPACE",
-			ValueFrom: &v1.EnvVarSource{
-				FieldRef: &v1.ObjectFieldSelector{
-					APIVersion: "v1",
-					FieldPath:  "metadata.namespace",
-				},
-			},
-		},
-		{
-			Name:  "POSTGRES_USER",
-			Value: superUserName,
-		},
-		{
-			Name: "POSTGRES_PASSWORD",
-			ValueFrom: &v1.EnvVarSource{
-				SecretKeyRef: &v1.SecretKeySelector{
-					LocalObjectReference: v1.LocalObjectReference{
-						Name: credentialsSecretName,
-					},
-					Key: "password",
-				},
-			},
-		},
-	}
-	if len(sidecar.Env) > 0 {
-		env = append(env, sidecar.Env...)
-	}
 	return &v1.Container{
 		Name:            name,
 		Image:           sidecar.DockerImage,
 		ImagePullPolicy: v1.PullIfNotPresent,
 		Resources:       *resources,
-		VolumeMounts:    volumeMounts,
-		Env:             deduplicateEnvVars(env, name, logger),
+		Env:             sidecar.Env,
 		Ports:           sidecar.Ports,
 	}
 }
@@ -1065,37 +1073,63 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
 		c.OpConfig.Resources.SpiloPrivileged,
 	)
 
-	// resolve conflicts between operator-global and per-cluster sidecars
-	sideCars := c.mergeSidecars(spec.Sidecars)
+	// generate container specs for sidecars specified in the cluster manifest
+	clusterSpecificSidecars := []v1.Container{}
+	if spec.Sidecars != nil && len(spec.Sidecars) > 0 {
+		// warn if sidecars are defined, but globally disabled (does not apply to globally defined sidecars)
+		if c.OpConfig.EnableSidecars != nil && !(*c.OpConfig.EnableSidecars) {
+			c.logger.Warningf("sidecars specified but disabled in configuration - next statefulset creation would fail")
+		}
 
-	resourceRequirementsScalyrSidecar := makeResources(
-		c.OpConfig.ScalyrCPURequest,
-		c.OpConfig.ScalyrMemoryRequest,
-		c.OpConfig.ScalyrCPULimit,
-		c.OpConfig.ScalyrMemoryLimit,
-	)
+		if clusterSpecificSidecars, err = generateSidecarContainers(spec.Sidecars, defaultResources, 0, c.logger); err != nil {
+			return nil, fmt.Errorf("could not generate sidecar containers: %v", err)
+		}
+	}
+
+	// deprecated way of providing global sidecars
+	var globalSidecarContainersByDockerImage []v1.Container
+	var globalSidecarsByDockerImage []acidv1.Sidecar
+	for name, dockerImage := range c.OpConfig.SidecarImages {
+		globalSidecarsByDockerImage = append(globalSidecarsByDockerImage, acidv1.Sidecar{Name: name, DockerImage: dockerImage})
+	}
+	if globalSidecarContainersByDockerImage, err = generateSidecarContainers(globalSidecarsByDockerImage, defaultResources, len(clusterSpecificSidecars), c.logger); err != nil {
+		return nil, fmt.Errorf("could not generate sidecar containers: %v", err)
+	}
+	// make the resulting list reproducible
+	// c.OpConfig.SidecarImages is unsorted by Golang definition
+	// .Name is unique
+	sort.Slice(globalSidecarContainersByDockerImage, func(i, j int) bool {
+		return globalSidecarContainersByDockerImage[i].Name < globalSidecarContainersByDockerImage[j].Name
+	})
 
 	// generate scalyr sidecar container
-	if scalyrSidecar :=
+	var scalyrSidecars []v1.Container
+	if scalyrSidecar, err :=
 		generateScalyrSidecarSpec(c.Name,
 			c.OpConfig.ScalyrAPIKey,
 			c.OpConfig.ScalyrServerURL,
 			c.OpConfig.ScalyrImage,
-			&resourceRequirementsScalyrSidecar, c.logger); scalyrSidecar != nil {
-		sideCars = append(sideCars, *scalyrSidecar)
+			c.OpConfig.ScalyrCPURequest,
+			c.OpConfig.ScalyrMemoryRequest,
+			c.OpConfig.ScalyrCPULimit,
+			c.OpConfig.ScalyrMemoryLimit,
+			defaultResources,
+			c.logger); err != nil {
+		return nil, fmt.Errorf("could not generate Scalyr sidecar: %v", err)
+	} else {
+		if scalyrSidecar != nil {
+			scalyrSidecars = append(scalyrSidecars, *scalyrSidecar)
+		}
 	}
 
-	// generate sidecar containers
-	if sideCars != nil && len(sideCars) > 0 {
-		if c.OpConfig.EnableSidecars != nil && !(*c.OpConfig.EnableSidecars) {
-			c.logger.Warningf("sidecars specified but disabled in configuration - next statefulset creation would fail")
-		}
-		if sidecarContainers, err = generateSidecarContainers(sideCars, volumeMounts, defaultResources,
-			c.OpConfig.SuperUsername, c.credentialSecretName(c.OpConfig.SuperUsername), c.logger); err != nil {
-			return nil, fmt.Errorf("could not generate sidecar containers: %v", err)
-		}
-	}
+	sidecarContainers, conflicts := mergeContainers(clusterSpecificSidecars, c.Config.OpConfig.SidecarContainers, globalSidecarContainersByDockerImage, scalyrSidecars)
+	for _, containerName := range conflicts {
+		c.logger.Warningf("a sidecar named %q is specified more than once; keeping the definition with the higher precedence",
+			containerName)
+	}
+
+	sidecarContainers = patchSidecarContainers(sidecarContainers, volumeMounts, c.OpConfig.SuperUsername, c.credentialSecretName(c.OpConfig.SuperUsername), c.logger)
 
 	tolerationSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration)
 	effectivePodPriorityClassName := util.Coalesce(spec.PodPriorityClassName, c.OpConfig.PodPriorityClassName)
@@ -1188,17 +1222,25 @@ func (c *Cluster) generatePodAnnotations(spec *acidv1.PostgresSpec) map[string]s
 }
 
 func generateScalyrSidecarSpec(clusterName, APIKey, serverURL, dockerImage string,
-	containerResources *acidv1.Resources, logger *logrus.Entry) *acidv1.Sidecar {
+	scalyrCPURequest string, scalyrMemoryRequest string, scalyrCPULimit string, scalyrMemoryLimit string,
+	defaultResources acidv1.Resources, logger *logrus.Entry) (*v1.Container, error) {
 	if APIKey == "" || dockerImage == "" {
 		if APIKey == "" && dockerImage != "" {
 			logger.Warning("Not running Scalyr sidecar: SCALYR_API_KEY must be defined")
 		}
-		return nil
+		return nil, nil
 	}
-	scalarSpec := &acidv1.Sidecar{
-		Name:        "scalyr-sidecar",
-		DockerImage: dockerImage,
-		Env: []v1.EnvVar{
+	resourcesScalyrSidecar := makeResources(
+		scalyrCPURequest,
+		scalyrMemoryRequest,
+		scalyrCPULimit,
+		scalyrMemoryLimit,
+	)
+	resourceRequirementsScalyrSidecar, err := generateResourceRequirements(resourcesScalyrSidecar, defaultResources)
+	if err != nil {
+		return nil, fmt.Errorf("invalid resources for Scalyr sidecar: %v", err)
+	}
+	env := []v1.EnvVar{
 		{
 			Name:  "SCALYR_API_KEY",
 			Value: APIKey,

@@ -1207,38 +1249,17 @@ func generateScalyrSidecarSpec(clusterName, APIKey, serverURL, dockerImage strin
 			Name:  "SCALYR_SERVER_HOST",
 			Value: clusterName,
 		},
-		},
-		Resources: *containerResources,
 	}
 	if serverURL != "" {
-		scalarSpec.Env = append(scalarSpec.Env, v1.EnvVar{Name: "SCALYR_SERVER_URL", Value: serverURL})
+		env = append(env, v1.EnvVar{Name: "SCALYR_SERVER_URL", Value: serverURL})
 	}
-	return scalarSpec
-}
-
-// mergeSidecar merges globally-defined sidecars with those defined in the cluster manifest
-func (c *Cluster) mergeSidecars(sidecars []acidv1.Sidecar) []acidv1.Sidecar {
-	globalSidecarsToSkip := map[string]bool{}
-	result := make([]acidv1.Sidecar, 0)
-
-	for i, sidecar := range sidecars {
-		dockerImage, ok := c.OpConfig.Sidecars[sidecar.Name]
-		if ok {
-			if dockerImage != sidecar.DockerImage {
-				c.logger.Warningf("merging definitions for sidecar %q: "+
-					"ignoring %q in the global scope in favor of %q defined in the cluster",
-					sidecar.Name, dockerImage, sidecar.DockerImage)
-			}
-			globalSidecarsToSkip[sidecar.Name] = true
-		}
-		result = append(result, sidecars[i])
-	}
-	for name, dockerImage := range c.OpConfig.Sidecars {
-		if !globalSidecarsToSkip[name] {
-			result = append(result, acidv1.Sidecar{Name: name, DockerImage: dockerImage})
-		}
-	}
-	return result
+	return &v1.Container{
+		Name:            "scalyr-sidecar",
+		Image:           dockerImage,
+		Env:             env,
+		ImagePullPolicy: v1.PullIfNotPresent,
+		Resources:       *resourceRequirementsScalyrSidecar,
+	}, nil
 }
 
 func (c *Cluster) getNumberOfInstances(spec *acidv1.PostgresSpec) int32 {
@@ -18,6 +18,7 @@ import (
 	appsv1 "k8s.io/api/apps/v1"
 	v1 "k8s.io/api/core/v1"
 	policyv1beta1 "k8s.io/api/policy/v1beta1"
+	"k8s.io/apimachinery/pkg/api/resource"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/util/intstr"
 )
@@ -1206,3 +1207,201 @@ func TestAdditionalVolume(t *testing.T) {
 		}
 	}
 }
+
+// inject sidecars through all available mechanisms and check the resulting container specs
+func TestSidecars(t *testing.T) {
+	var err error
+	var spec acidv1.PostgresSpec
+	var cluster *Cluster
+
+	generateKubernetesResources := func(cpuRequest string, cpuLimit string, memoryRequest string, memoryLimit string) v1.ResourceRequirements {
+		parsedCPURequest, err := resource.ParseQuantity(cpuRequest)
+		assert.NoError(t, err)
+		parsedCPULimit, err := resource.ParseQuantity(cpuLimit)
+		assert.NoError(t, err)
+		parsedMemoryRequest, err := resource.ParseQuantity(memoryRequest)
+		assert.NoError(t, err)
+		parsedMemoryLimit, err := resource.ParseQuantity(memoryLimit)
+		assert.NoError(t, err)
+		return v1.ResourceRequirements{
+			Requests: v1.ResourceList{
+				v1.ResourceCPU:    parsedCPURequest,
+				v1.ResourceMemory: parsedMemoryRequest,
+			},
+			Limits: v1.ResourceList{
+				v1.ResourceCPU:    parsedCPULimit,
+				v1.ResourceMemory: parsedMemoryLimit,
+			},
+		}
+	}
+
+	spec = acidv1.PostgresSpec{
+		TeamID: "myapp", NumberOfInstances: 1,
+		Resources: acidv1.Resources{
+			ResourceRequests: acidv1.ResourceDescription{CPU: "1", Memory: "10"},
+			ResourceLimits:   acidv1.ResourceDescription{CPU: "1", Memory: "10"},
+		},
+		Volume: acidv1.Volume{
+			Size: "1G",
+		},
+		Sidecars: []acidv1.Sidecar{
+			acidv1.Sidecar{
+				Name: "cluster-specific-sidecar",
+			},
+			acidv1.Sidecar{
+				Name: "cluster-specific-sidecar-with-resources",
+				Resources: acidv1.Resources{
+					ResourceRequests: acidv1.ResourceDescription{CPU: "210m", Memory: "0.8Gi"},
+					ResourceLimits:   acidv1.ResourceDescription{CPU: "510m", Memory: "1.4Gi"},
+				},
+			},
+			acidv1.Sidecar{
+				Name:        "replace-sidecar",
+				DockerImage: "overwrite-image",
+			},
+		},
+	}
+
+	cluster = New(
+		Config{
+			OpConfig: config.Config{
+				PodManagementPolicy: "ordered_ready",
+				ProtectedRoles:      []string{"admin"},
+				Auth: config.Auth{
+					SuperUsername:       superUserName,
+					ReplicationUsername: replicationUserName,
+				},
+				Resources: config.Resources{
+					DefaultCPURequest:    "200m",
+					DefaultCPULimit:      "500m",
+					DefaultMemoryRequest: "0.7Gi",
+					DefaultMemoryLimit:   "1.3Gi",
+				},
+				SidecarImages: map[string]string{
+					"deprecated-global-sidecar": "image:123",
+				},
+				SidecarContainers: []v1.Container{
+					v1.Container{
+						Name: "global-sidecar",
+					},
+					// will be replaced by a cluster-specific sidecar with the same name
+					v1.Container{
+						Name:  "replace-sidecar",
+						Image: "replaced-image",
+					},
+				},
+				Scalyr: config.Scalyr{
+					ScalyrAPIKey:        "abc",
+					ScalyrImage:         "scalyr-image",
+					ScalyrCPURequest:    "220m",
+					ScalyrCPULimit:      "520m",
+					ScalyrMemoryRequest: "0.9Gi",
+					// use default memory limit
+				},
+			},
+		}, k8sutil.KubernetesClient{}, acidv1.Postgresql{}, logger)
+
+	s, err := cluster.generateStatefulSet(&spec)
+	assert.NoError(t, err)
+
+	env := []v1.EnvVar{
+		{
+			Name: "POD_NAME",
+			ValueFrom: &v1.EnvVarSource{
+				FieldRef: &v1.ObjectFieldSelector{
+					APIVersion: "v1",
+					FieldPath:  "metadata.name",
+				},
+			},
+		},
+		{
+			Name: "POD_NAMESPACE",
+			ValueFrom: &v1.EnvVarSource{
+				FieldRef: &v1.ObjectFieldSelector{
+					APIVersion: "v1",
+					FieldPath:  "metadata.namespace",
+				},
+			},
+		},
+		{
+			Name:  "POSTGRES_USER",
+			Value: superUserName,
+		},
+		{
+			Name: "POSTGRES_PASSWORD",
+			ValueFrom: &v1.EnvVarSource{
+				SecretKeyRef: &v1.SecretKeySelector{
+					LocalObjectReference: v1.LocalObjectReference{
+						Name: "",
+					},
+					Key: "password",
+				},
+			},
+		},
+	}
+	mounts := []v1.VolumeMount{
+		v1.VolumeMount{
+			Name:      "pgdata",
+			MountPath: "/home/postgres/pgdata",
+		},
+	}
+
+	// deduplicated sidecars plus the Spilo/Patroni container itself
+	assert.Equal(t, 7, len(s.Spec.Template.Spec.Containers), "wrong number of containers")
+
+	// cluster-specific sidecar
+	assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
+		Name:            "cluster-specific-sidecar",
+		Env:             env,
+		Resources:       generateKubernetesResources("200m", "500m", "0.7Gi", "1.3Gi"),
+		ImagePullPolicy: v1.PullIfNotPresent,
+		VolumeMounts:    mounts,
+	})
+
+	// container-specific resources
+	expectedResources := generateKubernetesResources("210m", "510m", "0.8Gi", "1.4Gi")
+	assert.Equal(t, expectedResources.Requests[v1.ResourceCPU], s.Spec.Template.Spec.Containers[2].Resources.Requests[v1.ResourceCPU])
+	assert.Equal(t, expectedResources.Limits[v1.ResourceCPU], s.Spec.Template.Spec.Containers[2].Resources.Limits[v1.ResourceCPU])
+	assert.Equal(t, expectedResources.Requests[v1.ResourceMemory], s.Spec.Template.Spec.Containers[2].Resources.Requests[v1.ResourceMemory])
+	assert.Equal(t, expectedResources.Limits[v1.ResourceMemory], s.Spec.Template.Spec.Containers[2].Resources.Limits[v1.ResourceMemory])
+
+	// deprecated global sidecar
+	assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
+		Name:            "deprecated-global-sidecar",
+		Image:           "image:123",
+		Env:             env,
+		Resources:       generateKubernetesResources("200m", "500m", "0.7Gi", "1.3Gi"),
+		ImagePullPolicy: v1.PullIfNotPresent,
+		VolumeMounts:    mounts,
+	})
+
+	// global sidecar
+	assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
+		Name:         "global-sidecar",
+		Env:          env,
+		VolumeMounts: mounts,
+	})
+
+	// replaced sidecar
+	assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
+		Name:            "replace-sidecar",
+		Image:           "overwrite-image",
+		Resources:       generateKubernetesResources("200m", "500m", "0.7Gi", "1.3Gi"),
+		ImagePullPolicy: v1.PullIfNotPresent,
+		Env:             env,
+		VolumeMounts:    mounts,
+	})
+
+	// Scalyr sidecar
+	// the order in env is important
+	scalyrEnv := append([]v1.EnvVar{v1.EnvVar{Name: "SCALYR_API_KEY", Value: "abc"}, v1.EnvVar{Name: "SCALYR_SERVER_HOST", Value: ""}}, env...)
+	assert.Contains(t, s.Spec.Template.Spec.Containers, v1.Container{
+		Name:            "scalyr-sidecar",
+		Image:           "scalyr-image",
+		Resources:       generateKubernetesResources("220m", "520m", "0.9Gi", "1.3Gi"),
+		ImagePullPolicy: v1.PullIfNotPresent,
+		Env:             scalyrEnv,
+		VolumeMounts:    mounts,
+	})
+}
@@ -530,3 +530,22 @@ func (c *Cluster) needConnectionPoolerWorker(spec *acidv1.PostgresSpec) bool {
 func (c *Cluster) needConnectionPooler() bool {
 	return c.needConnectionPoolerWorker(&c.Spec)
 }
+
+// Earlier arguments take priority
+func mergeContainers(containers ...[]v1.Container) ([]v1.Container, []string) {
+	containerNameTaken := map[string]bool{}
+	result := make([]v1.Container, 0)
+	conflicts := make([]string, 0)
+
+	for _, containerArray := range containers {
+		for _, container := range containerArray {
+			if _, taken := containerNameTaken[container.Name]; taken {
+				conflicts = append(conflicts, container.Name)
+			} else {
+				containerNameTaken[container.Name] = true
+				result = append(result, container)
+			}
+		}
+	}
+	return result, conflicts
+}
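`mergeContainers` above drives the precedence between cluster-specific, globally configured, deprecated image-map, and Scalyr sidecars. The name-based, earlier-wins logic can be exercised standalone; a minimal sketch using a stripped-down container type (the single-field struct is an assumption for illustration, replacing `v1.Container`):

```go
package main

import "fmt"

// container is a stand-in for v1.Container, keeping only the Name field.
type container struct{ Name string }

// mergeContainers mirrors the function above: earlier slices win on name clashes,
// and each losing name is reported as a conflict.
func mergeContainers(lists ...[]container) (result []container, conflicts []string) {
	taken := map[string]bool{}
	for _, list := range lists {
		for _, c := range list {
			if taken[c.Name] {
				conflicts = append(conflicts, c.Name)
			} else {
				taken[c.Name] = true
				result = append(result, c)
			}
		}
	}
	return result, conflicts
}

func main() {
	clusterSpecific := []container{{Name: "replace-sidecar"}}
	global := []container{{Name: "global-sidecar"}, {Name: "replace-sidecar"}}
	merged, conflicts := mergeContainers(clusterSpecific, global)
	// the cluster-specific "replace-sidecar" wins; the global one is reported
	fmt.Println(len(merged), conflicts) // → 2 [replace-sidecar]
}
```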
@@ -178,6 +178,11 @@ func (c *Controller) warnOnDeprecatedOperatorParameters() {
 		c.logger.Warningf("Operator configuration parameter 'enable_load_balancer' is deprecated and takes no effect. " +
 			"Consider using the 'enable_master_load_balancer' or 'enable_replica_load_balancer' instead.")
 	}
+
+	if len(c.opConfig.SidecarImages) > 0 {
+		c.logger.Warningf("Operator configuration parameter 'sidecar_docker_images' is deprecated. " +
+			"Consider using 'sidecars' instead.")
+	}
 }
 
 func (c *Controller) initPodServiceAccount() {
@@ -44,7 +44,8 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
 	result.RepairPeriod = time.Duration(fromCRD.RepairPeriod)
 	result.SetMemoryRequestToLimit = fromCRD.SetMemoryRequestToLimit
 	result.ShmVolume = fromCRD.ShmVolume
-	result.Sidecars = fromCRD.Sidecars
+	result.SidecarImages = fromCRD.SidecarImages
+	result.SidecarContainers = fromCRD.SidecarContainers
 
 	// user config
 	result.SuperUsername = fromCRD.PostgresUsersConfiguration.SuperUsername
@@ -9,6 +9,7 @@ import (
 
 	"github.com/zalando/postgres-operator/pkg/spec"
 	"github.com/zalando/postgres-operator/pkg/util/constants"
+	v1 "k8s.io/api/core/v1"
 )
 
 // CRD describes CustomResourceDefinition specific configuration parameters

@@ -111,7 +112,9 @@ type Config struct {
 	KubernetesUseConfigMaps bool   `name:"kubernetes_use_configmaps" default:"false"`
 	EtcdHost                string `name:"etcd_host" default:""` // special values: the empty string "" means Patroni will use K8s as a DCS
 	DockerImage             string `name:"docker_image" default:"registry.opensource.zalan.do/acid/spilo-12:1.6-p2"`
-	Sidecars                map[string]string `name:"sidecar_docker_images"`
+	// deprecated in favour of SidecarContainers
+	SidecarImages     map[string]string `name:"sidecar_docker_images"`
+	SidecarContainers []v1.Container    `name:"sidecars"`
 	PodServiceAccountName string `name:"pod_service_account_name" default:"postgres-pod"`
 	// value of this string must be valid JSON or YAML; see initPodServiceAccount
 	PodServiceAccountDefinition string `name:"pod_service_account_definition" default:""`