Merge branch 'master' of https://github.com/zalando/postgres-operator into standby
This commit is contained in:
commit 6daecf10e4
@@ -43,6 +43,7 @@ rules:
   verbs:
   - create
   - delete
+  - deletecollection
   - get
   - list
   - patch
@@ -69,6 +69,8 @@ configAwsOrGcp:
   # kube_iam_role: ""
   # log_s3_bucket: ""
   # wal_s3_bucket: ""
+  # additional_secret_mount: "some-secret-name"
+  # additional_secret_mount_path: "/some/dir"
 
 configLogicalBackup:
   logical_backup_schedule: "30 00 * * *"
@@ -114,6 +116,7 @@ configKubernetesCRD:
   cluster_name_label: cluster-name
   enable_pod_antiaffinity: false
   pod_antiaffinity_topology_key: "kubernetes.io/hostname"
+  enable_pod_disruption_budget: true
   secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
   # inherited_labels:
   # - application
@@ -154,6 +154,22 @@ data:
   pod_antiaffinity_topology_key: "failure-domain.beta.kubernetes.io/zone"
 ```
 
+### Pod Disruption Budget
+
+By default the operator uses a PodDisruptionBudget (PDB) to protect the cluster
+from voluntary disruptions and hence unwanted DB downtime. The `MinAvailable`
+parameter of the PDB is set to `1`, which prevents killing the master in
+single-node clusters and/or the last remaining instance of a multi-node cluster.
+
+The PDB is only relaxed in two scenarios:
+* If a cluster is scaled down to `0` instances (e.g. for draining nodes)
+* If the PDB is disabled in the configuration (`enable_pod_disruption_budget`)
+
+In both cases the PDB stays in place with `MinAvailable` set to `0`; when
+re-enabled, it is automatically set back to `1` on scale up. Disabling PDBs
+helps avoid blocking Kubernetes upgrades in managed K8s environments, at the
+cost of prolonged DB downtime. See PR #384 for the use case.
+
 ### Add cluster-specific labels
 
 In some cases, you might want to add `labels` that are specific to a given
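The rule documented above is implemented in `generatePodDisruptionBudget` (see the k8sres.go hunk further down in this diff). As a standalone illustration, a minimal runnable sketch of just that decision; the function and variable names here are illustrative, not the operator's:

```
package main

import "fmt"

// minAvailable mirrors the PDB rule added in this commit: the budget is
// relaxed to 0 when the PDB is disabled in the configuration or the cluster
// is scaled down to zero instances; otherwise it stays at 1.
func minAvailable(enablePDB *bool, numberOfInstances int32) int {
	if (enablePDB != nil && !*enablePDB) || numberOfInstances <= 0 {
		return 0
	}
	return 1
}

func main() {
	enable := false
	fmt.Println(minAvailable(nil, 3))     // 1: PDB enabled by default
	fmt.Println(minAvailable(nil, 0))     // 0: cluster scaled down to zero
	fmt.Println(minAvailable(&enable, 3)) // 0: PDB disabled in the config
}
```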
@@ -317,3 +333,19 @@ The operator can manage k8s cron jobs to run logical backups of Postgres cluster
 4. You may use your own image by overwriting the relevant field in the operator configuration. Any such image must ensure the logical backup is able to finish [in presence of pod restarts](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures) and [simultaneous invocations](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations) of the backup cron job.
 
 5. For that feature to work, your RBAC policy must enable operations on the `cronjobs` resource from the `batch` API group for the operator service account. See [example RBAC](../manifests/operator-service-account-rbac.yaml)
+
+## Access to cloud resources from clusters in non-cloud environments
+
+To access cloud resources like S3 from a cluster in a bare-metal setup, you can use
+the `additional_secret_mount` and `additional_secret_mount_path` config parameters.
+With these you can provision cloud credentials to the containers in the pods of the StatefulSet.
+The operator mounts a volume from the given secret into the pod, and the credentials can
+then be accessed in the container under the configured mount path. Via [Custom Pod Environment Variables](#custom-pod-environment-variables)
+you can then point the different cloud SDKs (AWS, Google etc.) to this mounted secret.
+With these credentials the cloud SDK can then access cloud resources, e.g. to upload logs.
+
+A secret can be pre-provisioned in different ways:
+
+* Generic secret created via `kubectl create secret generic some-cloud-creds --from-file=some-cloud-credentials-file.json`
+
+* Automatically provisioned via a controller like [kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller). This controller also rotates the credentials. Please visit the documentation for more information.
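To make the mount path concrete: a minimal sketch of what a process inside a container of the StatefulSet could do, assuming the secret was mounted at the example path `/some/dir` with the example key file from the `kubectl` command above (both values are samples from this section, not operator defaults):

```
package main

import (
	"fmt"
	"os"
)

func main() {
	// Each key of the mounted secret shows up as a file under the
	// configured additional_secret_mount_path.
	creds, err := os.ReadFile("/some/dir/some-cloud-credentials-file.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "credentials not mounted:", err)
		os.Exit(1)
	}
	// A cloud SDK can be pointed at the same file via the standard
	// AWS_SHARED_CREDENTIALS_FILE or GOOGLE_APPLICATION_CREDENTIALS
	// environment variables, set through custom pod environment variables.
	fmt.Printf("loaded %d bytes of cloud credentials\n", len(creds))
}
```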
@@ -285,6 +285,9 @@ properties of the persistent storage that stores postgres data.
   documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/)
   for the details on storage classes. Optional.
 
+* **subPath**
+  Subpath to use when mounting the volume into the Spilo container.
+
 ### Sidecar definitions
 
 Those parameters are defined under the `sidecars` key. They consist of a list
@@ -161,6 +161,13 @@ configuration they are grouped under the `kubernetes` key.
   replaced by the cluster name. Only the `{cluster}` placeholder is allowed in
   the template.
 
+* **enable_pod_disruption_budget**
+  PDB is enabled by default to protect the cluster from voluntary disruptions
+  and hence unwanted DB downtime. However, on some cloud providers it could be
+  necessary to temporarily disable it, e.g. for node updates. See
+  [admin docs](../administrator.md#pod-disruption-budget) for more information.
+  Default is true.
+
 * **secret_name_template**
   a template for the name of the database user secrets generated by the
   operator. `{username}` is replaced with the name of the secret, `{cluster}` with
@@ -400,6 +407,12 @@ yet officially supported.
 * **aws_region**
   AWS region used to store EBS volumes. The default is `eu-central-1`.
 
+* **additional_secret_mount**
+  Additional Secret (AWS or GCP credentials) to mount in the pod. The default is empty.
+
+* **additional_secret_mount_path**
+  Path to mount the above Secret in the filesystem of the container(s). The default is empty.
+
 ## Debugging the operator
 
 Options to aid debugging of the operator itself. Grouped under the `debug` key.
@@ -33,6 +33,8 @@ data:
   # https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees
   # inherited_labels: ""
   aws_region: eu-central-1
+  # additional_secret_mount: "some-secret-name"
+  # additional_secret_mount_path: "/some/dir"
   db_hosted_zone: db.example.com
   master_dns_name_format: '{cluster}.{team}.staging.{hostedzone}'
   replica_dns_name_format: '{cluster}-repl.{team}.staging.{hostedzone}'
@@ -40,6 +40,7 @@ rules:
   verbs:
   - create
   - delete
+  - deletecollection
   - get
   - list
   - patch
@@ -20,6 +20,7 @@ configuration:
   pod_service_account_name: operator
   pod_terminate_grace_period: 5m
   pdb_name_format: "postgres-{cluster}-pdb"
+  enable_pod_disruption_budget: true
   secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
   cluster_domain: cluster.local
   oauth_token_secret_name: postgresql-operator
@@ -66,6 +67,8 @@ configuration:
   # log_s3_bucket: ""
   # kube_iam_role: ""
   aws_region: eu-central-1
+  # additional_secret_mount: "some-secret-name"
+  # additional_secret_mount_path: "/some/dir"
   debug:
     debug_logging: true
     enable_database_access: true
@@ -49,6 +49,7 @@ type KubernetesMetaConfiguration struct {
 	SpiloFSGroup *int64 `json:"spilo_fsgroup,omitempty"`
 	WatchedNamespace string `json:"watched_namespace,omitempty"`
 	PDBNameFormat config.StringTemplate `json:"pdb_name_format,omitempty"`
+	EnablePodDisruptionBudget *bool `json:"enable_pod_disruption_budget,omitempty"`
 	SecretNameTemplate config.StringTemplate `json:"secret_name_template,omitempty"`
 	ClusterDomain string `json:"cluster_domain"`
 	OAuthTokenSecretName spec.NamespacedName `json:"oauth_token_secret_name,omitempty"`
@@ -104,6 +105,8 @@ type AWSGCPConfiguration struct {
 	AWSRegion string `json:"aws_region,omitempty"`
 	LogS3Bucket string `json:"log_s3_bucket,omitempty"`
 	KubeIAMRole string `json:"kube_iam_role,omitempty"`
+	AdditionalSecretMount string `json:"additional_secret_mount,omitempty"`
+	AdditionalSecretMountPath string `json:"additional_secret_mount_path" default:"/meta/credentials"`
 }
 
 // OperatorDebugConfiguration defines options for the debug mode
@@ -83,6 +83,7 @@ type MaintenanceWindow struct {
 type Volume struct {
 	Size string `json:"size"`
 	StorageClass string `json:"storageClass"`
+	SubPath string `json:"subPath,omitempty"`
 }
 
 // PostgresqlParam describes PostgreSQL version and pairs of configuration parameter name - values.
@@ -181,7 +181,8 @@ var unmarshalCluster = []struct {
   "teamId": "ACID",
   "volume": {
     "size": "5Gi",
-    "storageClass": "SSD"
+    "storageClass": "SSD",
+    "subPath": "subdir"
   },
   "numberOfInstances": 2,
   "users": {
@@ -263,6 +264,7 @@ var unmarshalCluster = []struct {
 	Volume: Volume{
 		Size: "5Gi",
 		StorageClass: "SSD",
+		SubPath: "subdir",
 	},
 	Patroni: Patroni{
 		InitDB: map[string]string{
@@ -311,7 +313,7 @@ var unmarshalCluster = []struct {
 		},
 		Error: "",
 	},
-	marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"9.6","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"volume":{"size":"5Gi","storageClass":"SSD"},"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"ACID","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`),
+	marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"9.6","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"volume":{"size":"5Gi","storageClass":"SSD", "subPath": "subdir"},"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"ACID","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`),
 	err: nil},
 	// example with teamId set in input
 	{
@@ -76,6 +76,11 @@ func (in *KubernetesMetaConfiguration) DeepCopyInto(out *KubernetesMetaConfigura
 		*out = new(int64)
 		**out = **in
 	}
+	if in.EnablePodDisruptionBudget != nil {
+		in, out := &in.EnablePodDisruptionBudget, &out.EnablePodDisruptionBudget
+		*out = new(bool)
+		**out = **in
+	}
 	out.OAuthTokenSecretName = in.OAuthTokenSecretName
 	out.InfrastructureRolesSecretName = in.InfrastructureRolesSecretName
 	if in.ClusterLabels != nil {
@@ -579,6 +579,15 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
 		}
 	}()
 
+	// pod disruption budget
+	if oldSpec.Spec.NumberOfInstances != newSpec.Spec.NumberOfInstances {
+		c.logger.Debug("syncing pod disruption budgets")
+		if err := c.syncPodDisruptionBudget(true); err != nil {
+			c.logger.Errorf("could not sync pod disruption budget: %v", err)
+			updateFailed = true
+		}
+	}
+
 	// logical backup job
 	func() {
@@ -342,11 +342,12 @@ func isBootstrapOnlyParameter(param string) bool {
 		param == "track_commit_timestamp"
 }
 
-func generateVolumeMounts() []v1.VolumeMount {
+func generateVolumeMounts(volume acidv1.Volume) []v1.VolumeMount {
 	return []v1.VolumeMount{
 		{
 			Name: constants.DataVolumeName,
 			MountPath: constants.PostgresDataMount, //TODO: fetch from manifest
+			SubPath: volume.SubPath,
 		},
 	}
 }
@@ -359,6 +360,8 @@ func generateContainer(
 	volumeMounts []v1.VolumeMount,
 	privilegedMode bool,
 ) *v1.Container {
+	falseBool := false
+
 	return &v1.Container{
 		Name: name,
 		Image: *dockerImage,
@@ -382,6 +385,7 @@ func generateContainer(
 		Env: envVars,
 		SecurityContext: &v1.SecurityContext{
 			Privileged: &privilegedMode,
+			ReadOnlyRootFilesystem: &falseBool,
 		},
 	}
 }
@@ -441,6 +445,8 @@ func generatePodTemplate(
 	shmVolume bool,
 	podAntiAffinity bool,
 	podAntiAffinityTopologyKey string,
+	additionalSecretMount string,
+	additionalSecretMountPath string,
 ) (*v1.PodTemplateSpec, error) {
 
 	terminateGracePeriodSeconds := terminateGracePeriod
@@ -475,6 +481,10 @@ func generatePodTemplate(
 		podSpec.PriorityClassName = priorityClassName
 	}
 
+	if additionalSecretMount != "" {
+		addSecretVolume(&podSpec, additionalSecretMount, additionalSecretMountPath)
+	}
+
 	template := v1.PodTemplateSpec{
 		ObjectMeta: metav1.ObjectMeta{
 			Labels: labels,
@@ -804,7 +814,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
 	// pickup the docker image for the spilo container
 	effectiveDockerImage := util.Coalesce(spec.DockerImage, c.OpConfig.DockerImage)
 
-	volumeMounts := generateVolumeMounts()
+	volumeMounts := generateVolumeMounts(spec.Volume)
 
 	// generate the spilo container
 	c.logger.Debugf("Generating Spilo container, environment variables: %v", spiloEnvVars)
@@ -867,7 +877,9 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
 		effectivePodPriorityClassName,
 		mountShmVolumeNeeded(c.OpConfig, spec),
 		c.OpConfig.EnablePodAntiAffinity,
-		c.OpConfig.PodAntiAffinityTopologyKey); err != nil {
+		c.OpConfig.PodAntiAffinityTopologyKey,
+		c.OpConfig.AdditionalSecretMount,
+		c.OpConfig.AdditionalSecretMountPath); err != nil {
 		return nil, fmt.Errorf("could not generate pod template: %v", err)
 	}
 
@@ -1021,6 +1033,28 @@ func addShmVolume(podSpec *v1.PodSpec) {
 	podSpec.Volumes = volumes
 }
 
+func addSecretVolume(podSpec *v1.PodSpec, additionalSecretMount string, additionalSecretMountPath string) {
+	volumes := append(podSpec.Volumes, v1.Volume{
+		Name: additionalSecretMount,
+		VolumeSource: v1.VolumeSource{
+			Secret: &v1.SecretVolumeSource{
+				SecretName: additionalSecretMount,
+			},
+		},
+	})
+
+	for i := range podSpec.Containers {
+		mounts := append(podSpec.Containers[i].VolumeMounts,
+			v1.VolumeMount{
+				Name: additionalSecretMount,
+				MountPath: additionalSecretMountPath,
+			})
+		podSpec.Containers[i].VolumeMounts = mounts
+	}
+
+	podSpec.Volumes = volumes
+}
+
 func generatePersistentVolumeClaimTemplate(volumeSize, volumeStorageClass string) (*v1.PersistentVolumeClaim, error) {
 
 	var storageClassName *string
|
||||||
|
|
||||||
func (c *Cluster) generatePodDisruptionBudget() *policybeta1.PodDisruptionBudget {
|
func (c *Cluster) generatePodDisruptionBudget() *policybeta1.PodDisruptionBudget {
|
||||||
minAvailable := intstr.FromInt(1)
|
minAvailable := intstr.FromInt(1)
|
||||||
|
pdbEnabled := c.OpConfig.EnablePodDisruptionBudget
|
||||||
|
|
||||||
|
// if PodDisruptionBudget is disabled or if there are no DB pods, set the budget to 0.
|
||||||
|
if (pdbEnabled != nil && !*pdbEnabled) || c.Spec.NumberOfInstances <= 0 {
|
||||||
|
minAvailable = intstr.FromInt(0)
|
||||||
|
}
|
||||||
|
|
||||||
return &policybeta1.PodDisruptionBudget{
|
return &policybeta1.PodDisruptionBudget{
|
||||||
ObjectMeta: metav1.ObjectMeta{
|
ObjectMeta: metav1.ObjectMeta{
|
||||||
|
|
@@ -1418,6 +1458,8 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1beta1.CronJob, error) {
 		"",
 		false,
 		false,
+		"",
+		"",
 		""); err != nil {
 		return nil, fmt.Errorf("could not generate pod template for logical backup pod: %v", err)
 	}
@@ -1,6 +1,8 @@
 package cluster
 
 import (
+	"reflect"
+
 	"k8s.io/api/core/v1"
 
 	"testing"
@@ -9,6 +11,10 @@ import (
 	"github.com/zalando/postgres-operator/pkg/util/config"
 	"github.com/zalando/postgres-operator/pkg/util/constants"
 	"github.com/zalando/postgres-operator/pkg/util/k8sutil"
+
+	policyv1beta1 "k8s.io/api/policy/v1beta1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
 )
 
 func True() *bool {
@@ -21,6 +27,11 @@ func False() *bool {
 	return &b
 }
 
+func toIntStr(val int) *intstr.IntOrString {
+	b := intstr.FromInt(val)
+	return &b
+}
+
 func TestGenerateSpiloJSONConfiguration(t *testing.T) {
 	var cluster = New(
 		Config{
@@ -143,6 +154,113 @@ func TestCreateLoadBalancerLogic(t *testing.T) {
 	}
 }
 
+func TestGeneratePodDisruptionBudget(t *testing.T) {
+	tests := []struct {
+		c *Cluster
+		out policyv1beta1.PodDisruptionBudget
+	}{
+		// With multiple instances.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "postgres-myapp-database-pdb",
+					Namespace: "myapp",
+					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+		// With zero instances.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 0}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "postgres-myapp-database-pdb",
+					Namespace: "myapp",
+					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(0),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+		// With PodDisruptionBudget disabled.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: False()}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "postgres-myapp-database-pdb",
+					Namespace: "myapp",
+					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(0),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+		// With non-default PDBNameFormat and PodDisruptionBudget explicitly enabled.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-databass-budget", EnablePodDisruptionBudget: True()}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "postgres-myapp-database-databass-budget",
+					Namespace: "myapp",
+					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+	}
+
+	for _, tt := range tests {
+		result := tt.c.generatePodDisruptionBudget()
+		if !reflect.DeepEqual(*result, tt.out) {
+			t.Errorf("Expected PodDisruptionBudget: %#v, got %#v", tt.out, *result)
+		}
+	}
+}
+
 func TestShmVolume(t *testing.T) {
 	testName := "TestShmVolume"
 	tests := []struct {
@@ -269,6 +387,76 @@ func TestCloneEnv(t *testing.T) {
 			t.Errorf("%s %s: Expected env value %s, have %s instead",
 				testName, tt.subTest, tt.env.Value, env.Value)
 		}
+	}
+}
+
+func TestSecretVolume(t *testing.T) {
+	testName := "TestSecretVolume"
+	tests := []struct {
+		subTest string
+		podSpec *v1.PodSpec
+		secretPos int
+	}{
+		{
+			subTest: "empty PodSpec",
+			podSpec: &v1.PodSpec{
+				Volumes: []v1.Volume{},
+				Containers: []v1.Container{
+					{
+						VolumeMounts: []v1.VolumeMount{},
+					},
+				},
+			},
+			secretPos: 0,
+		},
+		{
+			subTest: "non empty PodSpec",
+			podSpec: &v1.PodSpec{
+				Volumes: []v1.Volume{{}},
+				Containers: []v1.Container{
+					{
+						VolumeMounts: []v1.VolumeMount{
+							{
+								Name: "data",
+								ReadOnly: false,
+								MountPath: "/data",
+							},
+						},
+					},
+				},
+			},
+			secretPos: 1,
+		},
+	}
+	for _, tt := range tests {
+		additionalSecretMount := "aws-iam-s3-role"
+		additionalSecretMountPath := "/meta/credentials"
+
+		numMounts := len(tt.podSpec.Containers[0].VolumeMounts)
+
+		addSecretVolume(tt.podSpec, additionalSecretMount, additionalSecretMountPath)
+
+		volumeName := tt.podSpec.Volumes[tt.secretPos].Name
+
+		if volumeName != additionalSecretMount {
+			t.Errorf("%s %s: Expected volume %s was not created, have %s instead",
+				testName, tt.subTest, additionalSecretMount, volumeName)
+		}
+
+		for i := range tt.podSpec.Containers {
+			volumeMountName := tt.podSpec.Containers[i].VolumeMounts[tt.secretPos].Name
+
+			if volumeMountName != additionalSecretMount {
+				t.Errorf("%s %s: Expected mount %s was not created, have %s instead",
+					testName, tt.subTest, additionalSecretMount, volumeMountName)
+			}
+		}
+
+		numMountsCheck := len(tt.podSpec.Containers[0].VolumeMounts)
+
+		if numMountsCheck != numMounts+1 {
+			t.Errorf("Unexpected number of VolumeMounts: got %v instead of %v",
+				numMountsCheck, numMounts+1)
+		}
 	}
 }
@@ -73,20 +73,6 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
 		}
 	}
 
-	// create database objects unless we are running without pods or disabled that feature explicitly
-	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&newSpec.Spec) <= 0) {
-		c.logger.Debugf("syncing roles")
-		if err = c.syncRoles(); err != nil {
-			err = fmt.Errorf("could not sync roles: %v", err)
-			return err
-		}
-		c.logger.Debugf("syncing databases")
-		if err = c.syncDatabases(); err != nil {
-			err = fmt.Errorf("could not sync databases: %v", err)
-			return err
-		}
-	}
-
 	c.logger.Debug("syncing pod disruption budgets")
 	if err = c.syncPodDisruptionBudget(false); err != nil {
 		err = fmt.Errorf("could not sync pod disruption budget: %v", err)
@@ -103,6 +89,20 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
 		}
 	}
 
+	// create database objects unless we are running without pods or disabled that feature explicitly
+	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&newSpec.Spec) <= 0) {
+		c.logger.Debugf("syncing roles")
+		if err = c.syncRoles(); err != nil {
+			err = fmt.Errorf("could not sync roles: %v", err)
+			return err
+		}
+		c.logger.Debugf("syncing databases")
+		if err = c.syncDatabases(); err != nil {
+			err = fmt.Errorf("could not sync databases: %v", err)
+			return err
+		}
+	}
+
 	return err
 }
@@ -46,6 +46,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
 	result.ClusterDomain = fromCRD.Kubernetes.ClusterDomain
 	result.WatchedNamespace = fromCRD.Kubernetes.WatchedNamespace
 	result.PDBNameFormat = fromCRD.Kubernetes.PDBNameFormat
+	result.EnablePodDisruptionBudget = fromCRD.Kubernetes.EnablePodDisruptionBudget
 	result.SecretNameTemplate = fromCRD.Kubernetes.SecretNameTemplate
 	result.OAuthTokenSecretName = fromCRD.Kubernetes.OAuthTokenSecretName
 	result.InfrastructureRolesSecretName = fromCRD.Kubernetes.InfrastructureRolesSecretName
@@ -85,6 +86,8 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
 	result.AWSRegion = fromCRD.AWSGCP.AWSRegion
 	result.LogS3Bucket = fromCRD.AWSGCP.LogS3Bucket
 	result.KubeIAMRole = fromCRD.AWSGCP.KubeIAMRole
+	result.AdditionalSecretMount = fromCRD.AWSGCP.AdditionalSecretMount
+	result.AdditionalSecretMountPath = fromCRD.AWSGCP.AdditionalSecretMountPath
 
 	result.DebugLogging = fromCRD.OperatorDebug.DebugLogging
 	result.EnableDBAccess = fromCRD.OperatorDebug.EnableDBAccess
@@ -98,6 +98,8 @@ type Config struct {
 	WALES3Bucket string `name:"wal_s3_bucket"`
 	LogS3Bucket string `name:"log_s3_bucket"`
 	KubeIAMRole string `name:"kube_iam_role"`
+	AdditionalSecretMount string `name:"additional_secret_mount"`
+	AdditionalSecretMountPath string `name:"additional_secret_mount_path" default:"/meta/credentials"`
 	DebugLogging bool `name:"debug_logging" default:"true"`
 	EnableDBAccess bool `name:"enable_database_access" default:"true"`
 	EnableTeamsAPI bool `name:"enable_teams_api" default:"true"`
@@ -114,6 +116,7 @@ type Config struct {
 	MasterDNSNameFormat StringTemplate `name:"master_dns_name_format" default:"{cluster}.{team}.{hostedzone}"`
 	ReplicaDNSNameFormat StringTemplate `name:"replica_dns_name_format" default:"{cluster}-repl.{team}.{hostedzone}"`
 	PDBNameFormat StringTemplate `name:"pdb_name_format" default:"postgres-{cluster}-pdb"`
+	EnablePodDisruptionBudget *bool `name:"enable_pod_disruption_budget" default:"true"`
 	Workers uint32 `name:"workers" default:"4"`
 	APIPort int `name:"api_port" default:"8080"`
 	RingLogLines int `name:"ring_log_lines" default:"100"`
@@ -123,7 +126,7 @@ type Config struct {
 	PodManagementPolicy string `name:"pod_management_policy" default:"ordered_ready"`
 	ProtectedRoles []string `name:"protected_role_names" default:"admin"`
 	PostgresSuperuserTeams []string `name:"postgres_superuser_teams" default:""`
-	SetMemoryRequestToLimit bool `name:"set_memory_request_to_limit" defaults:"false"`
+	SetMemoryRequestToLimit bool `name:"set_memory_request_to_limit" default:"false"`
 }
 
 // MustMarshal marshals the config or panics
@@ -158,7 +158,7 @@ func SamePDB(cur, new *policybeta1.PodDisruptionBudget) (match bool, reason stri
 	//TODO: improve comparison
 	match = reflect.DeepEqual(new.Spec, cur.Spec)
 	if !match {
-		reason = "new service spec doesn't match the current one"
+		reason = "new PDB spec doesn't match the current one"
 	}
 
 	return