allow delete only if annotations meet configured criteria (#1069)

* define annotations for delete protection

* change log level and reduce log lines for e2e tests

* reduce wait_for_pod_start even further
Felix Kunde 2020-08-13 16:36:22 +02:00 committed by GitHub
parent 0d81f972a1
commit 3ddc56e5b9
18 changed files with 343 additions and 19 deletions

View File

@@ -117,6 +117,10 @@ spec:
    type: object
    additionalProperties:
      type: string
  delete_annotation_date_key:
    type: string
  delete_annotation_name_key:
    type: string
  downscaler_annotations:
    type: array
    items:

View File

@@ -67,6 +67,12 @@ configKubernetes:
  #   keya: valuea
  #   keyb: valueb

  # key name for annotation that compares manifest value with current date
  # delete_annotation_date_key: "delete-date"

  # key name for annotation that compares manifest value with cluster name
  # delete_annotation_name_key: "delete-clustername"

  # list of annotations propagated from cluster manifest to statefulset and deployment
  # downscaler_annotations:
  # - deployment-time

View File

@@ -63,6 +63,12 @@ configKubernetes:
  # annotations attached to each database pod
  # custom_pod_annotations: "keya:valuea,keyb:valueb"

  # key name for annotation that compares manifest value with current date
  # delete_annotation_date_key: "delete-date"

  # key name for annotation that compares manifest value with cluster name
  # delete_annotation_name_key: "delete-clustername"

  # list of annotations propagated from cluster manifest to statefulset and deployment
  # downscaler_annotations: "deployment-time,downscaler/*"

View File

@@ -44,7 +44,7 @@ Once the validation is enabled it can only be disabled manually by editing or
patching the CRD manifest:

```bash
-zk8 patch crd postgresqls.acid.zalan.do -p '{"spec":{"validation": null}}'
+kubectl patch crd postgresqls.acid.zalan.do -p '{"spec":{"validation": null}}'
```

## Non-default cluster domain
@@ -123,6 +123,68 @@ Every other Postgres cluster which lacks the annotation will be ignored by this
operator. Conversely, operators without a defined `CONTROLLER_ID` will ignore
clusters with defined ownership of another operator.
## Delete protection via annotations
To avoid accidental deletion of Postgres clusters, the operator can check the
manifest for two annotations containing the cluster name and/or the current
date (in YYYY-MM-DD format). The names of the annotation keys are defined in
the configuration. By default, they are not set, which disables the delete
protection. One can also choose to configure only one of the two annotations.
**postgres-operator ConfigMap**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator
data:
  delete_annotation_date_key: "delete-date"
  delete_annotation_name_key: "delete-clustername"
```
**OperatorConfiguration**

```yaml
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
  name: postgresql-operator-configuration
configuration:
  kubernetes:
    delete_annotation_date_key: "delete-date"
    delete_annotation_name_key: "delete-clustername"
```
Now, every cluster manifest must contain the configured annotation keys to
trigger the delete process when running `kubectl delete pg`. Note that the
`Postgresql` resource would still get deleted, as the K8s API server does not
block it. Only the operator logs will tell that the delete criteria were not
met.
**cluster manifest**

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: demo-cluster
  annotations:
    delete-date: "2020-08-31"
    delete-clustername: "demo-cluster"
spec:
  ...
```
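The annotations can also be attached to an already running cluster, e.g. with
`kubectl annotate` (a minimal sketch; the key names must match the configured
`delete_annotation_*` settings):

```bash
kubectl annotate postgresql demo-cluster \
  delete-date="2020-08-31" \
  delete-clustername="demo-cluster" \
  --overwrite
```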
In case the resource has been deleted accidentally or the annotations were
simply forgotten, it is safe to recreate the cluster with `kubectl create`.
Existing Postgres clusters are not replaced by the operator. But, as the
original cluster still exists, the status will show `CreateFailed` at first.
On the next sync event it should change to `Running`. However, as it is in
fact a new resource for K8s, the UID will differ, which can trigger a rolling
update of the pods because the UID is used as part of the backup path to S3.
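A blocked delete is only visible in the operator log. A quick way to verify it
(a sketch; the operator's deployment name and namespace depend on your setup):

```bash
# the Postgresql resource disappears, but the operator keeps pods, services etc.
kubectl delete postgresql demo-cluster

# the operator logs a warning that the manifest does not fulfill delete requirements
kubectl logs deployment/postgres-operator | grep "delete requirements"
```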
## Role-based access control for the operator

The manifest [`operator-service-account-rbac.yaml`](../manifests/operator-service-account-rbac.yaml)
@@ -586,9 +648,9 @@ The configuration parameters that we will be using are:
* `gcp_credentials`
* `wal_gs_bucket`

-### Generate a K8 secret resource
+### Generate a K8s secret resource

-Generate the K8 secret resource that will contain your service account's
+Generate the K8s secret resource that will contain your service account's
credentials. It's highly recommended to use a service account and limit its
scope to just the WAL-E bucket.
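For example, the secret could be created from a downloaded service account key
file (a sketch; the local file name is an illustration, while the secret name
`pgsql-wale-creds` and the key `key.json` match the configuration below):

```bash
kubectl create secret generic pgsql-wale-creds \
  --from-file=key.json=./gcp-service-account-key.json
```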
@@ -613,13 +675,13 @@ the operator's configuration is set up like the following:
...
aws_or_gcp:
  additional_secret_mount: "pgsql-wale-creds"
  additional_secret_mount_path: "/var/secrets/google" # or wherever you want to mount the file
  # aws_region: eu-central-1
  # kube_iam_role: ""
  # log_s3_bucket: ""
  # wal_s3_bucket: ""
  wal_gs_bucket: "postgres-backups-bucket-28302F2" # name of the bucket where to save the WAL-E logs
-  gcp_credentials: "/var/secrets/google/key.json" # combination of the mount path & key in the K8 resource. (i.e. key.json)
+  gcp_credentials: "/var/secrets/google/key.json" # combination of the mount path & key in the K8s resource. (i.e. key.json)
...
```

View File

@@ -200,6 +200,16 @@ configuration they are grouped under the `kubernetes` key.
  of a database created by the operator. If the annotation key is also provided
  by the database definition, the database definition value is used.

* **delete_annotation_date_key**
  key name for the annotation that compares the manifest value with the current
  date in the YYYY-MM-DD format. Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`.
  The default is empty, which also disables this delete protection check.

* **delete_annotation_name_key**
  key name for the annotation that compares the manifest value with the Postgres
  cluster name. Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`.
  The default is empty, which also disables this delete protection check.

* **downscaler_annotations**
  An array of annotations that should be passed from the Postgres CRD on to the
  statefulset and, if it exists, to the connection pooler deployment as well.

View File

@@ -1,3 +1,3 @@
-kubernetes==9.0.0
+kubernetes==11.0.0
timeout_decorator==0.4.1
-pyyaml==5.1
+pyyaml==5.3.1

View File

@@ -7,6 +7,7 @@ import warnings
import os
import yaml

from datetime import datetime
from kubernetes import client, config
@@ -614,6 +615,71 @@ class EndToEndTestCase(unittest.TestCase):
            "Origin": 2,
        })
    @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
    def test_x_cluster_deletion(self):
        '''
           Test deletion with configured protection
        '''
        k8s = self.k8s
        cluster_label = 'application=spilo,cluster-name=acid-minimal-cluster'

        # configure delete protection
        patch_delete_annotations = {
            "data": {
                "delete_annotation_date_key": "delete-date",
                "delete_annotation_name_key": "delete-clustername"
            }
        }
        k8s.update_config(patch_delete_annotations)

        # this delete attempt should be omitted because of missing annotations
        k8s.api.custom_objects_api.delete_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-minimal-cluster")

        # check that pods and services are still there
        k8s.wait_for_running_pods(cluster_label, 2)
        k8s.wait_for_service(cluster_label)

        # recreate Postgres cluster resource
        k8s.create_with_kubectl("manifests/minimal-postgres-manifest.yaml")

        # wait a little before proceeding
        time.sleep(10)

        # add annotations to manifest
        deleteDate = datetime.today().strftime('%Y-%m-%d')
        pg_patch_delete_annotations = {
            "metadata": {
                "annotations": {
                    "delete-date": deleteDate,
                    "delete-clustername": "acid-minimal-cluster",
                }
            }
        }
        k8s.api.custom_objects_api.patch_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-minimal-cluster", pg_patch_delete_annotations)

        # wait a little before proceeding
        time.sleep(10)
        k8s.wait_for_running_pods(cluster_label, 2)
        k8s.wait_for_service(cluster_label)

        # now the delete process should be triggered
        k8s.api.custom_objects_api.delete_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-minimal-cluster")

        # wait until the cluster is deleted
        time.sleep(120)

        # check if everything has been deleted
        self.assertEqual(0, k8s.count_pods_with_label(cluster_label))
        self.assertEqual(0, k8s.count_services_with_label(cluster_label))
        self.assertEqual(0, k8s.count_endpoints_with_label(cluster_label))
        self.assertEqual(0, k8s.count_statefulsets_with_label(cluster_label))
        self.assertEqual(0, k8s.count_deployments_with_label(cluster_label))
        self.assertEqual(0, k8s.count_pdbs_with_label(cluster_label))
        self.assertEqual(0, k8s.count_secrets_with_label(cluster_label))
    def get_failover_targets(self, master_node, replica_nodes):
        '''
           If all pods live on the same node, failover will happen to other worker(s)
@@ -700,11 +766,12 @@ class K8sApi:
        self.apps_v1 = client.AppsV1Api()
        self.batch_v1_beta1 = client.BatchV1beta1Api()
        self.custom_objects_api = client.CustomObjectsApi()
        self.policy_v1_beta1 = client.PolicyV1beta1Api()
class K8s:
    '''
-    Wraps around K8 api client and helper methods.
+    Wraps around K8s api client and helper methods.
    '''

    RETRY_TIMEOUT_SEC = 10
@@ -755,14 +822,6 @@
            if pods:
                pod_phase = pods[0].status.phase

-            if pods and pod_phase != 'Running':
-                pod_name = pods[0].metadata.name
-                response = self.api.core_v1.read_namespaced_pod(
-                    name=pod_name,
-                    namespace=namespace
-                )
-                print("Pod description {}".format(response))
-
            time.sleep(self.RETRY_TIMEOUT_SEC)

    def get_service_type(self, svc_labels, namespace='default'):
@@ -824,6 +883,25 @@
    def count_pods_with_label(self, labels, namespace='default'):
        return len(self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items)
    def count_services_with_label(self, labels, namespace='default'):
        return len(self.api.core_v1.list_namespaced_service(namespace, label_selector=labels).items)

    def count_endpoints_with_label(self, labels, namespace='default'):
        return len(self.api.core_v1.list_namespaced_endpoints(namespace, label_selector=labels).items)

    def count_secrets_with_label(self, labels, namespace='default'):
        return len(self.api.core_v1.list_namespaced_secret(namespace, label_selector=labels).items)

    def count_statefulsets_with_label(self, labels, namespace='default'):
        return len(self.api.apps_v1.list_namespaced_stateful_set(namespace, label_selector=labels).items)

    def count_deployments_with_label(self, labels, namespace='default'):
        return len(self.api.apps_v1.list_namespaced_deployment(namespace, label_selector=labels).items)

    def count_pdbs_with_label(self, labels, namespace='default'):
        return len(self.api.policy_v1_beta1.list_namespaced_pod_disruption_budget(
            namespace, label_selector=labels).items)
    def wait_for_pod_failover(self, failover_targets, labels, namespace='default'):
        pod_phase = 'Failing over'
        new_pod_node = ''

View File

@@ -6,6 +6,8 @@ metadata:
#  environment: demo
#  annotations:
#    "acid.zalan.do/controller": "second-operator"
#    "delete-date": "2020-08-31"  # can only be deleted on that day if "delete-date" key is configured
#    "delete-clustername": "acid-test-cluster"  # can only be deleted when name matches if "delete-clustername" key is configured
spec:
  dockerImage: registry.opensource.zalan.do/acid/spilo-12:1.6-p3
  teamId: "acid"
@@ -34,7 +36,7 @@ spec:
    defaultUsers: false
  postgresql:
    version: "12"
    parameters:  # Expert section
      shared_buffers: "32MB"
      max_connections: "10"
      log_statement: "all"

View File

@@ -29,6 +29,8 @@ data:
  # default_cpu_request: 100m
  # default_memory_limit: 500Mi
  # default_memory_request: 100Mi
  # delete_annotation_date_key: delete-date
  # delete_annotation_name_key: delete-clustername
  docker_image: registry.opensource.zalan.do/acid/spilo-12:1.6-p3
  # downscaler_annotations: "deployment-time,downscaler/*"
  # enable_admin_role_for_users: "true"

View File

@@ -113,6 +113,10 @@ spec:
    type: object
    additionalProperties:
      type: string
  delete_annotation_date_key:
    type: string
  delete_annotation_name_key:
    type: string
  downscaler_annotations:
    type: array
    items:

View File

@@ -31,6 +31,8 @@ configuration:
    # custom_pod_annotations:
    #   keya: valuea
    #   keyb: valueb
    # delete_annotation_date_key: delete-date
    # delete_annotation_name_key: delete-clustername
    # downscaler_annotations:
    # - deployment-time
    # - downscaler/*

View File

@@ -888,6 +888,12 @@ var OperatorConfigCRDResourceValidation = apiextv1beta1.CustomResourceValidation
						},
					},
				},
				"delete_annotation_date_key": {
					Type: "string",
				},
				"delete_annotation_name_key": {
					Type: "string",
				},
				"downscaler_annotations": {
					Type: "array",
					Items: &apiextv1beta1.JSONSchemaPropsOrArray{

View File

@@ -66,6 +66,8 @@ type KubernetesMetaConfiguration struct {
	InheritedLabels         []string          `json:"inherited_labels,omitempty"`
	DownscalerAnnotations   []string          `json:"downscaler_annotations,omitempty"`
	ClusterNameLabel        string            `json:"cluster_name_label,omitempty"`
	DeleteAnnotationDateKey string            `json:"delete_annotation_date_key,omitempty"`
	DeleteAnnotationNameKey string            `json:"delete_annotation_name_key,omitempty"`
	NodeReadinessLabel      map[string]string `json:"node_readiness_label,omitempty"`
	CustomPodAnnotations    map[string]string `json:"custom_pod_annotations,omitempty"`
	// TODO: use a proper toleration structure?

View File

@@ -5,6 +5,7 @@ import (
	"fmt"
	"os"
	"sync"
	"time"

	"github.com/sirupsen/logrus"
	acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
@@ -454,6 +455,37 @@ func (c *Controller) GetReference(postgresql *acidv1.Postgresql) *v1.ObjectRefer
	return ref
}
func (c *Controller) meetsClusterDeleteAnnotations(postgresql *acidv1.Postgresql) error {

	deleteAnnotationDateKey := c.opConfig.DeleteAnnotationDateKey
	currentTime := time.Now()
	currentDate := currentTime.Format("2006-01-02") // go's reference date

	if deleteAnnotationDateKey != "" {
		if deleteDate, ok := postgresql.Annotations[deleteAnnotationDateKey]; ok {
			if deleteDate != currentDate {
				return fmt.Errorf("annotation %s not matching the current date: got %s, expected %s", deleteAnnotationDateKey, deleteDate, currentDate)
			}
		} else {
			return fmt.Errorf("annotation %s not set in manifest to allow cluster deletion", deleteAnnotationDateKey)
		}
	}

	deleteAnnotationNameKey := c.opConfig.DeleteAnnotationNameKey
	if deleteAnnotationNameKey != "" {
		if clusterName, ok := postgresql.Annotations[deleteAnnotationNameKey]; ok {
			if clusterName != postgresql.Name {
				return fmt.Errorf("annotation %s not matching the cluster name: got %s, expected %s", deleteAnnotationNameKey, clusterName, postgresql.Name)
			}
		} else {
			return fmt.Errorf("annotation %s not set in manifest to allow cluster deletion", deleteAnnotationNameKey)
		}
	}

	return nil
}
// hasOwnership returns true if the controller is the "owner" of the postgresql.
// Whether it's owner is determined by the value of 'acid.zalan.do/controller'
// annotation. If the value matches the controllerID then it owns it, or if the

View File

@@ -92,6 +92,8 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
	result.InheritedLabels = fromCRD.Kubernetes.InheritedLabels
	result.DownscalerAnnotations = fromCRD.Kubernetes.DownscalerAnnotations
	result.ClusterNameLabel = util.Coalesce(fromCRD.Kubernetes.ClusterNameLabel, "cluster-name")
	result.DeleteAnnotationDateKey = fromCRD.Kubernetes.DeleteAnnotationDateKey
	result.DeleteAnnotationNameKey = fromCRD.Kubernetes.DeleteAnnotationNameKey
	result.NodeReadinessLabel = fromCRD.Kubernetes.NodeReadinessLabel
	result.PodPriorityClassName = fromCRD.Kubernetes.PodPriorityClassName
	result.PodManagementPolicy = util.Coalesce(fromCRD.Kubernetes.PodManagementPolicy, "ordered_ready")

View File

@@ -2,6 +2,7 @@ package controller

import (
	"context"
	"encoding/json"
	"fmt"
	"reflect"
	"strings"
@@ -420,6 +421,22 @@ func (c *Controller) queueClusterEvent(informerOldSpec, informerNewSpec *acidv1.
		clusterError = informerNewSpec.Error
	}
	// only allow deletion if delete annotations are set and conditions are met
	if eventType == EventDelete {
		if err := c.meetsClusterDeleteAnnotations(informerOldSpec); err != nil {
			c.logger.WithField("cluster-name", clusterName).Warnf(
				"ignoring %q event for cluster %q - manifest does not fulfill delete requirements: %s", eventType, clusterName, err)
			c.logger.WithField("cluster-name", clusterName).Warnf(
				"please, recreate Postgresql resource %q and set annotations to delete properly", clusterName)
			if currentManifest, marshalErr := json.Marshal(informerOldSpec); marshalErr != nil {
				c.logger.WithField("cluster-name", clusterName).Warnf("could not marshal current manifest:\n%+v", informerOldSpec)
			} else {
				c.logger.WithField("cluster-name", clusterName).Warnf("%s\n", string(currentManifest))
			}
			return
		}
	}
	if clusterError != "" && eventType != EventDelete {
		c.logger.WithField("cluster-name", clusterName).Debugf("skipping %q event for the invalid cluster: %s", eventType, clusterError)

View File

@@ -1,8 +1,10 @@
package controller

import (
	"fmt"
	"reflect"
	"testing"
	"time"

	acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
	"github.com/zalando/postgres-operator/pkg/spec"
@@ -90,3 +92,88 @@ func TestMergeDeprecatedPostgreSQLSpecParameters(t *testing.T) {
		}
	}
}
func TestMeetsClusterDeleteAnnotations(t *testing.T) {
	// set delete annotations in configuration
	postgresqlTestController.opConfig.DeleteAnnotationDateKey = "delete-date"
	postgresqlTestController.opConfig.DeleteAnnotationNameKey = "delete-clustername"

	currentTime := time.Now()
	today := currentTime.Format("2006-01-02") // go's reference date
	clusterName := "acid-test-cluster"

	tests := []struct {
		name  string
		pg    *acidv1.Postgresql
		error string
	}{
		{
			"Postgres cluster with matching delete annotations",
			&acidv1.Postgresql{
				ObjectMeta: metav1.ObjectMeta{
					Name: clusterName,
					Annotations: map[string]string{
						"delete-date":        today,
						"delete-clustername": clusterName,
					},
				},
			},
			"",
		},
		{
			"Postgres cluster with violated delete date annotation",
			&acidv1.Postgresql{
				ObjectMeta: metav1.ObjectMeta{
					Name: clusterName,
					Annotations: map[string]string{
						"delete-date":        "2020-02-02",
						"delete-clustername": clusterName,
					},
				},
			},
			fmt.Sprintf("annotation delete-date not matching the current date: got 2020-02-02, expected %s", today),
		},
		{
			"Postgres cluster with violated delete cluster name annotation",
			&acidv1.Postgresql{
				ObjectMeta: metav1.ObjectMeta{
					Name: clusterName,
					Annotations: map[string]string{
						"delete-date":        today,
						"delete-clustername": "acid-minimal-cluster",
					},
				},
			},
			fmt.Sprintf("annotation delete-clustername not matching the cluster name: got acid-minimal-cluster, expected %s", clusterName),
		},
		{
			"Postgres cluster with missing delete annotations",
			&acidv1.Postgresql{
				ObjectMeta: metav1.ObjectMeta{
					Name:        clusterName,
					Annotations: map[string]string{},
				},
			},
			"annotation delete-date not set in manifest to allow cluster deletion",
		},
		{
			"Postgres cluster with missing delete cluster name annotation",
			&acidv1.Postgresql{
				ObjectMeta: metav1.ObjectMeta{
					Name: clusterName,
					Annotations: map[string]string{
						"delete-date": today,
					},
				},
			},
			"annotation delete-clustername not set in manifest to allow cluster deletion",
		},
	}
	for _, tt := range tests {
		err := postgresqlTestController.meetsClusterDeleteAnnotations(tt.pg)
		if err != nil {
			if !reflect.DeepEqual(err.Error(), tt.error) {
				t.Errorf("Expected error %q, got: %v", tt.error, err)
			}
		} else if tt.error != "" {
			t.Errorf("Expected error %q, got no error", tt.error)
		}
	}
}

View File

@@ -36,6 +36,8 @@ type Resources struct {
	InheritedLabels         []string          `name:"inherited_labels" default:""`
	DownscalerAnnotations   []string          `name:"downscaler_annotations"`
	ClusterNameLabel        string            `name:"cluster_name_label" default:"cluster-name"`
	DeleteAnnotationDateKey string            `name:"delete_annotation_date_key"`
	DeleteAnnotationNameKey string            `name:"delete_annotation_name_key"`
	PodRoleLabel            string            `name:"pod_role_label" default:"spilo-role"`
	PodToleration           map[string]string `name:"toleration" default:""`
	DefaultCPURequest       string            `name:"default_cpu_request" default:"100m"`