Merge branch 'master' into status-changes
commit 9efa6a1eb4
@@ -9,7 +9,7 @@ assignees: ''
 
 Please, answer some short questions which should help us to understand your problem / question better?
 
-- **Which image of the operator are you using?** e.g. ghcr.io/zalando/postgres-operator:v1.12.2
+- **Which image of the operator are you using?** e.g. ghcr.io/zalando/postgres-operator:v1.13.0
 - **Where do you run it - cloud or metal? Kubernetes or OpenShift?** [AWS K8s | GCP ... | Bare Metal K8s]
 - **Are you running Postgres Operator in production?** [yes | no]
 - **Type of issue?** [Bug report, question, feature request, etc.]
@@ -23,7 +23,7 @@ jobs:
       - uses: actions/setup-go@v2
         with:
-          go-version: "^1.22.5"
+          go-version: "^1.23.4"
       - name: Run unit tests
         run: make deps mocks test

@@ -14,7 +14,7 @@ jobs:
       - uses: actions/checkout@v1
       - uses: actions/setup-go@v2
         with:
-          go-version: "^1.22.5"
+          go-version: "^1.23.4"
       - name: Make dependencies
         run: make deps mocks
       - name: Code generation

@@ -14,7 +14,7 @@ jobs:
       - uses: actions/checkout@v2
       - uses: actions/setup-go@v2
         with:
-          go-version: "^1.22.5"
+          go-version: "^1.23.4"
       - name: Make dependencies
         run: make deps mocks
       - name: Compile

@@ -22,7 +22,7 @@ jobs:
       - name: Run unit tests
         run: go test -race -covermode atomic -coverprofile=coverage.out ./...
       - name: Convert coverage to lcov
-        uses: jandelgado/gcov2lcov-action@v1.0.9
+        uses: jandelgado/gcov2lcov-action@v1.1.1
       - name: Coveralls
         uses: coverallsapp/github-action@master
         with:
@@ -104,3 +104,5 @@ e2e/tls
 mocks
 
 ui/.npm/
+
+.DS_Store
Makefile (4 changed lines)
@@ -69,7 +69,7 @@ docker: ${DOCKERDIR}/${DOCKERFILE}
 	docker build --rm -t "$(IMAGE):$(TAG)$(CDP_TAG)$(DEBUG_FRESH)$(DEBUG_POSTFIX)" -f "${DOCKERDIR}/${DOCKERFILE}" --build-arg VERSION="${VERSION}" .
 
 indocker-race:
-	docker run --rm -v "${GOPATH}":"${GOPATH}" -e GOPATH="${GOPATH}" -e RACE=1 -w ${PWD} golang:1.22.5 bash -c "make linux"
+	docker run --rm -v "${GOPATH}":"${GOPATH}" -e GOPATH="${GOPATH}" -e RACE=1 -w ${PWD} golang:1.23.4 bash -c "make linux"
 
 push:
 	docker push "$(IMAGE):$(TAG)$(CDP_TAG)"

@@ -78,7 +78,7 @@ mocks:
 	GO111MODULE=on go generate ./...
 
 tools:
-	GO111MODULE=on go get -d k8s.io/client-go@kubernetes-1.28.10
+	GO111MODULE=on go get k8s.io/client-go@kubernetes-1.30.4
 	GO111MODULE=on go install github.com/golang/mock/mockgen@v1.6.0
 	GO111MODULE=on go mod tidy
README.md (14 changed lines)
@@ -28,13 +28,13 @@ pipelines with no access to Kubernetes API directly, promoting infrastructure as
 
 ### PostgreSQL features
 
-* Supports PostgreSQL 16, starting from 11+
+* Supports PostgreSQL 17, starting from 13+
 * Streaming replication cluster via Patroni
 * Point-In-Time-Recovery with
-  [pg_basebackup](https://www.postgresql.org/docs/16/app-pgbasebackup.html) /
+  [pg_basebackup](https://www.postgresql.org/docs/17/app-pgbasebackup.html) /
   [WAL-E](https://github.com/wal-e/wal-e) via [Spilo](https://github.com/zalando/spilo)
 * Preload libraries: [bg_mon](https://github.com/CyberDem0n/bg_mon),
-  [pg_stat_statements](https://www.postgresql.org/docs/16/pgstatstatements.html),
+  [pg_stat_statements](https://www.postgresql.org/docs/17/pgstatstatements.html),
   [pgextwlist](https://github.com/dimitri/pgextwlist),
   [pg_auth_mon](https://github.com/RafiaSabih/pg_auth_mon)
 * Incl. popular Postgres extensions such as

@@ -57,14 +57,12 @@ production for over five years.
 
 | Release   | Postgres versions | K8s versions      | Golang  |
 | :-------- | :---------------: | :---------------: | :-----: |
-| v1.13.0*  |     12 → 16      |       1.27+       | 1.22.5  |
-| v1.12.2   |     11 → 16      |       1.27+       | 1.22.3  |
+| v1.14.0   |     13 → 17      |       1.27+       | 1.23.4  |
+| v1.13.0   |     12 → 16      |       1.27+       | 1.22.5  |
+| v1.12.0   |     11 → 16      |       1.27+       | 1.22.3  |
 | v1.11.0   |     11 → 16      |       1.27+       | 1.21.7  |
 | v1.10.1   |     10 → 15      |       1.21+       | 1.19.8  |
 | v1.9.0    |     10 → 15      |       1.21+       | 1.18.9  |
 | v1.8.2    |    9.5 → 14      |    1.20 → 1.24    | 1.17.4  |
 
-*not yet released
-
 ## Getting started
@@ -1,7 +1,7 @@
 apiVersion: v2
 name: postgres-operator-ui
-version: 1.12.2
-appVersion: 1.12.2
+version: 1.14.0
+appVersion: 1.14.0
 home: https://github.com/zalando/postgres-operator
 description: Postgres Operator UI provides a graphical interface for a convenient database-as-a-service user experience
 keywords:
@@ -1,9 +1,55 @@
 apiVersion: v1
 entries:
   postgres-operator-ui:
+  - apiVersion: v2
+    appVersion: 1.14.0
+    created: "2024-12-23T11:26:07.721761867+01:00"
+    description: Postgres Operator UI provides a graphical interface for a convenient
+      database-as-a-service user experience
+    digest: e87ed898079a852957a67a4caf3fbd27b9098e413f5d961b7a771a6ae8b3e17c
+    home: https://github.com/zalando/postgres-operator
+    keywords:
+    - postgres
+    - operator
+    - ui
+    - cloud-native
+    - patroni
+    - spilo
+    maintainers:
+    - email: opensource@zalando.de
+      name: Zalando
+    name: postgres-operator-ui
+    sources:
+    - https://github.com/zalando/postgres-operator
+    urls:
+    - postgres-operator-ui-1.14.0.tgz
+    version: 1.14.0
+  - apiVersion: v2
+    appVersion: 1.13.0
+    created: "2024-12-23T11:26:07.719409282+01:00"
+    description: Postgres Operator UI provides a graphical interface for a convenient
+      database-as-a-service user experience
+    digest: e0444e516b50f82002d1a733527813c51759a627cefdd1005cea73659f824ea8
+    home: https://github.com/zalando/postgres-operator
+    keywords:
+    - postgres
+    - operator
+    - ui
+    - cloud-native
+    - patroni
+    - spilo
+    maintainers:
+    - email: opensource@zalando.de
+      name: Zalando
+    name: postgres-operator-ui
+    sources:
+    - https://github.com/zalando/postgres-operator
+    urls:
+    - postgres-operator-ui-1.13.0.tgz
+    version: 1.13.0
   - apiVersion: v2
     appVersion: 1.12.2
-    created: "2024-06-14T10:31:52.852963015+02:00"
+    created: "2024-12-23T11:26:07.717202918+01:00"
     description: Postgres Operator UI provides a graphical interface for a convenient
       database-as-a-service user experience
     digest: cbcef400c23ccece27d97369ad629278265c013e0a45c0b7f33e7568a082fedd

@@ -26,7 +72,7 @@ entries:
     version: 1.12.2
   - apiVersion: v2
     appVersion: 1.11.0
-    created: "2024-06-14T10:31:52.849576888+02:00"
+    created: "2024-12-23T11:26:07.714792146+01:00"
     description: Postgres Operator UI provides a graphical interface for a convenient
       database-as-a-service user experience
     digest: a45f2284045c2a9a79750a36997386444f39b01ac722b17c84b431457577a3a2

@@ -49,7 +95,7 @@ entries:
     version: 1.11.0
   - apiVersion: v2
     appVersion: 1.10.1
-    created: "2024-06-14T10:31:52.843219526+02:00"
+    created: "2024-12-23T11:26:07.712194397+01:00"
     description: Postgres Operator UI provides a graphical interface for a convenient
       database-as-a-service user experience
     digest: 2e5e7a82aebee519ec57c6243eb8735124aa4585a3a19c66ffd69638fbeb11ce

@@ -72,7 +118,7 @@ entries:
     version: 1.10.1
   - apiVersion: v2
     appVersion: 1.9.0
-    created: "2024-06-14T10:31:52.857573553+02:00"
+    created: "2024-12-23T11:26:07.723891496+01:00"
     description: Postgres Operator UI provides a graphical interface for a convenient
       database-as-a-service user experience
     digest: df434af6c8b697fe0631017ecc25e3c79e125361ae6622347cea41a545153bdc

@@ -93,27 +139,4 @@ entries:
     urls:
     - postgres-operator-ui-1.9.0.tgz
     version: 1.9.0
-  - apiVersion: v2
-    appVersion: 1.8.2
-    created: "2024-06-14T10:31:52.855335455+02:00"
-    description: Postgres Operator UI provides a graphical interface for a convenient
-      database-as-a-service user experience
-    digest: fbfc90fa8fd007a08a7c02e0ec9108bb8282cbb42b8c976d88f2193d6edff30c
-    home: https://github.com/zalando/postgres-operator
-    keywords:
-    - postgres
-    - operator
-    - ui
-    - cloud-native
-    - patroni
-    - spilo
-    maintainers:
-    - email: opensource@zalando.de
-      name: Zalando
-    name: postgres-operator-ui
-    sources:
-    - https://github.com/zalando/postgres-operator
-    urls:
-    - postgres-operator-ui-1.8.2.tgz
-    version: 1.8.2
-generated: "2024-06-14T10:31:52.839113675+02:00"
+generated: "2024-12-23T11:26:07.709192608+01:00"

Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -9,7 +9,7 @@ metadata:
   name: {{ template "postgres-operator-ui.fullname" . }}
   namespace: {{ .Release.Namespace }}
 spec:
-  replicas: 1
+  replicas: {{ .Values.replicaCount }}
   selector:
     matchLabels:
       app.kubernetes.io/name: {{ template "postgres-operator-ui.name" . }}

@@ -84,11 +84,11 @@ spec:
               "limit_iops": 16000,
               "limit_throughput": 1000,
               "postgresql_versions": [
+                "17",
                 "16",
                 "15",
                 "14",
-                "13",
-                "12"
+                "13"
               ]
             }
 {{- if .Values.extraEnvs }}

@@ -102,4 +102,4 @@ spec:
     {{ toYaml .Values.tolerations | indent 8 }}
 {{- if .Values.priorityClassName }}
   priorityClassName: {{ .Values.priorityClassName }}
 {{- end }}
-{{- end }}
\ No newline at end of file
+{{- end }}
@@ -8,7 +8,7 @@ replicaCount: 1
 image:
   registry: ghcr.io
   repository: zalando/postgres-operator-ui
-  tag: v1.12.2
+  tag: v1.14.0
   pullPolicy: "IfNotPresent"
 
 # Optionally specify an array of imagePullSecrets.

@@ -62,8 +62,6 @@ podAnnotations:
 extraEnvs:
   []
   # Example of settings to make snapshot view working in the ui when using AWS
   # - name: WALE_S3_ENDPOINT
   #   value: https+path://s3.us-east-1.amazonaws.com:443
   # - name: SPILO_S3_BACKUP_PREFIX
   #   value: spilo/
   # - name: AWS_ACCESS_KEY_ID

@@ -83,8 +81,6 @@ extraEnvs:
   #       key: AWS_DEFAULT_REGION
   # - name: SPILO_S3_BACKUP_BUCKET
   #   value: <s3 bucket used by the operator>
   # - name: "USE_AWS_INSTANCE_PROFILE"
   #   value: "true"
 
 # configure UI service
 service:
@@ -1,7 +1,7 @@
 apiVersion: v2
 name: postgres-operator
-version: 1.12.2
-appVersion: 1.12.2
+version: 1.14.0
+appVersion: 1.14.0
 home: https://github.com/zalando/postgres-operator
 description: Postgres Operator creates and manages PostgreSQL clusters running in Kubernetes
 keywords:
@@ -68,7 +68,7 @@ spec:
                 type: string
               docker_image:
                 type: string
-                default: "ghcr.io/zalando/spilo-16:3.2-p3"
+                default: "ghcr.io/zalando/spilo-17:4.0-p2"
               enable_crd_registration:
                 type: boolean
                 default: true

@@ -160,17 +160,17 @@ spec:
             properties:
               major_version_upgrade_mode:
                 type: string
-                default: "off"
+                default: "manual"
               major_version_upgrade_team_allow_list:
                 type: array
                 items:
                   type: string
               minimal_major_version:
                 type: string
-                default: "12"
+                default: "13"
               target_major_version:
                 type: string
-                default: "16"
+                default: "17"
           kubernetes:
             type: object
             properties:

@@ -211,9 +211,9 @@ spec:
               enable_init_containers:
                 type: boolean
                 default: true
-              enable_secrets_deletion:
+              enable_owner_references:
                 type: boolean
-                default: true
+                default: false
               enable_persistent_volume_claim_deletion:
                 type: boolean
                 default: true

@@ -226,6 +226,9 @@ spec:
               enable_readiness_probe:
                 type: boolean
                 default: false
+              enable_secrets_deletion:
+                type: boolean
+                default: true
               enable_sidecars:
                 type: boolean
                 default: true
@@ -373,28 +376,28 @@ spec:
             properties:
               default_cpu_limit:
                 type: string
-                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
               default_cpu_request:
                 type: string
-                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
               default_memory_limit:
                 type: string
-                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
               default_memory_request:
                 type: string
-                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
               max_cpu_request:
                 type: string
-                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
               max_memory_request:
                 type: string
-                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
               min_cpu_limit:
                 type: string
-                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
               min_memory_limit:
                 type: string
-                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
           timeouts:
             type: object
            properties:
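
The `|^$` alternation appended to each pattern above means an explicitly empty string now passes CRD validation, so a resource default can be unset instead of forced to a value. A minimal sketch of a CRD-based configuration that relies on this (field names come from the hunk above; the values themselves are illustrative):

```yaml
# Sketch: with the relaxed patterns an empty string is valid,
# which effectively disables that particular default.
configuration:
  postgres_pod_resources:
    default_cpu_limit: ""          # no CPU limit default enforced
    default_memory_limit: "500Mi"  # memory default kept
```
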
@@ -469,7 +472,6 @@ spec:
                 type: string
               additional_secret_mount_path:
                 type: string
-                default: "/meta/credentials"
               aws_region:
                 type: string
                 default: "eu-central-1"

@@ -508,7 +510,7 @@ spec:
                 pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
               logical_backup_docker_image:
                 type: string
-                default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.12.2"
+                default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.13.0"
               logical_backup_google_application_credentials:
                 type: string
               logical_backup_job_prefix:
@@ -226,7 +226,7 @@ spec:
                 type: array
                 items:
                   type: string
-                  pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$'
+                  pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$'
               masterServiceAnnotations:
                 type: object
                 additionalProperties:
@@ -375,12 +375,11 @@ spec:
               version:
                 type: string
                 enum:
-                  - "11"
                   - "12"
                   - "13"
                   - "14"
                   - "15"
                   - "16"
+                  - "17"
               parameters:
                 type: object
                 additionalProperties:
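
With `"11"` dropped from the enum and `"17"` added, new manifests can only request majors 12 through 17. A minimal manifest sketch (cluster name, team and size are illustrative):

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  numberOfInstances: 2
  postgresql:
    version: "17"   # now accepted; "11" would be rejected by the CRD
```
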
@@ -515,6 +514,9 @@ spec:
                   type: string
                 batchSize:
                   type: integer
+                cpu:
+                  type: string
+                  pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
                 database:
                   type: string
                 enableRecovery:

@@ -523,6 +525,9 @@ spec:
                   type: object
                   additionalProperties:
                     type: string
+                memory:
+                  type: string
+                  pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
                 tables:
                   type: object
                   additionalProperties:

@@ -534,6 +539,8 @@ spec:
                       type: string
                     idColumn:
                       type: string
+                    ignoreRecovery:
+                      type: boolean
                     payloadColumn:
                       type: string
                     recoveryEventType:
@@ -1,9 +1,53 @@
 apiVersion: v1
 entries:
   postgres-operator:
+  - apiVersion: v2
+    appVersion: 1.14.0
+    created: "2024-12-23T11:25:32.596716566+01:00"
+    description: Postgres Operator creates and manages PostgreSQL clusters running
+      in Kubernetes
+    digest: 36e1571f3f455b213f16cdda7b1158648e8e84deb804ba47ed6b9b6d19263ba8
+    home: https://github.com/zalando/postgres-operator
+    keywords:
+    - postgres
+    - operator
+    - cloud-native
+    - patroni
+    - spilo
+    maintainers:
+    - email: opensource@zalando.de
+      name: Zalando
+    name: postgres-operator
+    sources:
+    - https://github.com/zalando/postgres-operator
+    urls:
+    - postgres-operator-1.14.0.tgz
+    version: 1.14.0
+  - apiVersion: v2
+    appVersion: 1.13.0
+    created: "2024-12-23T11:25:32.591136261+01:00"
+    description: Postgres Operator creates and manages PostgreSQL clusters running
+      in Kubernetes
+    digest: a839601689aea0a7e6bc0712a5244d435683cf3314c95794097ff08540e1dfef
+    home: https://github.com/zalando/postgres-operator
+    keywords:
+    - postgres
+    - operator
+    - cloud-native
+    - patroni
+    - spilo
+    maintainers:
+    - email: opensource@zalando.de
+      name: Zalando
+    name: postgres-operator
+    sources:
+    - https://github.com/zalando/postgres-operator
+    urls:
+    - postgres-operator-1.13.0.tgz
+    version: 1.13.0
   - apiVersion: v2
     appVersion: 1.12.2
-    created: "2024-06-14T10:30:44.071387784+02:00"
+    created: "2024-12-23T11:25:32.585419709+01:00"
     description: Postgres Operator creates and manages PostgreSQL clusters running
       in Kubernetes
     digest: 65858d14a40d7fd90c32bd9fc60021acc9555c161079f43a365c70171eaf21d8

@@ -25,7 +69,7 @@ entries:
     version: 1.12.2
   - apiVersion: v2
     appVersion: 1.11.0
-    created: "2024-06-14T10:30:44.065353504+02:00"
+    created: "2024-12-23T11:25:32.580077286+01:00"
     description: Postgres Operator creates and manages PostgreSQL clusters running
       in Kubernetes
     digest: 3914b5e117bda0834f05c9207f007e2ac372864cf6e86dcc2e1362bbe46c14d9

@@ -47,7 +91,7 @@ entries:
     version: 1.11.0
   - apiVersion: v2
     appVersion: 1.10.1
-    created: "2024-06-14T10:30:44.059080224+02:00"
+    created: "2024-12-23T11:25:32.574641578+01:00"
     description: Postgres Operator creates and manages PostgreSQL clusters running
       in Kubernetes
     digest: cc3baa41753da92466223d0b334df27e79c882296577b404a8e9071411fcf19c

@@ -69,7 +113,7 @@ entries:
     version: 1.10.1
   - apiVersion: v2
     appVersion: 1.9.0
-    created: "2024-06-14T10:30:44.084760658+02:00"
+    created: "2024-12-23T11:25:32.604748814+01:00"
     description: Postgres Operator creates and manages PostgreSQL clusters running
       in Kubernetes
     digest: 64df90c898ca591eb3a330328173ffaadfbf9ddd474d8c42ed143edc9e3f4276

@@ -89,26 +133,4 @@ entries:
     urls:
     - postgres-operator-1.9.0.tgz
     version: 1.9.0
-  - apiVersion: v2
-    appVersion: 1.8.2
-    created: "2024-06-14T10:30:44.077744166+02:00"
-    description: Postgres Operator creates and manages PostgreSQL clusters running
-      in Kubernetes
-    digest: f77ffad2e98b72a621e5527015cf607935d3ed688f10ba4b626435acb9631b5b
-    home: https://github.com/zalando/postgres-operator
-    keywords:
-    - postgres
-    - operator
-    - cloud-native
-    - patroni
-    - spilo
-    maintainers:
-    - email: opensource@zalando.de
-      name: Zalando
-    name: postgres-operator
-    sources:
-    - https://github.com/zalando/postgres-operator
-    urls:
-    - postgres-operator-1.8.2.tgz
-    version: 1.8.2
-generated: "2024-06-14T10:30:44.052436544+02:00"
+generated: "2024-12-23T11:25:32.568598763+01:00"

Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -120,6 +120,7 @@ rules:
   - create
   - delete
   - get
+  - patch
   - update
 # to check nodes for node readiness label
 - apiGroups:

@@ -139,8 +140,8 @@ rules:
   - delete
   - get
   - list
-{{- if toString .Values.configKubernetes.storage_resize_mode | eq "pvc" }}
   - patch
+{{- if or (toString .Values.configKubernetes.storage_resize_mode | eq "pvc") (toString .Values.configKubernetes.storage_resize_mode | eq "mixed") }}
   - update
 {{- end }}
 # to read existing PVs. Creation should be done via dynamic provisioning
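
The reworked condition grants `patch` unconditionally and extends `update` to the `mixed` resize mode as well as `pvc`. A sketch of the rule this template might render for `storage_resize_mode: mixed` (the surrounding `apiGroups`/`resources` fields are assumed from a standard PVC rule and are not shown in the hunk):

```yaml
# Assumed rendering for configKubernetes.storage_resize_mode=mixed
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  verbs:
  - delete
  - get
  - list
  - patch
  - update  # emitted for "pvc" and now also "mixed"
```
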
@@ -196,6 +197,7 @@ rules:
   - get
   - list
+  - patch
   - update
 # to CRUD cron jobs for logical backups
 - apiGroups:
   - batch
@@ -52,6 +52,9 @@ spec:
         {{- if .Values.controllerID.create }}
         - name: CONTROLLER_ID
           value: {{ template "postgres-operator.controllerID" . }}
         {{- end }}
+        {{- if .Values.extraEnvs }}
+{{ toYaml .Values.extraEnvs | indent 8 }}
+        {{- end }}
         resources:
 {{ toYaml .Values.resources | indent 10 }}
@@ -1,7 +1,7 @@
 image:
   registry: ghcr.io
   repository: zalando/postgres-operator
-  tag: v1.12.2
+  tag: v1.14.0
   pullPolicy: "IfNotPresent"
 
 # Optionally specify an array of imagePullSecrets.

@@ -38,7 +38,7 @@ configGeneral:
   # etcd connection string for Patroni. Empty uses K8s-native DCS.
   etcd_host: ""
   # Spilo docker image
-  docker_image: ghcr.io/zalando/spilo-16:3.2-p3
+  docker_image: ghcr.io/zalando/spilo-17:4.0-p2
 
   # key name for annotation to ignore globally configured instance limits
   # ignore_instance_limits_annotation_key: ""
@@ -83,15 +83,15 @@ configUsers:
 
 configMajorVersionUpgrade:
   # "off": no upgrade, "manual": manifest triggers action, "full": minimal version violation triggers too
-  major_version_upgrade_mode: "off"
+  major_version_upgrade_mode: "manual"
   # upgrades will only be carried out for clusters of listed teams when mode is "off"
   # major_version_upgrade_team_allow_list:
   # - acid
 
   # minimal Postgres major version that will not automatically be upgraded
-  minimal_major_version: "12"
+  minimal_major_version: "13"
   # target Postgres major version when upgrading clusters automatically
-  target_major_version: "16"
+  target_major_version: "17"
 
 configKubernetes:
   # list of additional capabilities for postgres container
@@ -129,8 +129,8 @@ configKubernetes:
   enable_finalizers: false
   # enables initContainers to run actions before Spilo is started
   enable_init_containers: true
-  # toggles if operator should delete secrets on cluster deletion
-  enable_secrets_deletion: true
+  # toggles if child resources should have an owner reference to the postgresql CR
+  enable_owner_references: false
   # toggles if operator should delete PVCs on cluster deletion
   enable_persistent_volume_claim_deletion: true
   # toggles pod anti affinity on the Postgres pods

@@ -139,6 +139,8 @@ configKubernetes:
   enable_pod_disruption_budget: true
   # toggles readiness probe for database pods
   enable_readiness_probe: false
+  # toggles if operator should delete secrets on cluster deletion
+  enable_secrets_deletion: true
   # enables sidecar containers to run alongside Spilo in the same pod
   enable_sidecars: true
@@ -362,7 +364,7 @@ configLogicalBackup:
   # logical_backup_memory_request: ""
 
   # image for pods of the logical backup job (example runs pg_dumpall)
-  logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.12.2"
+  logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.14.0"
   # path of google cloud service account json file
   # logical_backup_google_application_credentials: ""

@@ -478,7 +480,7 @@ priorityClassName: ""
 # priority class for database pods
 podPriorityClassName:
   # If create is false with no name set, no podPriorityClassName is specified.
-  # Hence, the pod priorityClass is the one with globalDefault set.
+  # Hence, the pod priorityClass is the one with globalDefault set.
   # If there is no PriorityClass with globalDefault set, the priority of Pods with no priorityClassName is zero.
   create: true
   # If not set a name is generated using the fullname template and "-pod" suffix
@@ -504,6 +506,24 @@ readinessProbe:
   initialDelaySeconds: 5
   periodSeconds: 10
 
+# configure extra environment variables
+# Extra environment variables are written in kubernetes format and added "as is" to the pod's env variables
+# https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
+# https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables
+extraEnvs:
+  []
+  # Example of setting the maximum amount of memory / cpu that can be used by the go process (to match resources.limits)
+  # - name: MY_VAR
+  #   value: my-value
+  # - name: GOMAXPROCS
+  #   valueFrom:
+  #     resourceFieldRef:
+  #       resource: limits.cpu
+  # - name: GOMEMLIMIT
+  #   valueFrom:
+  #     resourceFieldRef:
+  #       resource: limits.memory
+
 # Affinity for pod assignment
 # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
 affinity: {}
@@ -35,6 +35,8 @@ func init() {
 	flag.BoolVar(&outOfCluster, "outofcluster", false, "Whether the operator runs in- or outside of the Kubernetes cluster.")
 	flag.BoolVar(&config.NoDatabaseAccess, "nodatabaseaccess", false, "Disable all access to the database from the operator side.")
 	flag.BoolVar(&config.NoTeamsAPI, "noteamsapi", false, "Disable all access to the teams API")
+	flag.IntVar(&config.KubeQPS, "kubeqps", 10, "Kubernetes api requests per second.")
+	flag.IntVar(&config.KubeBurst, "kubeburst", 20, "Kubernetes api requests burst limit.")
 	flag.Parse()
 
 	config.EnableJsonLogging = os.Getenv("ENABLE_JSON_LOGGING") == "true"

@@ -83,6 +85,9 @@ func main() {
 		log.Fatalf("couldn't get REST config: %v", err)
 	}
 
+	config.RestConfig.QPS = float32(config.KubeQPS)
+	config.RestConfig.Burst = config.KubeBurst
+
 	c := controller.NewController(&config, "")
 
 	c.Run(stop, wg)
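
The new `KubeQPS` and `KubeBurst` values feed client-go's request throttling (`rest.Config.QPS`/`Burst`), so raising them lets a busy operator talk to the API server faster. A sketch of overriding the defaults via container args (the flag names come from the hunk above; wiring them through the Deployment spec is an assumption, the chart may not expose these flags):

```yaml
# Sketch: pass the new flags to the operator binary (illustrative)
containers:
- name: postgres-operator
  image: ghcr.io/zalando/postgres-operator:v1.14.0
  args:
  - -kubeqps=20    # default 10
  - -kubeburst=40  # default 20
```
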
@@ -1,4 +1,4 @@
-FROM golang:1.22-alpine
+FROM golang:1.23-alpine
 LABEL maintainer="Team ACID @ Zalando <team-acid@zalando.de>"
 
 # We need root certificates to deal with teams api over https

@@ -1,5 +1,5 @@
 ARG BASE_IMAGE=registry.opensource.zalan.do/library/alpine-3:latest
-FROM golang:1.22-alpine AS builder
+FROM golang:1.23-alpine AS builder
 ARG VERSION=latest
 
 COPY . /go/src/github.com/zalando/postgres-operator

@@ -13,7 +13,7 @@ apt-get install -y wget
 
 (
   cd /tmp
-  wget -q "https://storage.googleapis.com/golang/go1.22.5.linux-${arch}.tar.gz" -O go.tar.gz
+  wget -q "https://storage.googleapis.com/golang/go1.23.4.linux-${arch}.tar.gz" -O go.tar.gz
   tar -xf go.tar.gz
   mv go /usr/local
   ln -s /usr/local/go/bin/go /usr/bin/go
@@ -63,14 +63,17 @@ the `PGVERSION` environment variable is set for the database pods. Since
 `v1.6.0` the related option `enable_pgversion_env_var` is enabled by default.
 
 In-place major version upgrades can be configured to be executed by the
-operator with the `major_version_upgrade_mode` option. By default it is set
-to `off` which means the cluster version will not change when increased in
-the manifest. Still, a rolling update would be triggered updating the
-`PGVERSION` variable. But Spilo's [`configure_spilo`](https://github.com/zalando/spilo/blob/master/postgres-appliance/scripts/configure_spilo.py)
-script will notice the version mismatch and start the old version again.
+operator with the `major_version_upgrade_mode` option. By default, it is
+enabled (mode: `manual`). In any case, altering the version in the manifest
+will trigger a rolling update of pods to update the `PGVERSION` env variable.
+Spilo's [`configure_spilo`](https://github.com/zalando/spilo/blob/master/postgres-appliance/scripts/configure_spilo.py)
+script will notice the version mismatch but start the current version again.
 
-In this scenario the major version could then be run by a user from within the
-master pod. Exec into the container and run:
+Next, the operator would call an upgrade script inside Spilo. When automatic
+upgrades are disabled (mode: `off`) the upgrade could still be run by a user
+from within the primary pod. This gives you full control over the point in
+time when the upgrade can be started (check also maintenance windows below).
+Exec into the container and run:
 ```bash
 python3 /scripts/inplace_upgrade.py N
 ```

@@ -79,8 +82,32 @@ The upgrade is usually fast, well under one minute for most DBs. Note, that
 changes become irreversible once `pg_upgrade` is called. To understand the
 upgrade procedure, refer to the [corresponding PR in Spilo](https://github.com/zalando/spilo/pull/488).
 
+When `major_version_upgrade_mode` is set to `manual` the operator will run
+the upgrade script for you after the manifest is updated and pods are rotated.
+When `major_version_upgrade_mode` is set to `full` the operator will compare
+the version in the manifest with the configured `minimal_major_version`. If it
+is lower the operator would start an automatic upgrade as described above. The
+configured `target_major_version` will be used as the new version. This option
+can be useful if you have to get rid of outdated major versions in your fleet.
+Please note, that the operator does not patch the version in the manifest.
+Thus, the `full` mode can create drift between desired and actual state.
+
+### Upgrade during maintenance windows
+
+When `maintenanceWindows` are defined in the Postgres manifest the operator
+will trigger a major version upgrade only during these periods. Make sure they
+are at least twice as long as your configured `resync_period` to guarantee
+that operator actions can be triggered.
+
+### Upgrade annotations
+
+When an upgrade is executed, the operator sets an annotation in the PostgreSQL
+resource, either `last-major-upgrade-success` if the upgrade succeeds, or
+`last-major-upgrade-failure` if it fails. The value of the annotation is a
+timestamp indicating when the upgrade occurred.
+
+If a PostgreSQL resource contains a failure annotation, the operator will not
+attempt to retry the upgrade during a sync event. To remove the failure
+annotation, you can revert the PostgreSQL version back to the current version.
+This action will trigger the removal of the failure annotation.
+
 ## Non-default cluster domain
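
Pulling the documentation added above together, a sketch of a CRD-based configuration that upgrades lagging clusters automatically; per-cluster `maintenanceWindows` then confine when the upgrade may run (all values illustrative):

```yaml
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
  name: postgresql-operator-configuration
configuration:
  major_version_upgrade:
    major_version_upgrade_mode: "full"  # upgrade clusters below the minimal version
    minimal_major_version: "13"
    target_major_version: "17"
```

Under this sketch, a cluster manifest still pinned to, say, version "12" would be upgraded to "17" on sync; remember that the operator does not patch the manifest itself, so the `full` mode can leave desired and actual state diverged.
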
@@ -223,9 +250,9 @@ configuration:
 
 Now, every cluster manifest must contain the configured annotation keys to
 trigger the delete process when running `kubectl delete pg`. Note, that the
-`Postgresql` resource would still get deleted as K8s' API server does not
-block it. Only the operator logs will tell, that the delete criteria wasn't
-met.
+`Postgresql` resource would still get deleted because the operator does not
+instruct K8s' API server to block it. Only the operator logs will tell, that
+the delete criteria was not met.
 
 **cluster manifest**
 

@@ -243,11 +270,64 @@ spec:
 
 In case, the resource has been deleted accidentally or the annotations were
 simply forgotten, it's safe to recreate the cluster with `kubectl create`.
-Existing Postgres cluster are not replaced by the operator. But, as the
-original cluster still exists the status will show `CreateFailed` at first.
-On the next sync event it should change to `Running`. However, as it is in
-fact a new resource for K8s, the UID will differ which can trigger a rolling
-update of the pods because the UID is used as part of backup path to S3.
+Existing Postgres clusters are not replaced by the operator. But, when the
+original cluster still exists the status will be `CreateFailed` at first. On
+the next sync event it should change to `Running`. However, because it is in
+fact a new resource for K8s, the UID and therefore, the backup path to S3,
+will differ and trigger a rolling update of the pods.
+
+## Owner References and Finalizers
+
+The Postgres Operator can set [owner references](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) to most of a cluster's child resources to improve
+monitoring with GitOps tools and enable cascading deletes. There are two
+exceptions:
+
+* Persistent Volume Claims, because they are handled by the [PV Reclaim Policy](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/) of the Stateful Set
+* Cross-namespace secrets, because owner references are not allowed across namespaces by design
+
+The operator would clean these resources up with its regular delete loop
+unless they got synced correctly. If for some reason the initial cluster sync
+fails, e.g. after a cluster creation or operator restart, a deletion of the
+cluster manifest might leave orphaned resources behind which the user has to
+clean up manually.
+
+Another option is to enable finalizers which first ensures the deletion of all
+child resources before the cluster manifest gets removed. There is a trade-off
+though: The deletion is only performed after the next two operator SYNC cycles
+with the first one setting a `deletionTimestamp` and the latter reacting to it.
+The final removal of the custom resource will add a DELETE event to the worker
+queue but the child resources are already gone at this point. If you do not
+desire this behavior consider enabling owner references instead.
+
+**postgres-operator ConfigMap**
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: postgres-operator
+data:
+  enable_finalizers: "false"
+  enable_owner_references: "true"
+```
+
+**OperatorConfiguration**
+
+```yaml
+apiVersion: "acid.zalan.do/v1"
+kind: OperatorConfiguration
+metadata:
+  name: postgresql-operator-configuration
+configuration:
+  kubernetes:
+    enable_finalizers: false
+    enable_owner_references: true
+```
+
+:warning: Please note, both options are disabled by default. When enabling owner
+references the operator cannot block cascading deletes, even when the [delete protection annotations](administrator.md#delete-protection-via-annotations)
+are in place. You would need a K8s admission controller that blocks the actual
+`kubectl delete` API call e.g. based on existing annotations.
 
 ## Role-based access control for the operator
@@ -304,7 +384,7 @@ exceptions:
 The interval of days can be set with `password_rotation_interval` (default
 `90` = 90 days, minimum 1). On each rotation the user name and password values
 are replaced in the K8s secret. They belong to a newly created user named after
-the original role plus rotation date in YYMMDD format. All priviliges are
+the original role plus rotation date in YYMMDD format. All privileges are
 inherited meaning that migration scripts should still grant and revoke rights
 against the original role. The timestamp of the next rotation (in RFC 3339
 format, UTC timezone) is written to the secret as well. Note, if the rotation

@@ -484,7 +564,7 @@ manifest affinity.
 ```
 
 If `node_readiness_label_merge` is set to `"OR"` (default) the readiness label
-affinty will be appended with its own expressions block:
+affinity will be appended with its own expressions block:
 
 ```yaml
 affinity:
@@ -540,22 +620,34 @@ By default the topology key for the pod anti affinity is set to
 `kubernetes.io/hostname`, you can set another topology key e.g.
 `failure-domain.beta.kubernetes.io/zone`. See [built-in node labels](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#interlude-built-in-node-labels) for available topology keys.
 
-## Pod Disruption Budget
+## Pod Disruption Budgets
 
-By default the operator uses a PodDisruptionBudget (PDB) to protect the cluster
-from voluntarily disruptions and hence unwanted DB downtime. The `MinAvailable`
-parameter of the PDB is set to `1` which prevents killing masters in single-node
-clusters and/or the last remaining running instance in a multi-node cluster.
+By default the operator creates two PodDisruptionBudgets (PDB) to protect the cluster
+from voluntary disruptions and hence unwanted DB downtime: a so-called primary PDB
+and a PDB for critical operations.
+
+### Primary PDB
+
+The `MinAvailable` parameter of this PDB is set to `1` and, if `pdb_master_label_selector`
+is enabled, the label selector includes the `spilo-role=master` condition, which prevents
+killing masters in single-node clusters and/or the last remaining running instance in a
+multi-node cluster.
+
+### PDB for critical operations
+
+The `MinAvailable` parameter of this PDB is equal to the `numberOfInstances` set in the
+cluster manifest, while the label selector includes the `critical-operation=true` condition.
+This allows to protect all pods of a cluster, given they are labeled accordingly.
+For example, the operator labels all Spilo pods with `critical-operation=true` during a
+major version upgrade run. You may want to protect cluster pods during other critical
+operations by assigning the label to pods yourself or using other means of automation.
 
 The PDB is only relaxed in two scenarios:
 
 * If a cluster is scaled down to `0` instances (e.g. for draining nodes)
 * If the PDB is disabled in the configuration (`enable_pod_disruption_budget`)
 
-The PDB is still in place having `MinAvailable` set to `0`. If enabled it will
-be automatically set to `1` on scale up. Disabling PDBs helps avoiding blocking
-Kubernetes upgrades in managed K8s environments at the cost of prolonged DB
-downtime. See PR [#384](https://github.com/zalando/postgres-operator/pull/384)
+The PDBs are still in place having `MinAvailable` set to `0`. Disabling PDBs
+helps avoiding blocking Kubernetes upgrades in managed K8s environments at the
+cost of prolonged DB downtime. See PR [#384](https://github.com/zalando/postgres-operator/pull/384)
 for the use case.
 
 ## Add cluster-specific labels
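
To make the two budgets concrete, a sketch of what the operator's generated objects might look like for a 3-instance cluster. The selectors follow the prose above; the object names are illustrative (only the primary PDB's `postgres-{cluster}-pdb` template is documented):

```yaml
# Sketch: primary PDB protecting the master pod
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-acid-minimal-cluster-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      spilo-role: master   # added unless pdb_master_label_selector=false
---
# Sketch: PDB guarding pods labeled for critical operations
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: acid-minimal-cluster-critical-op-pdb   # name illustrative
spec:
  minAvailable: 3   # equals numberOfInstances
  selector:
    matchLabels:
      critical-operation: "true"
```
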
@@ -1048,7 +1140,7 @@ metadata:
     iam.gke.io/gcp-service-account: <GCP_SERVICE_ACCOUNT_NAME>@<GCP_PROJECT_ID>.iam.gserviceaccount.com
 ```
 
-2. Specify the new custom service account in your [operator paramaters](./reference/operator_parameters.md)
+2. Specify the new custom service account in your [operator parameters](./reference/operator_parameters.md)
 
 If using manual deployment or kustomize, this is done by setting
 `pod_service_account_name` in your configuration file specified in the

@@ -1217,7 +1309,7 @@ aws_or_gcp:
 
 If cluster members have to be (re)initialized restoring physical backups
 happens automatically either from the backup location or by running
-[pg_basebackup](https://www.postgresql.org/docs/16/app-pgbasebackup.html)
+[pg_basebackup](https://www.postgresql.org/docs/17/app-pgbasebackup.html)
 on one of the other running instances (preferably replicas if they do not lag
 behind). You can test restoring backups by [cloning](user.md#how-to-clone-an-existing-postgresql-cluster)
 clusters.

@@ -1325,6 +1417,10 @@ configuration:
         volumeMounts:
         - mountPath: /custom-pgdata-mountpoint
           name: pgdata
+        env:
+        - name: "ENV_VAR_NAME"
+          value: "any-k8s-env-things"
+        command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
       - ...
 ```
@@ -1399,7 +1495,7 @@ make docker
 
 # build in image in minikube docker env
 eval $(minikube docker-env)
-docker build -t ghcr.io/zalando/postgres-operator-ui:v1.12.2 .
+docker build -t ghcr.io/zalando/postgres-operator-ui:v1.13.0 .
 
 # apply UI manifests next to a running Postgres Operator
 kubectl apply -f manifests/

@@ -186,7 +186,7 @@ go get -u github.com/derekparker/delve/cmd/dlv
 
 ```
 RUN apk --no-cache add go git musl-dev
-RUN go get -d github.com/derekparker/delve/cmd/dlv
+RUN go get github.com/derekparker/delve/cmd/dlv
 ```
 
 * Update the `Makefile` to build the project with debugging symbols. For that

@@ -230,7 +230,7 @@ kubectl delete postgresql acid-minimal-cluster
 ```
 
 This should remove the associated StatefulSet, database Pods, Services and
-Endpoints. The PersistentVolumes are released and the PodDisruptionBudget is
+Endpoints. The PersistentVolumes are released and the PodDisruptionBudgets are
 deleted. Secrets however are not deleted and backups will remain in place.
 
 When deleting a cluster while it is still starting up or got stuck during that
@@ -114,6 +114,12 @@ These parameters are grouped directly under the `spec` key in the manifest.
   this parameter. Optional, when empty the load balancer service becomes
   inaccessible from outside of the Kubernetes cluster.
 
+* **maintenanceWindows**
+  a list which defines specific time frames when certain maintenance operations,
+  such as automatic major upgrades or master pod migration, are allowed. Accepted
+  formats are "01:00-06:00" for daily maintenance windows or "Sat:00:00-04:00"
+  for specific days, with all times in UTC.
+
 * **users**
   a map of usernames to user flags for the users that should be created in the
   cluster by the operator. User flags are a list, allowed elements are
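
For the new `maintenanceWindows` key described in the hunk above, a manifest sketch (the times are illustrative; values are quoted so YAML does not misparse the colons):

```yaml
spec:
  maintenanceWindows:
  - "01:00-06:00"        # every day, UTC
  - "Sat:00:00-04:00"    # Saturdays only
```
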
@@ -241,7 +247,7 @@ These parameters are grouped directly under the `spec` key in the manifest.
   [kubernetes volumeSource](https://godoc.org/k8s.io/api/core/v1#VolumeSource).
   It allows you to mount existing PersistentVolumeClaims, ConfigMaps and Secrets inside the StatefulSet.
   Also an `emptyDir` volume can be shared between initContainer and statefulSet.
-  Additionaly, you can provide a `SubPath` for volume mount (a file in a configMap source volume, for example).
+  Additionally, you can provide a `SubPath` for volume mount (a file in a configMap source volume, for example).
   Set `isSubPathExpr` to true if you want to include [API environment variables](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment).
   You can also specify in which container the additional Volumes will be mounted with the `targetContainers` array option.
   If `targetContainers` is empty, additional volumes will be mounted only in the `postgres` container.

@@ -251,7 +257,7 @@ These parameters are grouped directly under the `spec` key in the manifest.
 ## Prepared Databases
 
 The operator can create databases with default owner, reader and writer roles
-without the need to specifiy them under `users` or `databases` sections. Those
+without the need to specify them under `users` or `databases` sections. Those
 parameters are grouped under the `preparedDatabases` top-level key. For more
 information, see [user docs](../user.md#prepared-databases-with-roles-and-default-privileges).

@@ -632,7 +638,7 @@ the global configuration before adding the `tls` section'.
 ## Change data capture streams
 
 This section enables change data capture (CDC) streams via Postgres'
-[logical decoding](https://www.postgresql.org/docs/16/logicaldecoding.html)
+[logical decoding](https://www.postgresql.org/docs/17/logicaldecoding.html)
 feature and `pgoutput` plugin. While the Postgres operator takes responsibility
 for providing the setup to publish change events, it relies on external tools
 to consume them. At Zalando, we are using a workflow based on
@@ -646,11 +652,11 @@ can have the following properties:
 
 * **applicationId**
   The application name to which the database and CDC belong. For each
-  set of streams with a distinct `applicationId` a separate stream CR as well
-  as a separate logical replication slot will be created. This means there can
-  be different streams in the same database and streams with the same
-  `applicationId` are bundled in one stream CR. The stream CR will be called
-  like the Postgres cluster plus "-<applicationId>" suffix. Required.
+  set of streams with a distinct `applicationId` a separate stream resource as
+  well as a separate logical replication slot will be created. This means there
+  can be different streams in the same database and streams with the same
+  `applicationId` are bundled in one stream resource. The stream resource will
+  be called like the Postgres cluster plus "-<applicationId>" suffix. Required.
 
 * **database**
   Name of the database from where events will be published via Postgres'
@@ -661,21 +667,37 @@ can have the following properties:
 
 * **tables**
   Defines a map of table names and their properties (`eventType`, `idColumn`
-  and `payloadColumn`). The CDC operator is following the [outbox pattern](https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/).
+  and `payloadColumn`). Required.
+  The CDC operator is following the [outbox pattern](https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/).
   The application is responsible for putting events into a (JSON/B or VARCHAR)
   payload column of the outbox table in the structure of the specified target
-  event type. The operator will create a [PUBLICATION](https://www.postgresql.org/docs/16/logical-replication-publication.html)
+  event type. The operator will create a [PUBLICATION](https://www.postgresql.org/docs/17/logical-replication-publication.html)
   in Postgres for all tables specified for one `database` and `applicationId`.
   The CDC operator will consume from it shortly after transactions are
   committed to the outbox table. The `idColumn` will be used in telemetry for
   the CDC operator. The names for `idColumn` and `payloadColumn` can be
   configured. Defaults are `id` and `payload`. The target `eventType` has to
-  be defined. Required.
+  be defined. One can also specify a `recoveryEventType` that will be used
+  for a dead letter queue. By enabling `ignoreRecovery`, you can choose to
+  ignore failing events.
 
 * **filter**
   Streamed events can be filtered by a jsonpath expression for each table.
   Optional.
 
+* **enableRecovery**
+  Flag to enable a dead letter queue recovery for all streams tables.
+  Alternatively, recovery can also be enabled for single outbox tables by only
+  specifying a `recoveryEventType` and no `enableRecovery` flag. When set to
+  false or missing, events will be retried until consuming succeeded. You can
+  use a `filter` expression to get rid of poison pills. Optional.
+
 * **batchSize**
   Defines the size of batches in which events are consumed. Optional.
   Defaults to 1.
 
+* **cpu**
+  CPU requests to be set as an annotation on the stream resource. Optional.
+
+* **memory**
+  memory requests to be set as an annotation on the stream resource. Optional.
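
Combining the properties documented above, a sketch of a `streams` section following the outbox pattern (the application, database and table names are illustrative):

```yaml
spec:
  streams:
  - applicationId: test-app            # one stream resource + replication slot per id
    database: foo
    batchSize: 100
    enableRecovery: true               # route failing events to a dead letter queue
    tables:
      data.outbox_table:
        eventType: test-app.order-created
        idColumn: id                   # default "id"
        payloadColumn: payload         # default "payload"
        recoveryEventType: test-app.order-created-dlq
```
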
@@ -94,9 +94,6 @@ Those are top-level keys, containing both leaf keys and groups.
 * **enable_pgversion_env_var**
   With newer versions of Spilo, it is preferable to use `PGVERSION` pod environment variable instead of the setting `postgresql.bin_dir` in the `SPILO_CONFIGURATION` env variable. When this option is true, the operator sets `PGVERSION` and omits `postgresql.bin_dir` from `SPILO_CONFIGURATION`. When false, the `postgresql.bin_dir` is set. This setting takes precedence over `PGVERSION`; see PR 222 in Spilo. The default is `true`.
 
-* **enable_spilo_wal_path_compat**
-  enables backwards compatible path between Spilo 12 and Spilo 13+ images. The default is `false`.
-
 * **enable_team_id_clustername_prefix**
   To lower the risk of name clashes between clusters of different teams you
   can turn on this flag and the operator will sync only clusters where the

@@ -212,7 +209,7 @@ under the `users` key.
   For all `LOGIN` roles that are not database owners the operator can rotate
   credentials in the corresponding K8s secrets by replacing the username and
   password. This means, new users will be added on each rotation inheriting
-  all priviliges from the original roles. The rotation date (in YYMMDD format)
+  all privileges from the original roles. The rotation date (in YYMMDD format)
   is appended to the names of the new user. The timestamp of the next rotation
   is written to the secret. The default is `false`.

@@ -242,7 +239,7 @@ CRD-configuration, they are grouped under the `major_version_upgrade` key.
   `"manual"` = manifest triggers action,
   `"full"` = manifest and minimal version violation trigger upgrade.
   Note, that with all three modes increasing the version in the manifest will
-  trigger a rolling update of the pods. The default is `"off"`.
+  trigger a rolling update of the pods. The default is `"manual"`.
 
 * **major_version_upgrade_team_allow_list**
   Upgrades will only be carried out for clusters of listed teams when mode is

@@ -250,12 +247,12 @@ CRD-configuration, they are grouped under the `major_version_upgrade` key.
 
 * **minimal_major_version**
   The minimal Postgres major version that will not automatically be upgraded
-  when `major_version_upgrade_mode` is set to `"full"`. The default is `"12"`.
+  when `major_version_upgrade_mode` is set to `"full"`. The default is `"13"`.
 
 * **target_major_version**
   The target Postgres major version when upgrading clusters automatically
   which violate the configured allowed `minimal_major_version` when
-  `major_version_upgrade_mode` is set to `"full"`. The default is `"16"`.
+  `major_version_upgrade_mode` is set to `"full"`. The default is `"17"`.
 
 ## Kubernetes resources
@@ -263,6 +260,31 @@ Parameters to configure cluster-related Kubernetes objects created by the
 operator, as well as some timeouts associated with them. In a CRD-based
 configuration they are grouped under the `kubernetes` key.
 
+* **enable_finalizers**
+  By default, a deletion of the Postgresql resource will trigger an event
+  that leads to a cleanup of all child resources. However, if the database
+  cluster is in a broken state (e.g. failed initialization) and the operator
+  cannot fully sync it, there can be leftovers. By enabling finalizers the
+  operator will ensure all managed resources are deleted prior to the
+  Postgresql resource. See also [admin docs](../administrator.md#owner-references-and-finalizers)
+  for more information. The default is `false`.
+
+* **enable_owner_references**
+  The operator can set owner references on its child resources (except PVCs,
+  Patroni config service/endpoint, cross-namespace secrets) to improve cluster
+  monitoring and enable cascading deletion. The default is `false`. Warning,
+  enabling this option disables configured delete protection checks (see below).
+
+* **delete_annotation_date_key**
+  key name for annotation that compares manifest value with current date in the
+  YYYY-MM-DD format. Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`.
+  The default is empty which also disables this delete protection check.
+
+* **delete_annotation_name_key**
+  key name for annotation that compares manifest value with Postgres cluster name.
+  Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`. The default is
+  empty which also disables this delete protection check.
+
 * **pod_service_account_name**
   service account used by Patroni running on individual Pods to communicate
   with the operator. Required even if native Kubernetes support in Patroni is
@@ -293,16 +315,6 @@ configuration they are grouped under the `kubernetes` key.
   of a database created by the operator. If the annotation key is also provided
   by the database definition, the database definition value is used.
 
-* **delete_annotation_date_key**
-  key name for annotation that compares manifest value with current date in the
-  YYYY-MM-DD format. Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`.
-  The default is empty which also disables this delete protection check.
-
-* **delete_annotation_name_key**
-  key name for annotation that compares manifest value with Postgres cluster name.
-  Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`. The default is
-  empty which also disables this delete protection check.
-
 * **downscaler_annotations**
   An array of annotations that should be passed from Postgres CRD on to the
   statefulset and, if exists, to the connection pooler deployment as well.
@ -322,30 +334,16 @@ configuration they are grouped under the `kubernetes` key.
|
|||
pod namespace).
|
||||
|
||||
* **pdb_name_format**
|
||||
defines the template for PDB (Pod Disruption Budget) names created by the
|
||||
defines the template for primary PDB (Pod Disruption Budget) name created by the
|
||||
operator. The default is `postgres-{cluster}-pdb`, where `{cluster}` is
|
||||
replaced by the cluster name. Only the `{cluster}` placeholders is allowed in
|
||||
the template.
|
||||
|
||||
* **pdb_master_label_selector**
|
||||
By default the PDB will match the master role hence preventing nodes to be
|
||||
By default the primary PDB will match the master role hence preventing nodes to be
|
||||
drained if the node_readiness_label is not used. If this option if set to
|
||||
`false` the `spilo-role=master` selector will not be added to the PDB.
|
||||
|
||||
* **enable_finalizers**
  By default, a deletion of the Postgresql resource will trigger an event
  that leads to a cleanup of all child resources. However, if the database
  cluster is in a broken state (e.g. failed initialization) and the operator
  cannot fully sync it, there can be leftovers. By enabling finalizers the
  operator will ensure all managed resources are deleted prior to the
  Postgresql resource. There is a trade-off though: The deletion is only
  performed after the next two SYNC cycles with the first one updating the
  internal spec and the latter reacting on the `deletionTimestamp` while
  processing the SYNC event. The final removal of the custom resource will
  add a DELETE event to the worker queue but the child resources are already
  gone at this point.
  The default is `false`.

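A minimal sketch for opting into finalizer-based cleanup (ConfigMap-based configuration assumed):

```yaml
data:
  enable_finalizers: "true"
```
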
* **persistent_volume_claim_retention_policy**
  The operator tries to protect volumes as much as possible. If somebody
  accidentally deletes the statefulset or scales in the `numberOfInstances` the

@@ -365,7 +363,7 @@ configuration they are grouped under the `kubernetes` key.
  manifest. To keep secrets, set this option to `false`. The default is `true`.

* **enable_persistent_volume_claim_deletion**
  By default, the operator deletes PersistentVolumeClaims when removing the
  By default, the operator deletes persistent volume claims when removing the
  Postgres cluster manifest, no matter if `persistent_volume_claim_retention_policy`
  on the statefulset is set to `retain`. To keep PVCs set this option to `false`.
  The default is `true`.

@@ -554,7 +552,7 @@ configuration they are grouped under the `kubernetes` key.
  pods with `InitialDelaySeconds: 6`, `PeriodSeconds: 10`, `TimeoutSeconds: 5`,
  `SuccessThreshold: 1` and `FailureThreshold: 3`. When enabling readiness
  probes it is recommended to switch the `pod_management_policy` to `parallel`
  to avoid unneccesary waiting times in case of multiple instances failing.
  to avoid unnecessary waiting times in case of multiple instances failing.
  The default is `false`.

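As a sketch, the combination recommended above could look like this in the ConfigMap-based configuration (assuming `enable_readiness_probe` is the option that toggles the probe described here):

```yaml
data:
  enable_readiness_probe: "true"
  pod_management_policy: "parallel"   # avoids waiting for pods one by one
```
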
* **storage_resize_mode**

@@ -703,7 +701,7 @@ In the CRD-based configuration they are grouped under the `load_balancer` key.
  replaced by the cluster name, `{namespace}` is replaced with the namespace
  and `{hostedzone}` is replaced with the hosted zone (the value of the
  `db_hosted_zone` parameter). The `{team}` placeholder can still be used,
  although it is not recommened because the team of a cluster can change.
  although it is not recommended because the team of a cluster can change.
  If the cluster name starts with the `teamId` it will also be part of the
  DNS, anyway. No other placeholders are allowed!

@@ -722,7 +720,7 @@ In the CRD-based configuration they are grouped under the `load_balancer` key.
  is replaced by the cluster name, `{namespace}` is replaced with the
  namespace and `{hostedzone}` is replaced with the hosted zone (the value of
  the `db_hosted_zone` parameter). The `{team}` placeholder can still be used,
  although it is not recommened because the team of a cluster can change.
  although it is not recommended because the team of a cluster can change.
  If the cluster name starts with the `teamId` it will also be part of the
  DNS, anyway. No other placeholders are allowed!

@@ -821,7 +819,7 @@ grouped under the `logical_backup` key.
  runs `pg_dumpall` on a replica if possible and uploads compressed results to
  an S3 bucket under the key `/<configured-s3-bucket-prefix>/<pg_cluster_name>/<cluster_k8s_uuid>/logical_backups`.
  The default image is the same image built with the Zalando-internal CI
  pipeline. Default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.12.2"
  pipeline. Default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.13.0"

* **logical_backup_google_application_credentials**
  Specifies the path of the google cloud service account json file. Default is empty.

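For illustration, a typical logical backup setup combines operator-level options with a per-cluster switch; the image tag, schedule and bucket below are placeholders:

```yaml
# operator ConfigMap (sketch)
data:
  logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.13.0"
  logical_backup_schedule: "30 00 * * *"
  logical_backup_s3_bucket: "my-backup-bucket"
---
# cluster manifest
spec:
  enableLogicalBackup: true
```
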
docs/user.md
@@ -30,7 +30,7 @@ spec:
  databases:
    foo: zalando
  postgresql:
    version: "16"
    version: "17"
```

Once you cloned the Postgres Operator [repository](https://github.com/zalando/postgres-operator)

@@ -109,7 +109,7 @@ metadata:
spec:
  [...]
  postgresql:
    version: "16"
    version: "17"
    parameters:
      password_encryption: scram-sha-256
```

@@ -517,7 +517,7 @@ Postgres Operator will create the following NOLOGIN roles:

The `<dbname>_owner` role is the database owner and should be used when creating
new database objects. All members of the `admin` role, e.g. teams API roles, can
become the owner with the `SET ROLE` command. [Default privileges](https://www.postgresql.org/docs/16/sql-alterdefaultprivileges.html)
become the owner with the `SET ROLE` command. [Default privileges](https://www.postgresql.org/docs/17/sql-alterdefaultprivileges.html)
are configured for the owner role so that the `<dbname>_reader` role
automatically gets read-access (SELECT) to new tables and sequences and the
`<dbname>_writer` receives write-access (INSERT, UPDATE, DELETE on tables,

@@ -594,7 +594,7 @@ spec:

### Schema `search_path` for default roles

The schema [`search_path`](https://www.postgresql.org/docs/16/ddl-schemas.html#DDL-SCHEMAS-PATH)
The schema [`search_path`](https://www.postgresql.org/docs/17/ddl-schemas.html#DDL-SCHEMAS-PATH)
for each role will include the role name and the schemas this role should have
access to. So `foo_bar_writer` does not have to schema-qualify tables from
schemas `foo_bar_writer, bar`, while `foo_writer` can look up `foo_writer` and

@@ -695,7 +695,7 @@ handle it.

### HugePages support

The operator supports [HugePages](https://www.postgresql.org/docs/16/kernel-resources.html#LINUX-HUGEPAGES).
The operator supports [HugePages](https://www.postgresql.org/docs/17/kernel-resources.html#LINUX-HUGEPAGES).
To enable HugePages, set the matching resource requests and/or limits in the manifest:

```yaml

@@ -758,7 +758,7 @@ If you need to define a `nodeAffinity` for all your Postgres clusters use the
## In-place major version upgrade

Starting with Spilo 13, the operator supports in-place major version upgrade to a
higher major version (e.g. from PG 11 to PG 13). To trigger the upgrade,
higher major version (e.g. from PG 14 to PG 16). To trigger the upgrade,
simply increase the version in the manifest. It is your responsibility to test
your applications against the new version before the upgrade; downgrading is
not supported. The easiest way to do so is to try the upgrade on the cloned

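A sketch of the manifest change that triggers such an upgrade; the optional `maintenanceWindows` list (also exercised by the e2e test further down) limits when the operator may actually run it:

```yaml
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-upgrade-test
spec:
  postgresql:
    version: "17"        # bumped from e.g. "16"; downgrades are not supported
  maintenanceWindows:
    - "01:00-05:00"      # optional, HH:MM-HH:MM or Day:HH:MM-HH:MM
```
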
@@ -838,7 +838,7 @@ spec:
### Clone directly

Another way to get a fresh copy of your source DB cluster is via
[pg_basebackup](https://www.postgresql.org/docs/16/app-pgbasebackup.html). To
[pg_basebackup](https://www.postgresql.org/docs/17/app-pgbasebackup.html). To
use this feature simply leave out the timestamp field from the clone section.
The operator will connect to the service of the source cluster by name. If the
cluster is called test, then the connection string will look like host=test

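For example, a clone section without a timestamp could look like this sketch (`acid-source-cluster` is a placeholder for the running source cluster):

```yaml
spec:
  clone:
    cluster: "acid-source-cluster"   # no timestamp -> clone via pg_basebackup
```
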
@@ -900,7 +900,7 @@ the PostgreSQL version between source and target cluster has to be the same.

To start a cluster as standby, add the following `standby` section in the YAML
file. You can stream changes from archived WAL files (AWS S3 or Google Cloud
Storage) or from a remote primary. Only one option can be specfied in the
Storage) or from a remote primary. Only one option can be specified in the
manifest:

```yaml

@@ -911,7 +911,7 @@ spec:

For GCS, you have to define STANDBY_GOOGLE_APPLICATION_CREDENTIALS as a
[custom pod environment variable](administrator.md#custom-pod-environment-variables).
It is not set from the config to allow for overridding.
It is not set from the config to allow for overriding.

```yaml
spec:

@@ -1005,6 +1005,7 @@ spec:
      env:
        - name: "ENV_VAR_NAME"
          value: "any-k8s-env-things"
      command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
```

In addition to any environment variables you specify, the following environment

@@ -1281,7 +1282,7 @@ minutes if the certificates have changed and reloads postgres accordingly.
### TLS certificates for connection pooler

By default, the pgBouncer image generates its own TLS certificate like Spilo.
When the `tls` section is specfied in the manifest it will be used for the
When the `tls` section is specified in the manifest it will be used for the
connection pooler pod(s) as well. The security context options are hard coded
to `runAsUser: 100` and `runAsGroup: 101`. The `fsGroup` will be the same
as for Spilo.

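A sketch of a manifest that reuses one `tls` section for Spilo and the pooler pods; secret and file names are placeholders:

```yaml
spec:
  enableConnectionPooler: true
  tls:
    secretName: "pg-tls"        # secret holding the certificate and key
    certificateFile: "tls.crt"
    privateKeyFile: "tls.key"
    caFile: "ca.crt"            # optional CA bundle
```
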
@@ -46,7 +46,7 @@ tools:
	# install pinned version of 'kind'
	# go install must run outside of a dir with a (module-based) Go project !
	# otherwise go install updates project's dependencies and/or behaves differently
	cd "/tmp" && GO111MODULE=on go install sigs.k8s.io/kind@v0.22.0
	cd "/tmp" && GO111MODULE=on go install sigs.k8s.io/kind@v0.24.0

e2etest: tools copy clean
	./run.sh main

@@ -8,7 +8,7 @@ IFS=$'\n\t'

readonly cluster_name="postgres-operator-e2e-tests"
readonly kubeconfig_path="/tmp/kind-config-${cluster_name}"
readonly spilo_image="registry.opensource.zalan.do/acid/spilo-16-e2e:0.1"
readonly spilo_image="registry.opensource.zalan.do/acid/spilo-17-e2e:0.3"
readonly e2e_test_runner_image="registry.opensource.zalan.do/acid/postgres-operator-e2e-tests-runner:0.4"

export GOPATH=${GOPATH-~/go}

@@ -218,7 +218,6 @@ class K8s:
        pod_phase = 'Failing over'
        new_pod_node = ''
        pods_with_update_flag = self.count_pods_with_rolling_update_flag(labels, namespace)

        while (pod_phase != 'Running') or (new_pod_node not in failover_targets):
            pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
            if pods:

@@ -525,7 +524,6 @@ class K8sBase:
        pod_phase = 'Failing over'
        new_pod_node = ''
        pods_with_update_flag = self.count_pods_with_rolling_update_flag(labels, namespace)

        while (pod_phase != 'Running') or (new_pod_node not in failover_targets):
            pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
            if pods:

@@ -12,9 +12,9 @@ from kubernetes import client
from tests.k8s_api import K8s
from kubernetes.client.rest import ApiException

SPILO_CURRENT = "registry.opensource.zalan.do/acid/spilo-16-e2e:0.1"
SPILO_LAZY = "registry.opensource.zalan.do/acid/spilo-16-e2e:0.2"

SPILO_CURRENT = "registry.opensource.zalan.do/acid/spilo-17-e2e:0.3"
SPILO_LAZY = "registry.opensource.zalan.do/acid/spilo-17-e2e:0.4"
SPILO_FULL_IMAGE = "ghcr.io/zalando/spilo-17:4.0-p2"

def to_selector(labels):
    return ",".join(["=".join(lbl) for lbl in labels.items()])

@@ -95,7 +95,7 @@ class EndToEndTestCase(unittest.TestCase):
            print("Failed to delete the 'standard' storage class: {0}".format(e))

        # operator deploys pod service account there on start up
        # needed for test_multi_namespace_support()
        # needed for test_multi_namespace_support and test_owner_references
        cls.test_namespace = "test"
        try:
            v1_namespace = client.V1Namespace(metadata=client.V1ObjectMeta(name=cls.test_namespace))

@@ -115,6 +115,7 @@ class EndToEndTestCase(unittest.TestCase):
            configmap = yaml.safe_load(f)
            configmap["data"]["workers"] = "1"
            configmap["data"]["docker_image"] = SPILO_CURRENT
            configmap["data"]["major_version_upgrade_mode"] = "full"

        with open("manifests/configmap.yaml", 'w') as f:
            yaml.dump(configmap, f, Dumper=yaml.Dumper)

@@ -400,8 +401,8 @@ class EndToEndTestCase(unittest.TestCase):
                    "max_connections": new_max_connections_value,
                    "wal_level": "logical"
                }
                },
                "patroni": {
            },
            "patroni": {
                "slots": {
                    "first_slot": {
                        "type": "physical"

@@ -412,7 +413,7 @@ class EndToEndTestCase(unittest.TestCase):
                "retry_timeout": 9,
                "synchronous_mode": True,
                "failsafe_mode": True,
            }
            }
        }
        }

@@ -515,7 +516,7 @@ class EndToEndTestCase(unittest.TestCase):
        pg_add_new_slots_patch = {
            "spec": {
                "patroni": {
                    "slots": {
                    "slots": {
                        "test_slot": {
                            "type": "logical",
                            "database": "foo",

@@ -1181,31 +1182,143 @@ class EndToEndTestCase(unittest.TestCase):
        self.eventuallyEqual(lambda: len(k8s.get_patroni_running_members("acid-minimal-cluster-0")), 2, "Postgres status did not enter running")

    @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
    @unittest.skip("Skipping this test until fixed")
    def test_major_version_upgrade(self):
        k8s = self.k8s
        result = k8s.create_with_kubectl("manifests/minimal-postgres-manifest-12.yaml")
        self.eventuallyEqual(lambda: k8s.count_running_pods(labels="application=spilo,cluster-name=acid-upgrade-test"), 2, "No 2 pods running")
        self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
        """
        Test major version upgrade: with full upgrade, maintenance window, and annotation
        """
        def check_version():
            p = k8s.patroni_rest("acid-upgrade-test-0", "") or {}
            version = p.get("server_version", 0) // 10000
            return version

        pg_patch_version = {
        def get_annotations():
            pg_manifest = k8s.api.custom_objects_api.get_namespaced_custom_object(
                "acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test")
            annotations = pg_manifest["metadata"]["annotations"]
            return annotations

        k8s = self.k8s
        cluster_label = 'application=spilo,cluster-name=acid-upgrade-test'

        with open("manifests/minimal-postgres-lowest-version-manifest.yaml", 'r+') as f:
            upgrade_manifest = yaml.safe_load(f)
            upgrade_manifest["spec"]["dockerImage"] = SPILO_FULL_IMAGE

        with open("manifests/minimal-postgres-lowest-version-manifest.yaml", 'w') as f:
            yaml.dump(upgrade_manifest, f, Dumper=yaml.Dumper)

        k8s.create_with_kubectl("manifests/minimal-postgres-lowest-version-manifest.yaml")
        self.eventuallyEqual(lambda: k8s.count_running_pods(labels=cluster_label), 2, "No 2 pods running")
        self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
        self.eventuallyEqual(check_version, 13, "Version is not correct")

        master_nodes, _ = k8s.get_cluster_nodes(cluster_labels=cluster_label)
        # should upgrade immediately
        pg_patch_version_14 = {
            "spec": {
                "postgres": {
                "postgresql": {
                    "version": "14"
                }
            }
        }
        k8s.api.custom_objects_api.patch_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version)

            "acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_14)
        self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")

        def check_version_14():
            p = k8s.get_patroni_state("acid-upgrade-test-0")
            version = p["server_version"][0:2]
            return version
        k8s.wait_for_pod_failover(master_nodes, 'spilo-role=replica,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
        self.eventuallyEqual(check_version, 14, "Version should be upgraded from 13 to 14")

        self.eventuallyEqual(check_version_14, "14", "Version was not upgrade to 14")
        # check if annotation for last upgrade's success is set
        annotations = get_annotations()
        self.assertIsNotNone(annotations.get("last-major-upgrade-success"), "Annotation for last upgrade's success is not set")

        # should not upgrade because current time is not in maintenanceWindow
        current_time = datetime.now()
        maintenance_window_future = f"{(current_time+timedelta(minutes=60)).strftime('%H:%M')}-{(current_time+timedelta(minutes=120)).strftime('%H:%M')}"
        pg_patch_version_15_outside_mw = {
            "spec": {
                "postgresql": {
                    "version": "15"
                },
                "maintenanceWindows": [
                    maintenance_window_future
                ]
            }
        }
        k8s.api.custom_objects_api.patch_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_15_outside_mw)
        self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")

        # no pod replacement outside of the maintenance window
        k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
        self.eventuallyEqual(check_version, 14, "Version should not be upgraded")

        second_annotations = get_annotations()
        self.assertIsNone(second_annotations.get("last-major-upgrade-failure"), "Annotation for last upgrade's failure should not be set")

        # change maintenanceWindows to current
        maintenance_window_current = f"{(current_time-timedelta(minutes=30)).strftime('%H:%M')}-{(current_time+timedelta(minutes=30)).strftime('%H:%M')}"
        pg_patch_version_15_in_mw = {
            "spec": {
                "postgresql": {
                    "version": "15"
                },
                "maintenanceWindows": [
                    maintenance_window_current
                ]
            }
        }

        k8s.api.custom_objects_api.patch_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_15_in_mw)
        self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")

        k8s.wait_for_pod_failover(master_nodes, 'spilo-role=master,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
        self.eventuallyEqual(check_version, 15, "Version should be upgraded from 14 to 15")

        # check if annotation for last upgrade's success is updated after second upgrade
        third_annotations = get_annotations()
        self.assertIsNotNone(third_annotations.get("last-major-upgrade-success"), "Annotation for last upgrade's success is not set")
        self.assertNotEqual(annotations.get("last-major-upgrade-success"), third_annotations.get("last-major-upgrade-success"), "Annotation for last upgrade's success is not updated")

        # test upgrade with failed upgrade annotation
        pg_patch_version_17 = {
            "metadata": {
                "annotations": {
                    "last-major-upgrade-failure": "2024-01-02T15:04:05Z"
                },
            },
            "spec": {
                "postgresql": {
                    "version": "17"
                },
            },
        }
        k8s.api.custom_objects_api.patch_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_17)
        self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")

        k8s.wait_for_pod_failover(master_nodes, 'spilo-role=replica,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
        self.eventuallyEqual(check_version, 15, "Version should not be upgraded because annotation for last upgrade's failure is set")

        # change the version back to 15 and should remove failure annotation
        k8s.api.custom_objects_api.patch_namespaced_custom_object(
            "acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_15_in_mw)
        self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")

        k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
        k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)

        self.eventuallyEqual(check_version, 15, "Version should not be upgraded from 15")
        fourth_annotations = get_annotations()
        self.assertIsNone(fourth_annotations.get("last-major-upgrade-failure"), "Annotation for last upgrade's failure is not removed")

    @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
    def test_persistent_volume_claim_retention_policy(self):

@ -1354,17 +1467,11 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
k8s.wait_for_pod_start("spilo-role=master", self.test_namespace)
|
||||
k8s.wait_for_pod_start("spilo-role=replica", self.test_namespace)
|
||||
self.assert_master_is_unique(self.test_namespace, "acid-test-cluster")
|
||||
# acid-test-cluster will be deleted in test_owner_references test
|
||||
|
||||
except timeout_decorator.TimeoutError:
|
||||
print('Operator log: {}'.format(k8s.get_operator_log()))
|
||||
raise
|
||||
finally:
|
||||
# delete the new cluster so that the k8s_api.get_operator_state works correctly in subsequent tests
|
||||
# ideally we should delete the 'test' namespace here but
|
||||
# the pods inside the namespace stuck in the Terminating state making the test time out
|
||||
k8s.api.custom_objects_api.delete_namespaced_custom_object(
|
||||
"acid.zalan.do", "v1", self.test_namespace, "postgresqls", "acid-test-cluster")
|
||||
time.sleep(5)
|
||||
|
||||
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
|
||||
@unittest.skip("Skipping this test until fixed")
|
||||
|
|
@ -1575,15 +1682,83 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
self.eventuallyEqual(lambda: k8s.count_running_pods("connection-pooler="+pooler_name),
|
||||
0, "Pooler pods not scaled down")
|
||||
|
||||
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
|
||||
def test_owner_references(self):
|
||||
'''
|
||||
Enable owner references, test if resources get updated and test cascade deletion of test cluster.
|
||||
'''
|
||||
k8s = self.k8s
|
||||
cluster_name = 'acid-test-cluster'
|
||||
cluster_label = 'application=spilo,cluster-name={}'.format(cluster_name)
|
||||
default_test_cluster = 'acid-minimal-cluster'
|
||||
|
||||
try:
|
||||
# enable owner references in config
|
||||
enable_owner_refs = {
|
||||
"data": {
|
||||
"enable_owner_references": "true"
|
||||
}
|
||||
}
|
||||
k8s.update_config(enable_owner_refs)
|
||||
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
|
||||
|
||||
time.sleep(5) # wait for the operator to sync the cluster and update resources
|
||||
|
||||
# check if child resources were updated with owner references
|
||||
self.assertTrue(self.check_cluster_child_resources_owner_references(cluster_name, self.test_namespace), "Owner references not set on all child resources of {}".format(cluster_name))
|
||||
self.assertTrue(self.check_cluster_child_resources_owner_references(default_test_cluster), "Owner references not set on all child resources of {}".format(default_test_cluster))
|
||||
|
||||
# delete the new cluster to test owner references
|
||||
# and also to make k8s_api.get_operator_state work better in subsequent tests
|
||||
# ideally we should delete the 'test' namespace here but the pods
|
||||
# inside the namespace stuck in the Terminating state making the test time out
|
||||
k8s.api.custom_objects_api.delete_namespaced_custom_object(
|
||||
"acid.zalan.do", "v1", self.test_namespace, "postgresqls", cluster_name)
|
||||
|
||||
# child resources with owner references should be deleted via owner references
|
||||
self.eventuallyEqual(lambda: k8s.count_pods_with_label(cluster_label), 0, "Pods not deleted")
|
||||
self.eventuallyEqual(lambda: k8s.count_statefulsets_with_label(cluster_label), 0, "Statefulset not deleted")
|
||||
self.eventuallyEqual(lambda: k8s.count_services_with_label(cluster_label), 0, "Services not deleted")
|
||||
self.eventuallyEqual(lambda: k8s.count_endpoints_with_label(cluster_label), 0, "Endpoints not deleted")
|
||||
self.eventuallyEqual(lambda: k8s.count_pdbs_with_label(cluster_label), 0, "Pod disruption budget not deleted")
|
||||
self.eventuallyEqual(lambda: k8s.count_secrets_with_label(cluster_label), 0, "Secrets were not deleted")
|
||||
|
||||
time.sleep(5) # wait for the operator to also delete the PVCs
|
||||
|
||||
# pvcs do not have an owner reference but will deleted by the operator almost immediately
|
||||
self.eventuallyEqual(lambda: k8s.count_pvcs_with_label(cluster_label), 0, "PVCs not deleted")
|
||||
|
||||
# disable owner references in config
|
||||
disable_owner_refs = {
|
||||
"data": {
|
||||
"enable_owner_references": "false"
|
||||
}
|
||||
}
|
||||
k8s.update_config(disable_owner_refs)
|
||||
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
|
||||
|
||||
time.sleep(5) # wait for the operator to remove owner references
|
||||
|
||||
# check if child resources were updated without Postgresql owner references
|
||||
self.assertTrue(self.check_cluster_child_resources_owner_references(default_test_cluster, "default", True), "Owner references still present on some child resources of {}".format(default_test_cluster))
|
||||
|
||||
except timeout_decorator.TimeoutError:
|
||||
print('Operator log: {}'.format(k8s.get_operator_log()))
|
||||
raise
|
||||
|
||||
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
|
||||
def test_password_rotation(self):
|
||||
'''
|
||||
Test password rotation and removal of users due to retention policy
|
||||
'''
|
||||
k8s = self.k8s
|
||||
cluster_label = 'application=spilo,cluster-name=acid-minimal-cluster'
|
||||
leader = k8s.get_cluster_leader_pod()
|
||||
today = date.today()
|
||||
|
||||
# remember number of secrets to make sure it stays the same
|
||||
secret_count = k8s.count_secrets_with_label(cluster_label)
|
||||
|
||||
# enable password rotation for owner of foo database
|
||||
pg_patch_rotation_single_users = {
|
||||
"spec": {
|
||||
|
|
@ -1639,6 +1814,7 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
enable_password_rotation = {
|
||||
"data": {
|
||||
"enable_password_rotation": "true",
|
||||
"inherited_annotations": "environment",
|
||||
"password_rotation_interval": "30",
|
||||
"password_rotation_user_retention": "30", # should be set to 60
|
||||
},
|
||||
|
|
@ -1685,13 +1861,29 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
self.eventuallyEqual(lambda: len(self.query_database_with_user(leader.metadata.name, "postgres", "SELECT 1", "foo_user")), 1,
|
||||
"Could not connect to the database with rotation user {}".format(rotation_user), 10, 5)
|
||||
|
||||
# add annotation which triggers syncSecrets call
|
||||
pg_annotation_patch = {
|
||||
"metadata": {
|
||||
"annotations": {
|
||||
"environment": "test",
|
||||
}
|
||||
}
|
||||
}
|
||||
k8s.api.custom_objects_api.patch_namespaced_custom_object(
|
||||
"acid.zalan.do", "v1", "default", "postgresqls", "acid-minimal-cluster", pg_annotation_patch)
|
||||
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
|
||||
time.sleep(10)
|
||||
self.eventuallyEqual(lambda: k8s.count_secrets_with_label(cluster_label), secret_count, "Unexpected number of secrets")
|
||||
|
||||
# check if rotation has been ignored for user from test_cross_namespace_secrets test
|
||||
db_user_secret = k8s.get_secret(username="test.db_user", namespace="test")
|
||||
secret_username = str(base64.b64decode(db_user_secret.data["username"]), 'utf-8')
|
||||
|
||||
self.assertEqual("test.db_user", secret_username,
|
||||
"Unexpected username in secret of test.db_user: expected {}, got {}".format("test.db_user", secret_username))
|
||||
|
||||
# check if annotation for secret has been updated
|
||||
self.assertTrue("environment" in db_user_secret.metadata.annotations, "Added annotation was not propagated to secret")
|
||||
|
||||
# disable password rotation for all other users (foo_user)
|
||||
# and pick smaller intervals to see if the third fake rotation user is dropped
|
||||
enable_password_rotation = {
|
||||
|
|
@ -1773,7 +1965,6 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
replica = k8s.get_cluster_replica_pod()
|
||||
self.assertTrue(replica.metadata.creation_timestamp > old_creation_timestamp, "Old master pod was not recreated")
|
||||
|
||||
|
||||
except timeout_decorator.TimeoutError:
|
||||
print('Operator log: {}'.format(k8s.get_operator_log()))
|
||||
raise
|
||||
|
|
@ -1930,7 +2121,7 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
patch_sset_propagate_annotations = {
|
||||
"data": {
|
||||
"downscaler_annotations": "deployment-time,downscaler/*",
|
||||
"inherited_annotations": "owned-by",
|
||||
"inherited_annotations": "environment,owned-by",
|
||||
}
|
||||
}
|
||||
k8s.update_config(patch_sset_propagate_annotations)
|
||||
|
|
@ -2009,104 +2200,138 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
verbs=["create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"]
|
||||
)
|
||||
cluster_role.rules.append(fes_cluster_role_rule)
|
||||
k8s.api.rbac_api.patch_cluster_role("postgres-operator", cluster_role)
|
||||
|
||||
# create a table in one of the database of acid-minimal-cluster
|
||||
create_stream_table = """
|
||||
CREATE TABLE test_table (id int, payload jsonb);
|
||||
"""
|
||||
self.query_database(leader.metadata.name, "foo", create_stream_table)
|
||||
try:
|
||||
k8s.api.rbac_api.patch_cluster_role("postgres-operator", cluster_role)
|
||||
|
||||
# update the manifest with the streams section
|
||||
patch_streaming_config = {
|
||||
"spec": {
|
||||
"patroni": {
|
||||
"slots": {
|
||||
"manual_slot": {
|
||||
"type": "physical"
|
||||
}
|
||||
}
|
||||
},
|
||||
"streams": [
|
||||
{
|
||||
"applicationId": "test-app",
|
||||
"batchSize": 100,
|
||||
"database": "foo",
|
||||
"enableRecovery": True,
|
||||
"tables": {
|
||||
"test_table": {
|
||||
"eventType": "test-event",
|
||||
"idColumn": "id",
|
||||
"payloadColumn": "payload",
|
||||
"recoveryEventType": "test-event-dlq"
|
||||
# create a table in one of the database of acid-minimal-cluster
|
||||
create_stream_table = """
|
||||
CREATE TABLE test_table (id int, payload jsonb);
|
||||
"""
|
||||
self.query_database(leader.metadata.name, "foo", create_stream_table)
|
||||
|
||||
# update the manifest with the streams section
|
||||
patch_streaming_config = {
|
||||
"spec": {
|
||||
"patroni": {
|
||||
"slots": {
|
||||
"manual_slot": {
|
||||
"type": "physical"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"streams": [
|
||||
{
|
||||
"applicationId": "test-app",
|
||||
"batchSize": 100,
|
||||
"cpu": "100m",
|
||||
"memory": "200Mi",
|
||||
"database": "foo",
|
||||
"enableRecovery": True,
|
||||
"tables": {
|
||||
"test_table": {
|
||||
"eventType": "test-event",
|
||||
"idColumn": "id",
|
||||
"payloadColumn": "payload",
|
||||
"recoveryEventType": "test-event-dlq"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"applicationId": "test-app2",
|
||||
"batchSize": 100,
|
||||
"database": "foo",
|
||||
"enableRecovery": True,
|
||||
"tables": {
|
||||
"test_non_exist_table": {
|
||||
"eventType": "test-event",
|
||||
"idColumn": "id",
|
||||
"payloadColumn": "payload",
|
||||
"ignoreRecovery": True
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
k8s.api.custom_objects_api.patch_namespaced_custom_object(
|
||||
'acid.zalan.do', 'v1', 'default', 'postgresqls', 'acid-minimal-cluster', patch_streaming_config)
|
||||
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
|
||||
k8s.api.custom_objects_api.patch_namespaced_custom_object(
|
||||
'acid.zalan.do', 'v1', 'default', 'postgresqls', 'acid-minimal-cluster', patch_streaming_config)
|
||||
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
|
||||
|
||||
# check if publication, slot, and fes resource are created
|
||||
get_publication_query = """
|
||||
SELECT * FROM pg_publication WHERE pubname = 'fes_foo_test_app';
|
||||
"""
|
||||
get_slot_query = """
|
||||
SELECT * FROM pg_replication_slots WHERE slot_name = 'fes_foo_test_app';
|
||||
"""
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query)), 1,
|
||||
"Publication is not created", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query)), 1,
|
||||
"Replication slot is not created", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(k8s.api.custom_objects_api.list_namespaced_custom_object(
|
||||
# check if publication, slot, and fes resource are created
|
||||
get_publication_query = """
|
||||
SELECT * FROM pg_publication WHERE pubname = 'fes_foo_test_app';
|
||||
"""
|
||||
get_slot_query = """
|
||||
SELECT * FROM pg_replication_slots WHERE slot_name = 'fes_foo_test_app';
|
||||
"""
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query)), 1,
|
||||
"Publication is not created", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query)), 1,
|
||||
"Replication slot is not created", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(k8s.api.custom_objects_api.list_namespaced_custom_object(
|
||||
"zalando.org", "v1", "default", "fabriceventstreams", label_selector="cluster-name=acid-minimal-cluster")["items"]), 1,
|
||||
"Could not find Fabric Event Stream resource", 10, 5)
|
||||
|
||||
# grant create and ownership of test_table to foo_user, reset search path to default
|
||||
grant_permission_foo_user = """
|
||||
GRANT CREATE ON DATABASE foo TO foo_user;
|
||||
ALTER TABLE test_table OWNER TO foo_user;
|
||||
ALTER ROLE foo_user RESET search_path;
|
||||
"""
|
||||
self.query_database(leader.metadata.name, "foo", grant_permission_foo_user)
|
||||
# non-postgres user creates a publication
|
||||
create_nonstream_publication = """
|
||||
CREATE PUBLICATION mypublication FOR TABLE test_table;
|
||||
"""
|
||||
self.query_database_with_user(leader.metadata.name, "foo", create_nonstream_publication, "foo_user")
|
||||
# check if the non-existing table in the stream section does not create a publication and slot
|
||||
get_publication_query_not_exist_table = """
|
||||
SELECT * FROM pg_publication WHERE pubname = 'fes_foo_test_app2';
|
||||
"""
|
||||
get_slot_query_not_exist_table = """
|
||||
SELECT * FROM pg_replication_slots WHERE slot_name = 'fes_foo_test_app2';
|
||||
"""
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query_not_exist_table)), 0,
|
||||
"Publication is created for non-existing tables", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query_not_exist_table)), 0,
|
||||
"Replication slot is created for non-existing tables", 10, 5)
|
||||
|
||||
# remove the streams section from the manifest
|
||||
patch_streaming_config_removal = {
|
||||
"spec": {
|
||||
"streams": []
|
||||
# grant create and ownership of test_table to foo_user, reset search path to default
|
||||
grant_permission_foo_user = """
|
||||
GRANT CREATE ON DATABASE foo TO foo_user;
|
||||
ALTER TABLE test_table OWNER TO foo_user;
|
||||
ALTER ROLE foo_user RESET search_path;
|
||||
"""
|
||||
self.query_database(leader.metadata.name, "foo", grant_permission_foo_user)
|
||||
# non-postgres user creates a publication
|
||||
create_nonstream_publication = """
|
||||
CREATE PUBLICATION mypublication FOR TABLE test_table;
|
||||
"""
|
||||
self.query_database_with_user(leader.metadata.name, "foo", create_nonstream_publication, "foo_user")
|
||||
|
||||
# remove the streams section from the manifest
|
||||
patch_streaming_config_removal = {
|
||||
"spec": {
|
||||
"streams": []
|
||||
}
|
||||
}
|
||||
}
|
||||
k8s.api.custom_objects_api.patch_namespaced_custom_object(
|
||||
'acid.zalan.do', 'v1', 'default', 'postgresqls', 'acid-minimal-cluster', patch_streaming_config_removal)
|
||||
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
|
||||
k8s.api.custom_objects_api.patch_namespaced_custom_object(
|
||||
'acid.zalan.do', 'v1', 'default', 'postgresqls', 'acid-minimal-cluster', patch_streaming_config_removal)
|
||||
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
|
||||
|
||||
# check if publication, slot, and fes resource are removed
|
||||
self.eventuallyEqual(lambda: len(k8s.api.custom_objects_api.list_namespaced_custom_object(
|
||||
# check if publication, slot, and fes resource are removed
|
||||
self.eventuallyEqual(lambda: len(k8s.api.custom_objects_api.list_namespaced_custom_object(
|
||||
"zalando.org", "v1", "default", "fabriceventstreams", label_selector="cluster-name=acid-minimal-cluster")["items"]), 0,
|
||||
'Could not delete Fabric Event Stream resource', 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query)), 0,
|
||||
"Publication is not deleted", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query)), 0,
|
||||
"Replication slot is not deleted", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query)), 0,
|
||||
"Publication is not deleted", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query)), 0,
|
||||
"Replication slot is not deleted", 10, 5)
|
||||
|
||||
# check the manual_slot and mypublication should not get deleted
|
||||
get_manual_slot_query = """
|
||||
SELECT * FROM pg_replication_slots WHERE slot_name = 'manual_slot';
|
||||
"""
|
||||
get_nonstream_publication_query = """
|
||||
SELECT * FROM pg_publication WHERE pubname = 'mypublication';
|
||||
"""
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", get_manual_slot_query)), 1,
|
||||
"Slot defined in patroni config is deleted", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_nonstream_publication_query)), 1,
|
||||
"Publication defined not in stream section is deleted", 10, 5)
|
||||
# check the manual_slot and mypublication should not get deleted
|
||||
get_manual_slot_query = """
|
||||
SELECT * FROM pg_replication_slots WHERE slot_name = 'manual_slot';
|
||||
"""
|
||||
get_nonstream_publication_query = """
|
||||
SELECT * FROM pg_publication WHERE pubname = 'mypublication';
|
||||
"""
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", get_manual_slot_query)), 1,
|
||||
"Slot defined in patroni config is deleted", 10, 5)
|
||||
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_nonstream_publication_query)), 1,
|
||||
"Publication defined not in stream section is deleted", 10, 5)
|
||||
|
||||
except timeout_decorator.TimeoutError:
|
||||
print('Operator log: {}'.format(k8s.get_operator_log()))
|
||||
raise
|
||||
|
||||
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
|
||||
def test_taint_based_eviction(self):
|
||||
|
|
@ -2321,6 +2546,46 @@ class EndToEndTestCase(unittest.TestCase):
|
|||
|
||||
return True
|
||||
|
||||
def check_cluster_child_resources_owner_references(self, cluster_name, cluster_namespace='default', inverse=False):
|
||||
k8s = self.k8s
|
||||
|
||||
# check if child resources were updated with owner references
|
||||
sset = k8s.api.apps_v1.read_namespaced_stateful_set(cluster_name, cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(sset.metadata.owner_references, inverse), "statefulset owner reference check failed")
|
||||
|
||||
svc = k8s.api.core_v1.read_namespaced_service(cluster_name, cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(svc.metadata.owner_references, inverse), "primary service owner reference check failed")
|
||||
replica_svc = k8s.api.core_v1.read_namespaced_service(cluster_name + "-repl", cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(replica_svc.metadata.owner_references, inverse), "replica service owner reference check failed")
|
||||
config_svc = k8s.api.core_v1.read_namespaced_service(cluster_name + "-config", cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(config_svc.metadata.owner_references, inverse), "config service owner reference check failed")
|
||||
|
||||
ep = k8s.api.core_v1.read_namespaced_endpoints(cluster_name, cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(ep.metadata.owner_references, inverse), "primary endpoint owner reference check failed")
|
||||
replica_ep = k8s.api.core_v1.read_namespaced_endpoints(cluster_name + "-repl", cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(replica_ep.metadata.owner_references, inverse), "replica endpoint owner reference check failed")
|
||||
config_ep = k8s.api.core_v1.read_namespaced_endpoints(cluster_name + "-config", cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(config_ep.metadata.owner_references, inverse), "config endpoint owner reference check failed")
|
||||
|
||||
pdb = k8s.api.policy_v1.read_namespaced_pod_disruption_budget("postgres-{}-pdb".format(cluster_name), cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(pdb.metadata.owner_references, inverse), "primary pod disruption budget owner reference check failed")
|
||||
|
||||
pdb = k8s.api.policy_v1.read_namespaced_pod_disruption_budget("postgres-{}-critical-op-pdb".format(cluster_name), cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(pdb.metadata.owner_references, inverse), "pod disruption budget for critical operations owner reference check failed")
|
||||
|
||||
pg_secret = k8s.api.core_v1.read_namespaced_secret("postgres.{}.credentials.postgresql.acid.zalan.do".format(cluster_name), cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(pg_secret.metadata.owner_references, inverse), "postgres secret owner reference check failed")
|
||||
standby_secret = k8s.api.core_v1.read_namespaced_secret("standby.{}.credentials.postgresql.acid.zalan.do".format(cluster_name), cluster_namespace)
|
||||
self.assertTrue(self.has_postgresql_owner_reference(standby_secret.metadata.owner_references, inverse), "standby secret owner reference check failed")
|
||||
|
||||
return True
|
||||
|
||||
def has_postgresql_owner_reference(self, owner_references, inverse):
|
||||
if inverse:
|
||||
return owner_references is None or owner_references[0].kind != 'postgresql'
|
||||
|
||||
return owner_references is not None and owner_references[0].kind == 'postgresql' and owner_references[0].controller
|
||||
|
||||
def list_databases(self, pod_name):
|
||||
'''
|
||||
Get list of databases we might want to iterate over
|
||||
go.mod
@@ -1,6 +1,6 @@
module github.com/zalando/postgres-operator

go 1.22
go 1.23.4

require (
	github.com/aws/aws-sdk-go v1.53.8

@ -11,21 +11,22 @@ require (
|
|||
github.com/r3labs/diff v1.1.0
|
||||
github.com/sirupsen/logrus v1.9.3
|
||||
github.com/stretchr/testify v1.9.0
|
||||
golang.org/x/crypto v0.23.0
|
||||
golang.org/x/crypto v0.31.0
|
||||
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3
|
||||
gopkg.in/yaml.v2 v2.4.0
|
||||
k8s.io/api v0.28.10
|
||||
k8s.io/api v0.30.4
|
||||
k8s.io/apiextensions-apiserver v0.25.9
|
||||
k8s.io/apimachinery v0.28.10
|
||||
k8s.io/client-go v0.28.10
|
||||
k8s.io/apimachinery v0.30.4
|
||||
k8s.io/client-go v0.30.4
|
||||
k8s.io/code-generator v0.25.9
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/Masterminds/semver v1.5.0
|
||||
github.com/davecgh/go-spew v1.1.1 // indirect
|
||||
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
|
||||
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
|
||||
github.com/go-logr/logr v1.2.4 // indirect
|
||||
github.com/go-logr/logr v1.4.1 // indirect
|
||||
github.com/go-openapi/jsonpointer v0.19.6 // indirect
|
||||
github.com/go-openapi/jsonreference v0.20.2 // indirect
|
||||
github.com/go-openapi/swag v0.22.3 // indirect
|
||||
|
|
@ -33,9 +34,10 @@ require (
|
|||
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
|
||||
github.com/golang/protobuf v1.5.4 // indirect
|
||||
github.com/google/gnostic-models v0.6.8 // indirect
|
||||
github.com/google/go-cmp v0.5.9 // indirect
|
||||
github.com/google/go-cmp v0.6.0 // indirect
|
||||
github.com/google/gofuzz v1.2.0 // indirect
|
||||
github.com/google/uuid v1.3.0 // indirect
|
||||
github.com/gorilla/websocket v1.5.0 // indirect
|
||||
github.com/imdario/mergo v0.3.6 // indirect
|
||||
github.com/jmespath/go-jmespath v0.4.0 // indirect
|
||||
github.com/josharian/intern v1.0.0 // indirect
|
||||
|
|
@ -46,25 +48,28 @@ require (
|
|||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
|
||||
github.com/modern-go/reflect2 v1.0.2 // indirect
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
|
||||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
|
||||
github.com/pmezard/go-difflib v1.0.0 // indirect
|
||||
github.com/spf13/pflag v1.0.5 // indirect
|
||||
golang.org/x/mod v0.14.0 // indirect
|
||||
golang.org/x/net v0.23.0 // indirect
|
||||
golang.org/x/oauth2 v0.8.0 // indirect
|
||||
golang.org/x/sys v0.20.0 // indirect
|
||||
golang.org/x/term v0.20.0 // indirect
|
||||
golang.org/x/text v0.15.0 // indirect
|
||||
golang.org/x/mod v0.17.0 // indirect
|
||||
golang.org/x/net v0.25.0 // indirect
|
||||
golang.org/x/oauth2 v0.10.0 // indirect
|
||||
golang.org/x/sync v0.10.0 // indirect
|
||||
golang.org/x/sys v0.28.0 // indirect
|
||||
golang.org/x/term v0.27.0 // indirect
|
||||
golang.org/x/text v0.21.0 // indirect
|
||||
golang.org/x/time v0.3.0 // indirect
|
||||
golang.org/x/tools v0.17.0 // indirect
|
||||
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
|
||||
google.golang.org/appengine v1.6.7 // indirect
|
||||
google.golang.org/protobuf v1.33.0 // indirect
|
||||
gopkg.in/inf.v0 v0.9.1 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d // indirect
|
||||
k8s.io/klog/v2 v2.100.1 // indirect
|
||||
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
|
||||
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect
|
||||
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 // indirect
|
||||
k8s.io/klog/v2 v2.120.1 // indirect
|
||||
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
|
||||
k8s.io/utils v0.0.0-20230726121419-3b25d923346b // indirect
|
||||
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
|
||||
sigs.k8s.io/yaml v1.3.0 // indirect
|
||||
)
|
||||
go.sum
@ -1,3 +1,5 @@
|
|||
github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww=
|
||||
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
|
||||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
|
||||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
|
||||
github.com/aws/aws-sdk-go v1.53.8 h1:eoqGb1WOHIrCFKo1d51cMcnt1ralfLFaEqRkC5Zzv8k=
|
||||
|
|
@ -6,14 +8,13 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3
|
|||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
|
||||
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
|
||||
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
|
||||
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
|
||||
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
|
||||
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
|
||||
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
|
||||
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
|
||||
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
|
||||
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
|
||||
|
|
@ -34,8 +35,9 @@ github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6
|
|||
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
|
||||
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
|
||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
|
||||
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
|
||||
|
|
@ -45,6 +47,8 @@ github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLe
|
|||
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
|
||||
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
|
||||
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28=
|
||||
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
||||
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
|
||||
|
|
@ -80,10 +84,12 @@ github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d h1:LznySqW8MqVeFh+p
|
|||
github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d/go.mod h1:u3hJ0kqCQu/cPpsu3RbCOPZ0d7V3IjPjv1adNRleM9I=
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
|
||||
github.com/onsi/ginkgo/v2 v2.9.4 h1:xR7vG4IXt5RWx6FfIjyAtsoMAtnc3C/rFXBBd2AjZwE=
|
||||
github.com/onsi/ginkgo/v2 v2.9.4/go.mod h1:gCQYp2Q+kSoIj7ykSVb9nskRSsR6PUj4AiLywzIhbKM=
|
||||
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE=
|
||||
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg=
|
||||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
|
||||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
|
||||
github.com/onsi/ginkgo/v2 v2.15.0 h1:79HwNRBAZHOEwrczrgSOPy+eFTTlIGELKy5as+ClttY=
|
||||
github.com/onsi/ginkgo/v2 v2.15.0/go.mod h1:HlxMHtYF57y6Dpf+mc5529KKmSq9h2FpCF+/ZkwUxKM=
|
||||
github.com/onsi/gomega v1.31.0 h1:54UJxxj6cPInHS3a35wm6BK/F9nHYueZ1NVujHDrnXE=
|
||||
github.com/onsi/gomega v1.31.0/go.mod h1:DW9aCi7U6Yi40wNVAvT6kzFnEVEI5n3DloYBiKiT6zk=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
|
|
@ -113,31 +119,31 @@ github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1
|
|||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
|
||||
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
|
||||
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
|
||||
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
|
||||
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 h1:hNQpMuAJe5CtcUqCXaWga3FHu+kQvCqcsoVaQgSV60o=
|
||||
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3/go.mod h1:idGWGoKP1toJGkd5/ig9ZLuPcZBC3ewk7SzmH0uou08=
|
||||
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0=
|
||||
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
|
||||
golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
|
||||
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
|
||||
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
|
||||
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
|
||||
golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8=
|
||||
golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE=
|
||||
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
|
||||
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
|
||||
golang.org/x/oauth2 v0.10.0 h1:zHCpF2Khkwy4mMB4bv0U37YtJdTGW8jI0glAApi0Kh8=
|
||||
golang.org/x/oauth2 v0.10.0/go.mod h1:kTpgurOux7LqtuxjuyZa4Gj2gdezIt/jQtGnNFfypQI=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
|
||||
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
||||
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
|
||||
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
|
|
@ -145,16 +151,16 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
|
||||
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
|
||||
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.20.0 h1:VnkxpohqXaOBYJtBmEppKUG6mXpi+4O6purfc2+sMhw=
|
||||
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
|
||||
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
|
||||
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
|
||||
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
|
||||
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
|
||||
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
|
||||
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
|
||||
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
|
|
@ -163,8 +169,8 @@ golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roY
|
|||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.17.0 h1:FvmRgNOcs3kOa+T20R1uhfP9F6HgG2mfxDv1vrx1Htc=
|
||||
golang.org/x/tools v0.17.0/go.mod h1:xsh6VxdV005rRVaS6SSAf9oiAqljS7UZUacMZ8Bnsps=
|
||||
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg=
|
||||
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
|
|
@ -186,29 +192,31 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
|||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
k8s.io/api v0.28.10 h1:q1Y+h3F+siuwP/qCQuqgqGJjaIuQWN0yFE7z367E3Q0=
|
||||
k8s.io/api v0.28.10/go.mod h1:u6EzGdzmEC2vfhyw4sD89i7OIc/2v1EAwvd1t4chQac=
|
||||
k8s.io/api v0.30.4 h1:XASIELmW8w8q0i1Y4124LqPoWMycLjyQti/fdYHYjCs=
|
||||
k8s.io/api v0.30.4/go.mod h1:ZqniWRKu7WIeLijbbzetF4U9qZ03cg5IRwl8YVs8mX0=
|
||||
k8s.io/apiextensions-apiserver v0.25.9 h1:Pycd6lm2auABp9wKQHCFSEPG+NPdFSTJXPST6NJFzB8=
|
||||
k8s.io/apiextensions-apiserver v0.25.9/go.mod h1:ijGxmSG1GLOEaWhTuaEr0M7KUeia3mWCZa6FFQqpt1M=
|
||||
k8s.io/apimachinery v0.28.10 h1:cWonrYsJK3lbuf9IgMs5+L5Jzw6QR3ZGA3hzwG0HDeI=
|
||||
k8s.io/apimachinery v0.28.10/go.mod h1:zUG757HaKs6Dc3iGtKjzIpBfqTM4yiRsEe3/E7NX15o=
|
||||
k8s.io/client-go v0.28.10 h1:y+mvUei3+RU0rE7r2BZFA2ApTAsXSN1glGs4QfULLt4=
|
||||
k8s.io/client-go v0.28.10/go.mod h1:JLwjCWhQhvm1F4J+7YAr9WVhSRNmfkRofPWU43m8LZk=
|
||||
k8s.io/apimachinery v0.30.4 h1:5QHQI2tInzr8LsT4kU/2+fSeibH1eIHswNx480cqIoY=
|
||||
k8s.io/apimachinery v0.30.4/go.mod h1:iexa2somDaxdnj7bha06bhb43Zpa6eWH8N8dbqVjTUc=
|
||||
k8s.io/client-go v0.30.4 h1:eculUe+HPQoPbixfwmaSZGsKcOf7D288tH6hDAdd+wY=
|
||||
k8s.io/client-go v0.30.4/go.mod h1:IBS0R/Mt0LHkNHF4E6n+SUDPG7+m2po6RZU7YHeOpzc=
|
||||
k8s.io/code-generator v0.25.9 h1:lgyAV9AIRYNxZxgLRXqsCAtqJLHvakot41CjEqD5W0w=
|
||||
k8s.io/code-generator v0.25.9/go.mod h1:DHfpdhSUrwqF0f4oLqCtF8gYbqlndNetjBEz45nWzJI=
|
||||
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d h1:U9tB195lKdzwqicbJvyJeOXV7Klv+wNAWENRnXEGi08=
|
||||
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
|
||||
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 h1:NGrVE502P0s0/1hudf8zjgwki1X/TByhmAoILTarmzo=
|
||||
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70/go.mod h1:VH3AT8AaQOqiGjMF9p0/IM1Dj+82ZwjfxUP1IxaHE+8=
|
||||
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
|
||||
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
|
||||
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
|
||||
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ=
|
||||
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM=
|
||||
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 h1:qY1Ad8PODbnymg2pRbkyMT/ylpTrCM8P2RJ0yroCyIk=
|
||||
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
|
||||
k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw=
|
||||
k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
|
||||
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag=
|
||||
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=
|
||||
k8s.io/utils v0.0.0-20230726121419-3b25d923346b h1:sgn3ZU783SCgtaSJjpcVVlRqd6GSnlTLKgpAAttJvpI=
|
||||
k8s.io/utils v0.0.0-20230726121419-3b25d923346b/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
|
||||
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
|
||||
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
|
||||
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
|
||||
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
|
||||
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
|
||||
|
|
|
|||
|
|
@ -1,31 +1,31 @@
|
|||
module github.com/zalando/postgres-operator/kubectl-pg
|
||||
|
||||
go 1.22
|
||||
go 1.23.4
|
||||
|
||||
require (
|
||||
github.com/spf13/cobra v1.8.0
|
||||
github.com/spf13/viper v1.18.2
|
||||
github.com/zalando/postgres-operator v1.12.0
|
||||
k8s.io/api v0.28.10
|
||||
github.com/spf13/cobra v1.8.1
|
||||
github.com/spf13/viper v1.19.0
|
||||
github.com/zalando/postgres-operator v1.13.0
|
||||
k8s.io/api v0.30.4
|
||||
k8s.io/apiextensions-apiserver v0.25.9
|
||||
k8s.io/apimachinery v0.28.10
|
||||
k8s.io/client-go v0.28.10
|
||||
k8s.io/apimachinery v0.30.4
|
||||
k8s.io/client-go v0.30.4
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
|
||||
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
|
||||
github.com/fsnotify/fsnotify v1.7.0 // indirect
|
||||
github.com/go-logr/logr v1.2.4 // indirect
|
||||
github.com/go-logr/logr v1.4.1 // indirect
|
||||
github.com/go-openapi/jsonpointer v0.19.6 // indirect
|
||||
github.com/go-openapi/jsonreference v0.20.2 // indirect
|
||||
github.com/go-openapi/swag v0.22.3 // indirect
|
||||
github.com/gogo/protobuf v1.3.2 // indirect
|
||||
github.com/golang/protobuf v1.5.4 // indirect
|
||||
github.com/google/gnostic-models v0.6.8 // indirect
|
||||
github.com/google/go-cmp v0.5.9 // indirect
|
||||
github.com/google/gofuzz v1.2.0 // indirect
|
||||
github.com/google/uuid v1.4.0 // indirect
|
||||
github.com/gorilla/websocket v1.5.0 // indirect
|
||||
github.com/hashicorp/hcl v1.0.0 // indirect
|
||||
github.com/imdario/mergo v0.3.6 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.1.0 // indirect
|
||||
|
|
@ -40,7 +40,8 @@ require (
|
|||
github.com/modern-go/reflect2 v1.0.2 // indirect
|
||||
github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d // indirect
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
|
||||
github.com/pelletier/go-toml/v2 v2.1.0 // indirect
|
||||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
|
||||
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
|
||||
github.com/sagikazarmark/locafero v0.4.0 // indirect
|
||||
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
|
||||
github.com/sirupsen/logrus v1.9.3 // indirect
|
||||
|
|
@ -50,24 +51,24 @@ require (
|
|||
github.com/spf13/pflag v1.0.5 // indirect
|
||||
github.com/subosito/gotenv v1.6.0 // indirect
|
||||
go.uber.org/multierr v1.11.0 // indirect
|
||||
golang.org/x/crypto v0.23.0 // indirect
|
||||
golang.org/x/crypto v0.31.0 // indirect
|
||||
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 // indirect
|
||||
golang.org/x/net v0.23.0 // indirect
|
||||
golang.org/x/oauth2 v0.15.0 // indirect
|
||||
golang.org/x/sys v0.20.0 // indirect
|
||||
golang.org/x/term v0.20.0 // indirect
|
||||
golang.org/x/text v0.15.0 // indirect
|
||||
golang.org/x/net v0.25.0 // indirect
|
||||
golang.org/x/oauth2 v0.18.0 // indirect
|
||||
golang.org/x/sys v0.28.0 // indirect
|
||||
golang.org/x/term v0.27.0 // indirect
|
||||
golang.org/x/text v0.21.0 // indirect
|
||||
golang.org/x/time v0.5.0 // indirect
|
||||
google.golang.org/appengine v1.6.7 // indirect
|
||||
google.golang.org/appengine v1.6.8 // indirect
|
||||
google.golang.org/protobuf v1.33.0 // indirect
|
||||
gopkg.in/inf.v0 v0.9.1 // indirect
|
||||
gopkg.in/ini.v1 v1.67.0 // indirect
|
||||
gopkg.in/yaml.v2 v2.4.0 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
k8s.io/klog/v2 v2.100.1 // indirect
|
||||
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
|
||||
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect
|
||||
k8s.io/klog/v2 v2.120.1 // indirect
|
||||
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
|
||||
k8s.io/utils v0.0.0-20230726121419-3b25d923346b // indirect
|
||||
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
|
||||
sigs.k8s.io/yaml v1.3.0 // indirect
|
||||
)
|
||||
|
|
|
|||
|
|
@ -1,20 +1,19 @@
|
|||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
|
||||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
|
||||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
|
||||
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
|
||||
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
|
||||
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
|
||||
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
|
||||
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
|
||||
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
|
||||
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
|
||||
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
|
||||
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
|
||||
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
|
||||
|
|
@ -25,13 +24,16 @@ github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEe
|
|||
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
|
||||
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
|
||||
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
|
||||
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
|
||||
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
|
||||
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
|
||||
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
|
||||
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
|
||||
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
|
||||
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
|
||||
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
|
||||
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
|
|
@ -40,6 +42,8 @@ github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLe
|
|||
github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
|
||||
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
|
||||
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
|
||||
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
|
||||
github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28=
|
||||
|
|
@ -76,12 +80,14 @@ github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d h1:LznySqW8MqVeFh+p
|
|||
github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d/go.mod h1:u3hJ0kqCQu/cPpsu3RbCOPZ0d7V3IjPjv1adNRleM9I=
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
|
||||
github.com/onsi/ginkgo/v2 v2.9.4 h1:xR7vG4IXt5RWx6FfIjyAtsoMAtnc3C/rFXBBd2AjZwE=
|
||||
github.com/onsi/ginkgo/v2 v2.9.4/go.mod h1:gCQYp2Q+kSoIj7ykSVb9nskRSsR6PUj4AiLywzIhbKM=
|
||||
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE=
|
||||
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg=
|
||||
github.com/pelletier/go-toml/v2 v2.1.0 h1:FnwAJ4oYMvbT/34k9zzHuZNrhlz48GB3/s6at6/MHO4=
|
||||
github.com/pelletier/go-toml/v2 v2.1.0/go.mod h1:tJU2Z3ZkXwnxa4DPO899bsyIoywizdUvyaeZurnPPDc=
|
||||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
|
||||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
|
||||
github.com/onsi/ginkgo/v2 v2.15.0 h1:79HwNRBAZHOEwrczrgSOPy+eFTTlIGELKy5as+ClttY=
|
||||
github.com/onsi/ginkgo/v2 v2.15.0/go.mod h1:HlxMHtYF57y6Dpf+mc5529KKmSq9h2FpCF+/ZkwUxKM=
|
||||
github.com/onsi/gomega v1.31.0 h1:54UJxxj6cPInHS3a35wm6BK/F9nHYueZ1NVujHDrnXE=
|
||||
github.com/onsi/gomega v1.31.0/go.mod h1:DW9aCi7U6Yi40wNVAvT6kzFnEVEI5n3DloYBiKiT6zk=
|
||||
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
|
||||
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
|
|
@ -100,15 +106,16 @@ github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
|
|||
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
|
||||
github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
|
||||
github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
|
||||
github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0=
|
||||
github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho=
|
||||
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
|
||||
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
|
||||
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
|
||||
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/spf13/viper v1.18.2 h1:LUXCnvUvSM6FXAsj6nnfc8Q2tp1dIgUfY9Kc8GsSOiQ=
|
||||
github.com/spf13/viper v1.18.2/go.mod h1:EKmWIqdnk5lOcmR72yw6hS+8OPYcwD0jteitLMVB+yk=
|
||||
github.com/spf13/viper v1.19.0 h1:RWq5SEjt8o25SROyN3z2OrDB9l7RPd3lwTWU8EcEdcI=
|
||||
github.com/spf13/viper v1.19.0/go.mod h1:GQUN9bilAbhU/jgc1bKs99f/suXKeUMct8Adx5+Ntkg=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
|
||||
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
|
||||
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
|
|
@ -121,58 +128,73 @@ github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8
|
|||
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
|
||||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/zalando/postgres-operator v1.12.0 h1:9C5u8UgrVQDRdzB3/T7kKWYKEf2vbF9EZHqtCRSgJtE=
|
||||
github.com/zalando/postgres-operator v1.12.0/go.mod h1:tKNY4pMjnr5BhuzGiGngf1SPJ7K1vVRCmMkfmV9KZoQ=
|
||||
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
|
||||
github.com/zalando/postgres-operator v1.13.0 h1:T9Mb+ZRQyTxXbagIK66GLVGCwM3661aX2lOkNpax4s8=
|
||||
github.com/zalando/postgres-operator v1.13.0/go.mod h1:WiMEKzUny2lJHYle+7+D/5BhlvPn8prl76rEDYLsQAg=
|
||||
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
|
||||
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
|
||||
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
|
||||
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
|
||||
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
|
||||
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 h1:hNQpMuAJe5CtcUqCXaWga3FHu+kQvCqcsoVaQgSV60o=
|
||||
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3/go.mod h1:idGWGoKP1toJGkd5/ig9ZLuPcZBC3ewk7SzmH0uou08=
|
||||
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
|
||||
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
|
||||
golang.org/x/oauth2 v0.15.0 h1:s8pnnxNVzjWyrvYdFUQq5llS1PX2zhPXmccZv99h7uQ=
|
||||
golang.org/x/oauth2 v0.15.0/go.mod h1:q48ptWNTY5XWf+JNten23lcvHpLJ0ZSxF5ttTHKVCAM=
|
||||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
|
||||
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
|
||||
golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI=
|
||||
golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
|
||||
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/term v0.20.0 h1:VnkxpohqXaOBYJtBmEppKUG6mXpi+4O6purfc2+sMhw=
|
||||
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
|
||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
|
||||
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
|
||||
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
|
||||
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
|
||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
|
||||
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
|
||||
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
|
||||
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
|
||||
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.17.0 h1:FvmRgNOcs3kOa+T20R1uhfP9F6HgG2mfxDv1vrx1Htc=
|
||||
golang.org/x/tools v0.17.0/go.mod h1:xsh6VxdV005rRVaS6SSAf9oiAqljS7UZUacMZ8Bnsps=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg=
|
||||
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
|
||||
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
|
||||
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
|
||||
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
||||
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
||||
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
|
||||
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
|
|
@ -188,23 +210,23 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
|||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
k8s.io/api v0.28.10 h1:q1Y+h3F+siuwP/qCQuqgqGJjaIuQWN0yFE7z367E3Q0=
|
||||
k8s.io/api v0.28.10/go.mod h1:u6EzGdzmEC2vfhyw4sD89i7OIc/2v1EAwvd1t4chQac=
|
||||
k8s.io/api v0.30.4 h1:XASIELmW8w8q0i1Y4124LqPoWMycLjyQti/fdYHYjCs=
|
||||
k8s.io/api v0.30.4/go.mod h1:ZqniWRKu7WIeLijbbzetF4U9qZ03cg5IRwl8YVs8mX0=
|
||||
k8s.io/apiextensions-apiserver v0.25.9 h1:Pycd6lm2auABp9wKQHCFSEPG+NPdFSTJXPST6NJFzB8=
|
||||
k8s.io/apiextensions-apiserver v0.25.9/go.mod h1:ijGxmSG1GLOEaWhTuaEr0M7KUeia3mWCZa6FFQqpt1M=
|
||||
k8s.io/apimachinery v0.28.10 h1:cWonrYsJK3lbuf9IgMs5+L5Jzw6QR3ZGA3hzwG0HDeI=
|
||||
k8s.io/apimachinery v0.28.10/go.mod h1:zUG757HaKs6Dc3iGtKjzIpBfqTM4yiRsEe3/E7NX15o=
|
||||
k8s.io/client-go v0.28.10 h1:y+mvUei3+RU0rE7r2BZFA2ApTAsXSN1glGs4QfULLt4=
|
||||
k8s.io/client-go v0.28.10/go.mod h1:JLwjCWhQhvm1F4J+7YAr9WVhSRNmfkRofPWU43m8LZk=
|
||||
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
|
||||
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
|
||||
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ=
|
||||
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM=
|
||||
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 h1:qY1Ad8PODbnymg2pRbkyMT/ylpTrCM8P2RJ0yroCyIk=
|
||||
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
|
||||
k8s.io/apimachinery v0.30.4 h1:5QHQI2tInzr8LsT4kU/2+fSeibH1eIHswNx480cqIoY=
|
||||
k8s.io/apimachinery v0.30.4/go.mod h1:iexa2somDaxdnj7bha06bhb43Zpa6eWH8N8dbqVjTUc=
|
||||
k8s.io/client-go v0.30.4 h1:eculUe+HPQoPbixfwmaSZGsKcOf7D288tH6hDAdd+wY=
|
||||
k8s.io/client-go v0.30.4/go.mod h1:IBS0R/Mt0LHkNHF4E6n+SUDPG7+m2po6RZU7YHeOpzc=
|
||||
k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw=
|
||||
k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
|
||||
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag=
|
||||
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=
|
||||
k8s.io/utils v0.0.0-20230726121419-3b25d923346b h1:sgn3ZU783SCgtaSJjpcVVlRqd6GSnlTLKgpAAttJvpI=
|
||||
k8s.io/utils v0.0.0-20230726121419-3b25d923346b/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
|
||||
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
|
||||
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
|
||||
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
|
||||
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
|
||||
|
|
|
|||
|
|
@@ -25,11 +25,11 @@ RUN apt-get update \
    && curl --silent https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
    && apt-get update \
    && apt-get install --no-install-recommends -y \
        postgresql-client-17 \
        postgresql-client-16 \
        postgresql-client-15 \
        postgresql-client-14 \
        postgresql-client-13 \
        postgresql-client-12 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
@@ -10,7 +10,7 @@ metadata:
  # "delete-date": "2020-08-31"  # can only be deleted on that day if "delete-date" key is configured
  # "delete-clustername": "acid-test-cluster"  # can only be deleted when name matches if "delete-clustername" key is configured
spec:
  dockerImage: ghcr.io/zalando/spilo-16:3.2-p3
  dockerImage: ghcr.io/zalando/spilo-17:4.0-p2
  teamId: "acid"
  numberOfInstances: 2
  users:  # Application/Robot users
@@ -48,7 +48,7 @@ spec:
    defaultRoles: true
    defaultUsers: false
  postgresql:
    version: "16"
    version: "17"
    parameters:  # Expert section
      shared_buffers: "32MB"
      max_connections: "10"
|
|
|
|||
|
|
@ -18,11 +18,11 @@ data:
|
|||
connection_pooler_default_memory_limit: 100Mi
|
||||
connection_pooler_default_memory_request: 100Mi
|
||||
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-32"
|
||||
# connection_pooler_max_db_connections: 60
|
||||
# connection_pooler_mode: "transaction"
|
||||
# connection_pooler_number_of_instances: 2
|
||||
# connection_pooler_schema: "pooler"
|
||||
# connection_pooler_user: "pooler"
|
||||
connection_pooler_max_db_connections: "60"
|
||||
connection_pooler_mode: "transaction"
|
||||
connection_pooler_number_of_instances: "2"
|
||||
connection_pooler_schema: "pooler"
|
||||
connection_pooler_user: "pooler"
|
||||
crd_categories: "all"
|
||||
# custom_service_annotations: "keyx:valuez,keya:valuea"
|
||||
# custom_pod_annotations: "keya:valuea,keyb:valueb"
|
||||
|
|
@ -34,39 +34,41 @@ data:
|
|||
default_memory_request: 100Mi
|
||||
# delete_annotation_date_key: delete-date
|
||||
# delete_annotation_name_key: delete-clustername
|
||||
docker_image: ghcr.io/zalando/spilo-16:3.2-p3
|
||||
docker_image: ghcr.io/zalando/spilo-17:4.0-p2
|
||||
# downscaler_annotations: "deployment-time,downscaler/*"
|
||||
# enable_admin_role_for_users: "true"
|
||||
# enable_crd_registration: "true"
|
||||
# enable_cross_namespace_secret: "false"
|
||||
enable_admin_role_for_users: "true"
|
||||
enable_crd_registration: "true"
|
||||
enable_crd_validation: "true"
|
||||
enable_cross_namespace_secret: "false"
|
||||
enable_finalizers: "false"
|
||||
# enable_database_access: "true"
|
||||
enable_database_access: "true"
|
||||
enable_ebs_gp3_migration: "false"
|
||||
# enable_ebs_gp3_migration_max_size: "1000"
|
||||
# enable_init_containers: "true"
|
||||
# enable_lazy_spilo_upgrade: "false"
|
||||
enable_ebs_gp3_migration_max_size: "1000"
|
||||
enable_init_containers: "true"
|
||||
enable_lazy_spilo_upgrade: "false"
|
||||
enable_master_load_balancer: "false"
|
||||
enable_master_pooler_load_balancer: "false"
|
||||
enable_password_rotation: "false"
|
||||
enable_patroni_failsafe_mode: "false"
|
||||
enable_secrets_deletion: "true"
|
||||
enable_owner_references: "false"
|
||||
enable_persistent_volume_claim_deletion: "true"
|
||||
enable_pgversion_env_var: "true"
|
||||
# enable_pod_antiaffinity: "false"
|
||||
# enable_pod_disruption_budget: "true"
|
||||
# enable_postgres_team_crd: "false"
|
||||
# enable_postgres_team_crd_superusers: "false"
|
||||
enable_pod_antiaffinity: "false"
|
||||
enable_pod_disruption_budget: "true"
|
||||
enable_postgres_team_crd: "false"
|
||||
enable_postgres_team_crd_superusers: "false"
|
||||
enable_readiness_probe: "false"
|
||||
enable_replica_load_balancer: "false"
|
||||
enable_replica_pooler_load_balancer: "false"
|
||||
# enable_shm_volume: "true"
|
||||
# enable_sidecars: "true"
|
||||
enable_secrets_deletion: "true"
|
||||
enable_shm_volume: "true"
|
||||
enable_sidecars: "true"
|
||||
enable_spilo_wal_path_compat: "true"
|
||||
enable_team_id_clustername_prefix: "false"
|
||||
enable_team_member_deprecation: "false"
|
||||
# enable_team_superuser: "false"
|
||||
enable_team_superuser: "false"
|
||||
enable_teams_api: "false"
|
||||
# etcd_host: ""
|
||||
etcd_host: ""
|
||||
external_traffic_policy: "Cluster"
|
||||
# gcp_credentials: ""
|
||||
# ignored_annotations: ""
|
||||
|
|
@ -76,56 +78,55 @@ data:
|
|||
# inherited_annotations: owned-by
|
||||
# inherited_labels: application,environment
|
||||
# kube_iam_role: ""
|
||||
# kubernetes_use_configmaps: "false"
|
||||
kubernetes_use_configmaps: "false"
|
||||
# log_s3_bucket: ""
|
||||
# logical_backup_azure_storage_account_name: ""
|
||||
# logical_backup_azure_storage_container: ""
|
||||
# logical_backup_azure_storage_account_key: ""
|
||||
# logical_backup_cpu_limit: ""
|
||||
# logical_backup_cpu_request: ""
|
||||
logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.12.2"
|
||||
logical_backup_cronjob_environment_secret: ""
|
||||
logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.14.0"
|
||||
# logical_backup_google_application_credentials: ""
|
||||
logical_backup_job_prefix: "logical-backup-"
|
||||
# logical_backup_memory_limit: ""
|
||||
# logical_backup_memory_request: ""
|
||||
logical_backup_provider: "s3"
|
||||
# logical_backup_s3_access_key_id: ""
|
||||
logical_backup_s3_access_key_id: ""
|
||||
logical_backup_s3_bucket: "my-bucket-url"
|
||||
# logical_backup_s3_bucket_prefix: "spilo"
|
||||
# logical_backup_s3_region: ""
|
||||
# logical_backup_s3_endpoint: ""
|
||||
# logical_backup_s3_secret_access_key: ""
|
||||
logical_backup_s3_bucket_prefix: "spilo"
|
||||
logical_backup_s3_region: ""
|
||||
logical_backup_s3_endpoint: ""
|
||||
logical_backup_s3_secret_access_key: ""
|
||||
logical_backup_s3_sse: "AES256"
|
||||
# logical_backup_s3_retention_time: ""
|
||||
logical_backup_s3_retention_time: ""
|
||||
logical_backup_schedule: "30 00 * * *"
|
||||
# logical_backup_cronjob_environment_secret: ""
|
||||
major_version_upgrade_mode: "manual"
|
||||
# major_version_upgrade_team_allow_list: ""
|
||||
master_dns_name_format: "{cluster}.{namespace}.{hostedzone}"
|
||||
# master_legacy_dns_name_format: "{cluster}.{team}.{hostedzone}"
|
||||
# master_pod_move_timeout: 20m
|
||||
# max_instances: "-1"
|
||||
# min_instances: "-1"
|
||||
master_legacy_dns_name_format: "{cluster}.{team}.{hostedzone}"
|
||||
master_pod_move_timeout: 20m
|
||||
# max_cpu_request: "1"
|
||||
max_instances: "-1"
|
||||
# max_memory_request: 4Gi
|
||||
# min_cpu_limit: 250m
|
||||
# min_memory_limit: 250Mi
|
||||
# minimal_major_version: "12"
|
||||
min_cpu_limit: 250m
|
||||
min_instances: "-1"
|
||||
min_memory_limit: 250Mi
|
||||
minimal_major_version: "13"
|
||||
# node_readiness_label: "status:ready"
|
||||
# node_readiness_label_merge: "OR"
|
||||
# oauth_token_secret_name: postgresql-operator
|
||||
# pam_configuration: |
|
||||
# https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees
|
||||
# pam_role_name: zalandos
|
||||
oauth_token_secret_name: postgresql-operator
|
||||
pam_configuration: "https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees"
|
||||
pam_role_name: zalandos
|
||||
patroni_api_check_interval: "1s"
|
||||
patroni_api_check_timeout: "5s"
|
||||
# password_rotation_interval: "90"
|
||||
# password_rotation_user_retention: "180"
|
||||
password_rotation_interval: "90"
|
||||
password_rotation_user_retention: "180"
|
||||
pdb_master_label_selector: "true"
|
||||
pdb_name_format: "postgres-{cluster}-pdb"
|
||||
persistent_volume_claim_retention_policy: "when_deleted:retain,when_scaled:retain"
|
||||
# pod_antiaffinity_preferred_during_scheduling: "false"
|
||||
# pod_antiaffinity_topology_key: "kubernetes.io/hostname"
|
||||
pod_antiaffinity_preferred_during_scheduling: "false"
|
||||
pod_antiaffinity_topology_key: "kubernetes.io/hostname"
|
||||
pod_deletion_wait_timeout: 10m
|
||||
# pod_environment_configmap: "default/my-custom-config"
|
||||
# pod_environment_secret: "my-custom-secret"
|
||||
|
|
@ -133,17 +134,17 @@ data:
|
|||
pod_management_policy: "ordered_ready"
|
||||
# pod_priority_class_name: "postgres-pod-priority"
|
||||
pod_role_label: spilo-role
|
||||
# pod_service_account_definition: ""
|
||||
pod_service_account_definition: ""
|
||||
pod_service_account_name: "postgres-pod"
|
||||
# pod_service_account_role_binding_definition: ""
|
||||
pod_service_account_role_binding_definition: ""
|
||||
pod_terminate_grace_period: 5m
|
||||
# postgres_superuser_teams: "postgres_superusers"
|
||||
# protected_role_names: "admin,cron_admin"
|
||||
postgres_superuser_teams: "postgres_superusers"
|
||||
protected_role_names: "admin,cron_admin"
|
||||
ready_wait_interval: 3s
|
||||
ready_wait_timeout: 30s
|
||||
repair_period: 5m
|
||||
replica_dns_name_format: "{cluster}-repl.{namespace}.{hostedzone}"
|
||||
# replica_legacy_dns_name_format: "{cluster}-repl.{team}.{hostedzone}"
|
||||
replica_legacy_dns_name_format: "{cluster}-repl.{team}.{hostedzone}"
|
||||
replication_username: standby
|
||||
resource_check_interval: 3s
|
||||
resource_check_timeout: 10m
|
||||
|
|
@ -153,7 +154,7 @@ data:
|
|||
secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
|
||||
share_pgsocket_with_sidecars: "false"
|
||||
# sidecar_docker_images: ""
|
||||
# set_memory_request_to_limit: "false"
|
||||
set_memory_request_to_limit: "false"
|
||||
spilo_allow_privilege_escalation: "true"
|
||||
# spilo_runasuser: 101
|
||||
# spilo_runasgroup: 103
|
||||
|
|
@ -161,10 +162,10 @@ data:
|
|||
spilo_privileged: "false"
|
||||
storage_resize_mode: "pvc"
|
||||
super_username: postgres
|
||||
# target_major_version: "16"
|
||||
# team_admin_role: "admin"
|
||||
# team_api_role_configuration: "log_statement:all"
|
||||
# teams_api_url: http://fake-teams-api.default.svc.cluster.local
|
||||
target_major_version: "17"
|
||||
team_admin_role: "admin"
|
||||
team_api_role_configuration: "log_statement:all"
|
||||
teams_api_url: http://fake-teams-api.default.svc.cluster.local
|
||||
# toleration: "key:db-only,operator:Exists,effect:NoSchedule"
|
||||
# wal_az_storage_account: ""
|
||||
# wal_gs_bucket: ""
|
||||
|
|
|
|||
|
|
@ -31,11 +31,21 @@ spec:
|
|||
version: "13"
|
||||
sidecars:
|
||||
- name: "exporter"
|
||||
image: "wrouesnel/postgres_exporter"
|
||||
image: "quay.io/prometheuscommunity/postgres-exporter:v0.15.0"
|
||||
ports:
|
||||
- name: exporter
|
||||
containerPort: 9187
|
||||
protocol: TCP
|
||||
env:
|
||||
- name: DATA_SOURCE_URI
|
||||
value: ":5432/?sslmode=disable"
|
||||
- name: DATA_SOURCE_USER
|
||||
value: "postgres"
|
||||
- name: DATA_SOURCE_PASS
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: postgres.test-pg.credentials.postgresql.acid.zalan.do
|
||||
key: password
|
||||
resources:
|
||||
limits:
|
||||
cpu: 500m
|
||||
|
|
|
|||
|
|
@@ -17,4 +17,4 @@ spec:
  preparedDatabases:
    bar: {}
  postgresql:
    version: "12"
    version: "13"

@@ -19,4 +19,4 @@ spec:
  preparedDatabases:
    bar: {}
  postgresql:
    version: "16"
    version: "17"
@@ -94,6 +94,7 @@ rules:
  - create
  - delete
  - get
  - patch
  - update
# to check nodes for node readiness label
- apiGroups:
@@ -166,6 +167,7 @@ rules:
  - get
  - list
  - patch
  - update
# to CRUD cron jobs for logical backups
- apiGroups:
  - batch

@@ -174,6 +174,7 @@ rules:
  - get
  - list
  - patch
  - update
# to CRUD cron jobs for logical backups
- apiGroups:
  - batch
@@ -66,7 +66,7 @@ spec:
        type: string
      docker_image:
        type: string
        default: "ghcr.io/zalando/spilo-16:3.2-p3"
        default: "ghcr.io/zalando/spilo-17:4.0-p2"
      enable_crd_registration:
        type: boolean
        default: true
@@ -158,17 +158,17 @@ spec:
            properties:
              major_version_upgrade_mode:
                type: string
                default: "off"
                default: "manual"
              major_version_upgrade_team_allow_list:
                type: array
                items:
                  type: string
              minimal_major_version:
                type: string
                default: "12"
                default: "13"
              target_major_version:
                type: string
                default: "16"
                default: "17"
          kubernetes:
            type: object
            properties:
@@ -209,9 +209,9 @@ spec:
              enable_init_containers:
                type: boolean
                default: true
              enable_secrets_deletion:
              enable_owner_references:
                type: boolean
                default: true
                default: false
              enable_persistent_volume_claim_deletion:
                type: boolean
                default: true
@@ -224,6 +224,9 @@ spec:
              enable_readiness_probe:
                type: boolean
                default: false
              enable_secrets_deletion:
                type: boolean
                default: true
              enable_sidecars:
                type: boolean
                default: true
@@ -371,28 +374,28 @@ spec:
            properties:
              default_cpu_limit:
                type: string
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
              default_cpu_request:
                type: string
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
              default_memory_limit:
                type: string
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
              default_memory_request:
                type: string
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
              max_cpu_request:
                type: string
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
              max_memory_request:
                type: string
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
              min_cpu_limit:
                type: string
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
              min_memory_limit:
                type: string
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
          timeouts:
            type: object
            properties:
@@ -467,7 +470,6 @@ spec:
                type: string
              additional_secret_mount_path:
                type: string
                default: "/meta/credentials"
              aws_region:
                type: string
                default: "eu-central-1"
@@ -506,7 +508,7 @@ spec:
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
              logical_backup_docker_image:
                type: string
                default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.12.2"
                default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.14.0"
              logical_backup_google_application_credentials:
                type: string
              logical_backup_job_prefix:
@@ -19,7 +19,7 @@ spec:
      serviceAccountName: postgres-operator
      containers:
      - name: postgres-operator
        image: ghcr.io/zalando/postgres-operator:v1.12.2
        image: ghcr.io/zalando/postgres-operator:v1.14.0
        imagePullPolicy: IfNotPresent
        resources:
          requests:
@@ -3,7 +3,7 @@ kind: OperatorConfiguration
metadata:
  name: postgresql-operator-default-configuration
configuration:
  docker_image: ghcr.io/zalando/spilo-16:3.2-p3
  docker_image: ghcr.io/zalando/spilo-17:4.0-p2
  # enable_crd_registration: true
  # crd_categories:
  #   - all
@@ -36,11 +36,11 @@ configuration:
    replication_username: standby
    super_username: postgres
  major_version_upgrade:
    major_version_upgrade_mode: "off"
    major_version_upgrade_mode: "manual"
    # major_version_upgrade_team_allow_list:
    #   - acid
    minimal_major_version: "12"
    target_major_version: "16"
    minimal_major_version: "13"
    target_major_version: "17"
  kubernetes:
    # additional_pod_capabilities:
    #   - "SYS_NICE"
@@ -59,11 +59,12 @@ configuration:
    # enable_cross_namespace_secret: "false"
    enable_finalizers: false
    enable_init_containers: true
    enable_secrets_deletion: true
    enable_owner_references: false
    enable_persistent_volume_claim_deletion: true
    enable_pod_antiaffinity: false
    enable_pod_disruption_budget: true
    enable_readiness_probe: false
    enable_secrets_deletion: true
    enable_sidecars: true
    # ignored_annotations:
    #   - k8s.v1.cni.cncf.io/network-status
@@ -167,7 +168,7 @@ configuration:
    # logical_backup_cpu_request: ""
    # logical_backup_memory_limit: ""
    # logical_backup_memory_request: ""
    logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.12.2"
    logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.14.0"
    # logical_backup_google_application_credentials: ""
    logical_backup_job_prefix: "logical-backup-"
    logical_backup_provider: "s3"
@@ -228,7 +228,7 @@ spec:
            type: array
            items:
              type: string
              pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$'
              pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$'
          masterServiceAnnotations:
            type: object
            additionalProperties:
@@ -377,12 +377,11 @@ spec:
          version:
            type: string
            enum:
              - "11"
              - "12"
              - "13"
              - "14"
              - "15"
              - "16"
              - "17"
          parameters:
            type: object
            additionalProperties:
@@ -517,6 +516,9 @@ spec:
                type: string
              batchSize:
                type: integer
              cpu:
                type: string
                pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
              database:
                type: string
              enableRecovery:
@@ -525,6 +527,9 @@ spec:
                type: object
                additionalProperties:
                  type: string
              memory:
                type: string
                pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
              tables:
                type: object
                additionalProperties:
@@ -536,6 +541,8 @@ spec:
                    type: string
                  idColumn:
                    type: string
                  ignoreRecovery:
                    type: boolean
                  payloadColumn:
                    type: string
                  recoveryEventType:
@@ -8,7 +8,7 @@ spec:
    size: 1Gi
  numberOfInstances: 1
  postgresql:
    version: "16"
    version: "17"
  # Make this a standby cluster and provide either the s3 bucket path of source cluster or the remote primary host for continuous streaming.
  standby:
    # s3_wal_path: "s3://mybucket/spilo/acid-minimal-cluster/abcd1234-2a4b-4b2a-8c9c-c1234defg567/wal/14/"
@@ -599,12 +599,6 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
			"version": {
				Type: "string",
				Enum: []apiextv1.JSON{
					{
						Raw: []byte(`"11"`),
					},
					{
						Raw: []byte(`"12"`),
					},
					{
						Raw: []byte(`"13"`),
					},
@@ -617,6 +611,9 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
					{
						Raw: []byte(`"16"`),
					},
					{
						Raw: []byte(`"17"`),
					},
				},
			},
			"parameters": {
@@ -1208,7 +1205,8 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
				Type: "boolean",
			},
			"enable_spilo_wal_path_compat": {
				Type: "boolean",
				Type:        "boolean",
				Description: "deprecated",
			},
			"enable_team_id_clustername_prefix": {
				Type: "boolean",
@@ -1370,7 +1368,7 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
			"enable_init_containers": {
				Type: "boolean",
			},
			"enable_secrets_deletion": {
			"enable_owner_references": {
				Type: "boolean",
			},
			"enable_persistent_volume_claim_deletion": {
@@ -1385,6 +1383,9 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
			"enable_readiness_probe": {
				Type: "boolean",
			},
			"enable_secrets_deletion": {
				Type: "boolean",
			},
			"enable_sidecars": {
				Type: "boolean",
			},
@ -1614,35 +1615,35 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
|
|||
Properties: map[string]apiextv1.JSONSchemaProps{
|
||||
"default_cpu_limit": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
|
||||
},
|
||||
"default_cpu_request": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
|
||||
},
|
||||
"default_memory_limit": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
|
||||
},
|
||||
"default_memory_request": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
|
||||
},
|
||||
"max_cpu_request": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
|
||||
},
|
||||
"max_memory_request": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
|
||||
},
|
||||
"min_cpu_limit": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$",
|
||||
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
|
||||
},
|
||||
"min_memory_limit": {
|
||||
Type: "string",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$",
|
||||
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
|
||||
},
|
||||
},
|
||||
},
|
||||
|
|
|
|||
|
|
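The `|^$` alternative appended to each resource pattern above lets an explicitly empty string pass CRD validation, so a limit or request can be left unset. A quick standalone check of that behaviour (illustrative snippet, not part of the repository):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// same CPU pattern as in the CRD validation above, including the new empty-string alternative
	cpuPattern := regexp.MustCompile(`^(\d+m|\d+(\.\d{1,3})?)$|^$`)
	for _, v := range []string{"250m", "1.5", ""} {
		fmt.Printf("%q valid: %v\n", v, cpuPattern.MatchString(v)) // all three now match
	}
}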
@@ -47,14 +47,15 @@ type PostgresUsersConfiguration struct {

 // MajorVersionUpgradeConfiguration defines how to execute major version upgrades of Postgres.
 type MajorVersionUpgradeConfiguration struct {
-	MajorVersionUpgradeMode string `json:"major_version_upgrade_mode" default:"off"` // off - no actions, manual - manifest triggers action, full - manifest and minimal version violation trigger upgrade
+	MajorVersionUpgradeMode string `json:"major_version_upgrade_mode" default:"manual"` // off - no actions, manual - manifest triggers action, full - manifest and minimal version violation trigger upgrade
 	MajorVersionUpgradeTeamAllowList []string `json:"major_version_upgrade_team_allow_list,omitempty"`
-	MinimalMajorVersion string `json:"minimal_major_version" default:"12"`
-	TargetMajorVersion string `json:"target_major_version" default:"16"`
+	MinimalMajorVersion string `json:"minimal_major_version" default:"13"`
+	TargetMajorVersion string `json:"target_major_version" default:"17"`
 }

 // KubernetesMetaConfiguration defines k8s conf required for all Postgres clusters and the operator itself
 type KubernetesMetaConfiguration struct {
+	EnableOwnerReferences *bool `json:"enable_owner_references,omitempty"`
 	PodServiceAccountName string `json:"pod_service_account_name,omitempty"`
 	// TODO: change it to the proper json
 	PodServiceAccountDefinition string `json:"pod_service_account_definition,omitempty"`

@@ -159,7 +160,7 @@ type AWSGCPConfiguration struct {
 	LogS3Bucket string `json:"log_s3_bucket,omitempty"`
 	KubeIAMRole string `json:"kube_iam_role,omitempty"`
 	AdditionalSecretMount string `json:"additional_secret_mount,omitempty"`
-	AdditionalSecretMountPath string `json:"additional_secret_mount_path" default:"/meta/credentials"`
+	AdditionalSecretMountPath string `json:"additional_secret_mount_path,omitempty"`
 	EnableEBSGp3Migration bool `json:"enable_ebs_gp3_migration" default:"false"`
 	EnableEBSGp3MigrationMaxSize int64 `json:"enable_ebs_gp3_migration_max_size" default:"1000"`
 }
@@ -134,7 +134,7 @@ type Volume struct {
 	Size string `json:"size"`
 	StorageClass string `json:"storageClass,omitempty"`
 	SubPath string `json:"subPath,omitempty"`
-	IsSubPathExpr *bool `json:"isSubPathExpr,omitemtpy"`
+	IsSubPathExpr *bool `json:"isSubPathExpr,omitempty"`
 	Iops *int64 `json:"iops,omitempty"`
 	Throughput *int64 `json:"throughput,omitempty"`
 	VolumeType string `json:"type,omitempty"`

@@ -145,7 +145,7 @@ type AdditionalVolume struct {
 	Name string `json:"name"`
 	MountPath string `json:"mountPath"`
 	SubPath string `json:"subPath,omitempty"`
-	IsSubPathExpr *bool `json:"isSubPathExpr,omitemtpy"`
+	IsSubPathExpr *bool `json:"isSubPathExpr,omitempty"`
 	TargetContainers []string `json:"targetContainers"`
 	VolumeSource v1.VolumeSource `json:"volumeSource"`
 }

@@ -221,6 +221,7 @@ type Sidecar struct {
 	DockerImage string `json:"image,omitempty"`
 	Ports []v1.ContainerPort `json:"ports,omitempty"`
 	Env []v1.EnvVar `json:"env,omitempty"`
+	Command []string `json:"command,omitempty"`
 }

 // UserFlags defines flags (such as superuser, nologin) that could be assigned to individual users

@@ -298,6 +299,8 @@ type Stream struct {
 	Tables map[string]StreamTable `json:"tables"`
 	Filter map[string]*string `json:"filter,omitempty"`
 	BatchSize *uint32 `json:"batchSize,omitempty"`
+	CPU *string `json:"cpu,omitempty"`
+	Memory *string `json:"memory,omitempty"`
 	EnableRecovery *bool `json:"enableRecovery,omitempty"`
 }

@@ -305,6 +308,7 @@ type Stream struct {
 type StreamTable struct {
 	EventType string `json:"eventType"`
 	RecoveryEventType string `json:"recoveryEventType,omitempty"`
+	IgnoreRecovery *bool `json:"ignoreRecovery,omitempty"`
 	IdColumn *string `json:"idColumn,omitempty"`
 	PayloadColumn *string `json:"payloadColumn,omitempty"`
 }
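For context, a minimal sketch of a stream value using the fields added above (BatchSize already existed; CPU and Memory are new pointer fields). The mirror type and all values here are invented for illustration; the real type is acidv1.Stream in pkg/apis/acid.zalan.do/v1:

package main

import "fmt"

// streamLike mirrors the relevant shape of acidv1.Stream shown in the diff above.
type streamLike struct {
	ApplicationId string
	Database      string
	BatchSize     *uint32
	CPU           *string // new: per-stream CPU request
	Memory        *string // new: per-stream memory request
}

func main() {
	batch := uint32(100)
	cpu, mem := "250m", "500Mi"
	s := streamLike{
		ApplicationId: "demo-app",
		Database:      "demo_db",
		BatchSize:     &batch,
		CPU:           &cpu,
		Memory:        &mem,
	}
	fmt.Printf("stream %s: cpu=%s memory=%s\n", s.ApplicationId, *s.CPU, *s.Memory)
}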
@@ -123,6 +123,8 @@ var maintenanceWindows = []struct {
 	{"expect error as weekday is empty", []byte(`":00:00-10:00"`), MaintenanceWindow{}, errors.New(`could not parse weekday: incorrect weekday`)},
+	{"expect error as maintenance window set seconds", []byte(`"Mon:00:00:00-10:00:00"`), MaintenanceWindow{}, errors.New(`incorrect maintenance window format`)},
+	{"expect error as 'To' time set seconds", []byte(`"Mon:00:00-00:00:00"`), MaintenanceWindow{}, errors.New("could not parse end time: incorrect time format")},
 	// ideally, should be implemented
 	{"expect error as 'To' has a weekday", []byte(`"Mon:00:00-Fri:00:00"`), MaintenanceWindow{}, errors.New("could not parse end time: incorrect time format")},
 	{"expect error as 'To' time is missing", []byte(`"Mon:00:00"`), MaintenanceWindow{}, errors.New("incorrect maintenance window format")}}

 var postgresStatus = []struct {

@@ -217,7 +219,7 @@ var unmarshalCluster = []struct {
 		"127.0.0.1/32"
 	  ],
 	  "postgresql": {
-		"version": "16",
+		"version": "17",
 		"parameters": {
 		  "shared_buffers": "32MB",
 		  "max_connections": "10",

@@ -277,7 +279,7 @@ var unmarshalCluster = []struct {
 		},
 		Spec: PostgresSpec{
 			PostgresqlParam: PostgresqlParam{
-				PgVersion: "16",
+				PgVersion: "17",
 				Parameters: map[string]string{
 					"shared_buffers": "32MB",
 					"max_connections": "10",

@@ -337,7 +339,7 @@ var unmarshalCluster = []struct {
 		},
 		Error: "",
 	},
-	marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"16","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"pod_priority_class_name":"spilo-pod-priority","volume":{"size":"5Gi","storageClass":"SSD", "subPath": "subdir"},"enableShmVolume":false,"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"acid","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`),
+	marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"17","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"pod_priority_class_name":"spilo-pod-priority","volume":{"size":"5Gi","storageClass":"SSD", "subPath": "subdir"},"enableShmVolume":false,"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"acid","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`),
 	err: nil},
 	{
 		about: "example with clone",

@@ -402,7 +404,7 @@ var postgresqlList = []struct {
 	out PostgresqlList
 	err error
 }{
-	{"expect success", []byte(`{"apiVersion":"v1","items":[{"apiVersion":"acid.zalan.do/v1","kind":"Postgresql","metadata":{"labels":{"team":"acid"},"name":"acid-testcluster42","namespace":"default","resourceVersion":"30446957","selfLink":"/apis/acid.zalan.do/v1/namespaces/default/postgresqls/acid-testcluster42","uid":"857cd208-33dc-11e7-b20a-0699041e4b03"},"spec":{"allowedSourceRanges":["185.85.220.0/22"],"numberOfInstances":1,"postgresql":{"version":"16"},"teamId":"acid","volume":{"size":"10Gi"}},"status":{"PostgresClusterStatus":"Running"}}],"kind":"List","metadata":{},"resourceVersion":"","selfLink":""}`),
+	{"expect success", []byte(`{"apiVersion":"v1","items":[{"apiVersion":"acid.zalan.do/v1","kind":"Postgresql","metadata":{"labels":{"team":"acid"},"name":"acid-testcluster42","namespace":"default","resourceVersion":"30446957","selfLink":"/apis/acid.zalan.do/v1/namespaces/default/postgresqls/acid-testcluster42","uid":"857cd208-33dc-11e7-b20a-0699041e4b03"},"spec":{"allowedSourceRanges":["185.85.220.0/22"],"numberOfInstances":1,"postgresql":{"version":"17"},"teamId":"acid","volume":{"size":"10Gi"}},"status":{"PostgresClusterStatus":"Running"}}],"kind":"List","metadata":{},"resourceVersion":"","selfLink":""}`),
 		PostgresqlList{
 			TypeMeta: metav1.TypeMeta{
 				Kind: "List",

@@ -423,7 +425,7 @@ var postgresqlList = []struct {
 				},
 				Spec: PostgresSpec{
 					ClusterName: "testcluster42",
-					PostgresqlParam: PostgresqlParam{PgVersion: "16"},
+					PostgresqlParam: PostgresqlParam{PgVersion: "17"},
 					Volume: Volume{Size: "10Gi"},
 					TeamID: "acid",
 					AllowedSourceRanges: []string{"185.85.220.0/22"},
@@ -2,7 +2,7 @@
 // +build !ignore_autogenerated

 /*
-Copyright 2024 Compose, Zalando SE
+Copyright 2025 Compose, Zalando SE

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

@@ -197,6 +197,11 @@ func (in *ConnectionPoolerConfiguration) DeepCopy() *ConnectionPoolerConfigurati
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *KubernetesMetaConfiguration) DeepCopyInto(out *KubernetesMetaConfiguration) {
 	*out = *in
+	if in.EnableOwnerReferences != nil {
+		in, out := &in.EnableOwnerReferences, &out.EnableOwnerReferences
+		*out = new(bool)
+		**out = **in
+	}
 	if in.SpiloAllowPrivilegeEscalation != nil {
 		in, out := &in.SpiloAllowPrivilegeEscalation, &out.SpiloAllowPrivilegeEscalation
 		*out = new(bool)

@@ -1318,6 +1323,11 @@ func (in *Sidecar) DeepCopyInto(out *Sidecar) {
 			(*in)[i].DeepCopyInto(&(*out)[i])
 		}
 	}
+	if in.Command != nil {
+		in, out := &in.Command, &out.Command
+		*out = make([]string, len(*in))
+		copy(*out, *in)
+	}
 	return
 }

@@ -1377,6 +1387,16 @@ func (in *Stream) DeepCopyInto(out *Stream) {
 		*out = new(uint32)
 		**out = **in
 	}
+	if in.CPU != nil {
+		in, out := &in.CPU, &out.CPU
+		*out = new(string)
+		**out = **in
+	}
+	if in.Memory != nil {
+		in, out := &in.Memory, &out.Memory
+		*out = new(string)
+		**out = **in
+	}
 	if in.EnableRecovery != nil {
 		in, out := &in.EnableRecovery, &out.EnableRecovery
 		*out = new(bool)

@@ -1398,6 +1418,11 @@ func (in *Stream) DeepCopy() *Stream {
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *StreamTable) DeepCopyInto(out *StreamTable) {
 	*out = *in
+	if in.IgnoreRecovery != nil {
+		in, out := &in.IgnoreRecovery, &out.IgnoreRecovery
+		*out = new(bool)
+		**out = **in
+	}
 	if in.IdColumn != nil {
 		in, out := &in.IdColumn, &out.IdColumn
 		*out = new(string)
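The generated deepcopy additions above all follow the same pattern: every new pointer field gets a freshly allocated target, because a shallow struct copy would share the pointed-to value between original and copy. A minimal sketch of the difference (hypothetical standalone program, not generated code):

package main

import "fmt"

type streamLike struct{ CPU *string }

func main() {
	cpu := "250m"
	orig := streamLike{CPU: &cpu}

	shallow := orig // copies the pointer, not the value
	deep := streamLike{}
	if orig.CPU != nil {
		v := *orig.CPU // allocate a fresh string, as DeepCopyInto does
		deep.CPU = &v
	}

	*shallow.CPU = "500m"             // mutates orig as well
	fmt.Println(*orig.CPU, *deep.CPU) // prints: 500m 250m
}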
@@ -3,7 +3,6 @@ package cluster
 // Postgres CustomResourceDefinition object i.e. Spilo

 import (
 	"context"
 	"database/sql"
-	"encoding/json"
 	"fmt"

@@ -15,6 +14,7 @@ import (

 	"github.com/sirupsen/logrus"
 	acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
+	zalandov1 "github.com/zalando/postgres-operator/pkg/apis/zalando.org/v1"

 	"github.com/zalando/postgres-operator/pkg/generated/clientset/versioned/scheme"
 	"github.com/zalando/postgres-operator/pkg/spec"

@@ -30,7 +30,6 @@ import (
 	appsv1 "k8s.io/api/apps/v1"
 	batchv1 "k8s.io/api/batch/v1"
 	v1 "k8s.io/api/core/v1"
-	apipolicyv1 "k8s.io/api/policy/v1"
 	policyv1 "k8s.io/api/policy/v1"
 	rbacv1 "k8s.io/api/rbac/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

@@ -60,13 +59,18 @@ type Config struct {
 }

 type kubeResources struct {
-	Services            map[PostgresRole]*v1.Service
-	Endpoints           map[PostgresRole]*v1.Endpoints
-	Secrets             map[types.UID]*v1.Secret
-	Statefulset         *appsv1.StatefulSet
-	PodDisruptionBudget *policyv1.PodDisruptionBudget
+	Services                      map[PostgresRole]*v1.Service
+	Endpoints                     map[PostgresRole]*v1.Endpoints
+	PatroniEndpoints              map[string]*v1.Endpoints
+	PatroniConfigMaps             map[string]*v1.ConfigMap
+	Secrets                       map[types.UID]*v1.Secret
+	Statefulset                   *appsv1.StatefulSet
+	VolumeClaims                  map[types.UID]*v1.PersistentVolumeClaim
+	PrimaryPodDisruptionBudget    *policyv1.PodDisruptionBudget
+	CriticalOpPodDisruptionBudget *policyv1.PodDisruptionBudget
 	LogicalBackupJob *batchv1.CronJob
 	Streams map[string]*zalandov1.FabricEventStream
 	//Pods are treated separately
 	//PVCs are treated separately
 }

 // Cluster describes postgresql cluster

@@ -102,10 +106,17 @@ type Cluster struct {
 }

 type compareStatefulsetResult struct {
-	match bool
-	replace bool
-	rollingUpdate bool
-	reasons []string
+	match                 bool
+	replace               bool
+	rollingUpdate         bool
+	reasons               []string
+	deletedPodAnnotations []string
 }

+type compareLogicalBackupJobResult struct {
+	match                 bool
+	reasons               []string
+	deletedPodAnnotations []string
+}
+
 // New creates a new cluster. This function should be called from a controller.

@@ -132,9 +143,13 @@ func New(cfg Config, kubeClient k8sutil.KubernetesClient, pgSpec acidv1.Postgres
 		systemUsers: make(map[string]spec.PgUser),
 		podSubscribers: make(map[spec.NamespacedName]chan PodEvent),
 		kubeResources: kubeResources{
-			Secrets:   make(map[types.UID]*v1.Secret),
-			Services:  make(map[PostgresRole]*v1.Service),
-			Endpoints: make(map[PostgresRole]*v1.Endpoints)},
+			Secrets:           make(map[types.UID]*v1.Secret),
+			Services:          make(map[PostgresRole]*v1.Service),
+			Endpoints:         make(map[PostgresRole]*v1.Endpoints),
+			PatroniEndpoints:  make(map[string]*v1.Endpoints),
+			PatroniConfigMaps: make(map[string]*v1.ConfigMap),
+			VolumeClaims:      make(map[types.UID]*v1.PersistentVolumeClaim),
+			Streams:           make(map[string]*zalandov1.FabricEventStream)},
 		userSyncStrategy: users.DefaultUserSyncStrategy{
 			PasswordEncryption: passwordEncryption,
 			RoleDeletionSuffix: cfg.OpConfig.RoleDeletionSuffix,

@@ -353,14 +368,10 @@ func (c *Cluster) Create() (err error) {
 	c.logger.Infof("secrets have been successfully created")
 	c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "Secrets", "The secrets have been successfully created")

-	if c.PodDisruptionBudget != nil {
-		return fmt.Errorf("pod disruption budget already exists in the cluster")
+	if err = c.createPodDisruptionBudgets(); err != nil {
+		return fmt.Errorf("could not create pod disruption budgets: %v", err)
 	}
-	pdb, err := c.createPodDisruptionBudget()
-	if err != nil {
-		return fmt.Errorf("could not create pod disruption budget: %v", err)
-	}
-	c.logger.Infof("pod disruption budget %q has been successfully created", util.NameFromMeta(pdb.ObjectMeta))
+	c.logger.Info("pod disruption budgets have been successfully created")

 	if c.Statefulset != nil {
 		return fmt.Errorf("statefulset already exists in the cluster")

@@ -381,6 +392,16 @@ func (c *Cluster) Create() (err error) {
 	c.logger.Infof("pods are ready")
 	c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "StatefulSet", "Pods are ready")

+	// sync volume may already transition volumes to gp3, if iops/throughput or type is specified
+	if err = c.syncVolumes(); err != nil {
+		return err
+	}
+
+	// sync resources created by Patroni
+	if err = c.syncPatroniResources(); err != nil {
+		c.logger.Warnf("Patroni resources not yet synced: %v", err)
+	}
+
 	// create database objects unless we are running without pods or disabled
 	// that feature explicitly
 	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) {

@@ -406,10 +427,6 @@ func (c *Cluster) Create() (err error) {
 		c.logger.Info("a k8s cron job for logical backup has been successfully created")
 	}

-	if err := c.listResources(); err != nil {
-		c.logger.Errorf("could not list resources: %v", err)
-	}
-
 	// Create connection pooler deployment and services if necessary. Since we
 	// need to perform some operations with the database itself (e.g. install
 	// lookup function), do it as the last step, when everything is available.

@@ -434,10 +451,15 @@ func (c *Cluster) Create() (err error) {
 		}
 	}

+	if err := c.listResources(); err != nil {
+		c.logger.Errorf("could not list resources: %v", err)
+	}
+
 	return nil
 }

 func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compareStatefulsetResult {
+	deletedPodAnnotations := []string{}
 	reasons := make([]string, 0)
 	var match, needsRollUpdate, needsReplace bool

@@ -447,7 +469,12 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
 		match = false
 		reasons = append(reasons, "new statefulset's number of replicas does not match the current one")
 	}
-	if changed, reason := c.compareAnnotations(c.Statefulset.Annotations, statefulSet.Annotations); changed {
+	if !reflect.DeepEqual(c.Statefulset.OwnerReferences, statefulSet.OwnerReferences) {
+		match = false
+		needsReplace = true
+		reasons = append(reasons, "new statefulset's ownerReferences do not match")
+	}
+	if changed, reason := c.compareAnnotations(c.Statefulset.Annotations, statefulSet.Annotations, nil); changed {
 		match = false
 		needsReplace = true
 		reasons = append(reasons, "new statefulset's annotations do not match: "+reason)

@@ -521,7 +548,7 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
 		}
 	}

-	if changed, reason := c.compareAnnotations(c.Statefulset.Spec.Template.Annotations, statefulSet.Spec.Template.Annotations); changed {
+	if changed, reason := c.compareAnnotations(c.Statefulset.Spec.Template.Annotations, statefulSet.Spec.Template.Annotations, &deletedPodAnnotations); changed {
 		match = false
 		needsReplace = true
 		reasons = append(reasons, "new statefulset's pod template metadata annotations does not match "+reason)

@@ -543,9 +570,9 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
 			reasons = append(reasons, fmt.Sprintf("new statefulset's name for volume %d does not match the current one", i))
 			continue
 		}
-		if changed, reason := c.compareAnnotations(c.Statefulset.Spec.VolumeClaimTemplates[i].Annotations, statefulSet.Spec.VolumeClaimTemplates[i].Annotations); changed {
+		if changed, reason := c.compareAnnotations(c.Statefulset.Spec.VolumeClaimTemplates[i].Annotations, statefulSet.Spec.VolumeClaimTemplates[i].Annotations, nil); changed {
 			needsReplace = true
-			reasons = append(reasons, fmt.Sprintf("new statefulset's annotations for volume %q does not match the current one: %s", name, reason))
+			reasons = append(reasons, fmt.Sprintf("new statefulset's annotations for volume %q do not match the current ones: %s", name, reason))
 		}
 		if !reflect.DeepEqual(c.Statefulset.Spec.VolumeClaimTemplates[i].Spec, statefulSet.Spec.VolumeClaimTemplates[i].Spec) {
 			name := c.Statefulset.Spec.VolumeClaimTemplates[i].Name

@@ -581,7 +608,7 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
 		match = false
 	}

-	return &compareStatefulsetResult{match: match, reasons: reasons, rollingUpdate: needsRollUpdate, replace: needsReplace}
+	return &compareStatefulsetResult{match: match, reasons: reasons, rollingUpdate: needsRollUpdate, replace: needsReplace, deletedPodAnnotations: deletedPodAnnotations}
 }

 type containerCondition func(a, b v1.Container) bool

@@ -676,7 +703,7 @@ func compareEnv(a, b []v1.EnvVar) bool {
 	if len(a) != len(b) {
 		return false
 	}
-	equal := true
+	var equal bool
 	for _, enva := range a {
 		hasmatch := false
 		for _, envb := range b {

@@ -783,7 +810,7 @@ func volumeMountExists(mount v1.VolumeMount, mounts []v1.VolumeMount) bool {
 	return false
 }

-func (c *Cluster) compareAnnotations(old, new map[string]string) (bool, string) {
+func (c *Cluster) compareAnnotations(old, new map[string]string, removedList *[]string) (bool, string) {
 	reason := ""
 	ignoredAnnotations := make(map[string]bool)
 	for _, ignore := range c.OpConfig.IgnoredAnnotations {

@@ -796,6 +823,9 @@ func (c *Cluster) compareAnnotations(old, new map[string]string) (bool, string)
 		}
 		if _, ok := new[key]; !ok {
 			reason += fmt.Sprintf(" Removed %q.", key)
+			if removedList != nil {
+				*removedList = append(*removedList, key)
+			}
 		}
 	}
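The new compareAnnotations contract in the hunks above: callers that care about annotations deleted from the desired state pass a list pointer and receive the removed keys; callers that don't simply pass nil. A standalone illustration of that behaviour (not operator code):

package main

import "fmt"

func compareAnnotations(old, new map[string]string, removedList *[]string) (bool, string) {
	changed, reason := false, ""
	for key := range old {
		if _, ok := new[key]; !ok {
			changed = true
			reason += fmt.Sprintf(" Removed %q.", key)
			if removedList != nil {
				*removedList = append(*removedList, key) // collected so the caller can patch the key out later
			}
		}
	}
	return changed, reason
}

func main() {
	removed := []string{}
	changed, reason := compareAnnotations(
		map[string]string{"owned-by": "team-a", "ttl": "7d"},
		map[string]string{"owned-by": "team-a"},
		&removed)
	fmt.Println(changed, reason, removed) // true  Removed "ttl". [ttl]
}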
@@ -831,53 +861,65 @@ func (c *Cluster) compareServices(old, new *v1.Service) (bool, string) {
 		}
 	}

+	if !reflect.DeepEqual(old.ObjectMeta.OwnerReferences, new.ObjectMeta.OwnerReferences) {
+		return false, "new service's owner references do not match the current ones"
+	}
+
 	return true, ""
 }

-func (c *Cluster) compareLogicalBackupJob(cur, new *batchv1.CronJob) (match bool, reason string) {
+func (c *Cluster) compareLogicalBackupJob(cur, new *batchv1.CronJob) *compareLogicalBackupJobResult {
+	deletedPodAnnotations := []string{}
+	reasons := make([]string, 0)
+	match := true

 	if cur.Spec.Schedule != new.Spec.Schedule {
-		return false, fmt.Sprintf("new job's schedule %q does not match the current one %q",
-			new.Spec.Schedule, cur.Spec.Schedule)
+		match = false
+		reasons = append(reasons, fmt.Sprintf("new job's schedule %q does not match the current one %q", new.Spec.Schedule, cur.Spec.Schedule))
 	}

 	newImage := new.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Image
 	curImage := cur.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Image
 	if newImage != curImage {
-		return false, fmt.Sprintf("new job's image %q does not match the current one %q",
-			newImage, curImage)
+		match = false
+		reasons = append(reasons, fmt.Sprintf("new job's image %q does not match the current one %q", newImage, curImage))
 	}

 	newPodAnnotation := new.Spec.JobTemplate.Spec.Template.Annotations
 	curPodAnnotation := cur.Spec.JobTemplate.Spec.Template.Annotations
-	if changed, reason := c.compareAnnotations(curPodAnnotation, newPodAnnotation); changed {
-		return false, fmt.Sprintf("new job's pod template metadata annotations does not match " + reason)
+	if changed, reason := c.compareAnnotations(curPodAnnotation, newPodAnnotation, &deletedPodAnnotations); changed {
+		match = false
+		reasons = append(reasons, fmt.Sprint("new job's pod template metadata annotations do not match "+reason))
 	}

 	newPgVersion := getPgVersion(new)
 	curPgVersion := getPgVersion(cur)
 	if newPgVersion != curPgVersion {
-		return false, fmt.Sprintf("new job's env PG_VERSION %q does not match the current one %q",
-			newPgVersion, curPgVersion)
+		match = false
+		reasons = append(reasons, fmt.Sprintf("new job's env PG_VERSION %q does not match the current one %q", newPgVersion, curPgVersion))
 	}

 	needsReplace := false
-	reasons := make([]string, 0)
-	needsReplace, reasons = c.compareContainers("cronjob container", cur.Spec.JobTemplate.Spec.Template.Spec.Containers, new.Spec.JobTemplate.Spec.Template.Spec.Containers, needsReplace, reasons)
+	contReasons := make([]string, 0)
+	needsReplace, contReasons = c.compareContainers("cronjob container", cur.Spec.JobTemplate.Spec.Template.Spec.Containers, new.Spec.JobTemplate.Spec.Template.Spec.Containers, needsReplace, contReasons)
 	if needsReplace {
-		return false, fmt.Sprintf("logical backup container specs do not match: %v", strings.Join(reasons, `', '`))
+		match = false
+		reasons = append(reasons, fmt.Sprintf("logical backup container specs do not match: %v", strings.Join(contReasons, `', '`)))
 	}

-	return true, ""
+	return &compareLogicalBackupJobResult{match: match, reasons: reasons, deletedPodAnnotations: deletedPodAnnotations}
 }

-func (c *Cluster) comparePodDisruptionBudget(cur, new *apipolicyv1.PodDisruptionBudget) (bool, string) {
+func (c *Cluster) comparePodDisruptionBudget(cur, new *policyv1.PodDisruptionBudget) (bool, string) {
 	//TODO: improve comparison
-	if match := reflect.DeepEqual(new.Spec, cur.Spec); !match {
-		return false, "new PDB spec does not match the current one"
+	if !reflect.DeepEqual(new.Spec, cur.Spec) {
+		return false, "new PDB's spec does not match the current one"
 	}
-	if changed, reason := c.compareAnnotations(cur.Annotations, new.Annotations); changed {
-		return false, "new PDB's annotations does not match the current one:" + reason
+	if !reflect.DeepEqual(new.ObjectMeta.OwnerReferences, cur.ObjectMeta.OwnerReferences) {
+		return false, "new PDB's owner references do not match the current ones"
+	}
+	if changed, reason := c.compareAnnotations(cur.Annotations, new.Annotations, nil); changed {
+		return false, "new PDB's annotations do not match the current ones:" + reason
 	}
 	return true, ""
 }
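What the collected deletedPodAnnotations are ultimately used for: Kubernetes strategic merge patches remove a map key by setting it to null, so the sync code can turn the collected keys into a patch body. A sketch of that mechanism (illustration only; the real patch is built in the connection pooler sync further below):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	deleted := []string{"ttl", "owned-by"} // keys reported by compareAnnotations
	annotations := map[string]*string{}
	for _, key := range deleted {
		annotations[key] = nil // a null value deletes the key when the patch is applied
	}
	patch, _ := json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"metadata": map[string]interface{}{"annotations": annotations},
			},
		},
	})
	fmt.Println(string(patch))
}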
@@ -960,6 +1002,12 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
 		Conditions: c.Postgresql.Status.Conditions,
 	}
 	c.KubeClient.SetPostgresCRDStatus(c.clusterName(), ClusterStatus, "")

+	if !isInMaintenanceWindow(newSpec.Spec.MaintenanceWindows) {
+		// do not apply any major version related changes yet
+		newSpec.Spec.PostgresqlParam.PgVersion = oldSpec.Spec.PostgresqlParam.PgVersion
+	}
+
 	c.setSpec(newSpec)

 	defer func() {

@@ -1011,6 +1059,12 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
 		updateFailed = true
 	}

+	// Patroni service and endpoints / config maps
+	if err := c.syncPatroniResources(); err != nil {
+		c.logger.Errorf("could not sync services: %v", err)
+		updateFailed = true
+	}
+
 	// Users
 	func() {
 		// check if users need to be synced during update

@@ -1027,20 +1081,27 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
 		// only when streams were not specified in oldSpec but in newSpec
 		needStreamUser := len(oldSpec.Spec.Streams) == 0 && len(newSpec.Spec.Streams) > 0

-		annotationsChanged, _ := c.compareAnnotations(oldSpec.Annotations, newSpec.Annotations)
-
 		initUsers := !sameUsers || !sameRotatedUsers || needPoolerUser || needStreamUser
-		if initUsers {
-			c.logger.Debugf("initialize users")
+
+		// if inherited annotations differ secrets have to be synced on update
+		newAnnotations := c.annotationsSet(nil)
+		oldAnnotations := make(map[string]string)
+		for _, secret := range c.Secrets {
+			oldAnnotations = secret.ObjectMeta.Annotations
+			break
+		}
+		annotationsChanged, _ := c.compareAnnotations(oldAnnotations, newAnnotations, nil)
+
+		if initUsers || annotationsChanged {
+			c.logger.Debug("initialize users")
 			if err := c.initUsers(); err != nil {
 				c.logger.Errorf("could not init users - skipping sync of secrets and databases: %v", err)
 				userInitFailed = true
 				updateFailed = true
 				return
 			}
 		}
 		if initUsers || annotationsChanged {
-			c.logger.Debugf("syncing secrets")
+			c.logger.Debug("syncing secrets")
 			//TODO: mind the secrets of the deleted/new users
 			if err := c.syncSecrets(); err != nil {
 				c.logger.Errorf("could not sync secrets: %v", err)

@@ -1071,9 +1132,9 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
 		}
 	}

-	// pod disruption budget
-	if err := c.syncPodDisruptionBudget(true); err != nil {
-		c.logger.Errorf("could not sync pod disruption budget: %v", err)
+	// pod disruption budgets
+	if err := c.syncPodDisruptionBudgets(true); err != nil {
+		c.logger.Errorf("could not sync pod disruption budgets: %v", err)
 		updateFailed = true
 	}

@@ -1082,7 +1143,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {

 	// create if it did not exist
 	if !oldSpec.Spec.EnableLogicalBackup && newSpec.Spec.EnableLogicalBackup {
-		c.logger.Debugf("creating backup cron job")
+		c.logger.Debug("creating backup cron job")
 		if err := c.createLogicalBackupJob(); err != nil {
 			c.logger.Errorf("could not create a k8s cron job for logical backups: %v", err)
 			updateFailed = true

@@ -1092,7 +1153,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {

 	// delete if no longer needed
 	if oldSpec.Spec.EnableLogicalBackup && !newSpec.Spec.EnableLogicalBackup {
-		c.logger.Debugf("deleting backup cron job")
+		c.logger.Debug("deleting backup cron job")
 		if err := c.deleteLogicalBackupJob(); err != nil {
 			c.logger.Errorf("could not delete a k8s cron job for logical backups: %v", err)
 			updateFailed = true

@@ -1101,11 +1162,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {

 	}

-	// apply schedule changes
-	// this is the only parameter of logical backups a user can overwrite in the cluster manifest
-	if (oldSpec.Spec.EnableLogicalBackup && newSpec.Spec.EnableLogicalBackup) &&
-		(newSpec.Spec.LogicalBackupSchedule != oldSpec.Spec.LogicalBackupSchedule) {
-		c.logger.Debugf("updating schedule of the backup cron job")
+	if oldSpec.Spec.EnableLogicalBackup && newSpec.Spec.EnableLogicalBackup {
 		if err := c.syncLogicalBackupJob(); err != nil {
 			c.logger.Errorf("could not sync logical backup jobs: %v", err)
 			updateFailed = true

@@ -1116,7 +1173,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {

 	// Roles and Databases
 	if !userInitFailed && !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) {
-		c.logger.Debugf("syncing roles")
+		c.logger.Debug("syncing roles")
 		if err := c.syncRoles(); err != nil {
 			c.logger.Errorf("could not sync roles: %v", err)
 			updateFailed = true

@@ -1150,6 +1207,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {

 	// streams
 	if len(newSpec.Spec.Streams) > 0 || len(oldSpec.Spec.Streams) != len(newSpec.Spec.Streams) {
+		c.logger.Debug("syncing streams")
 		if err := c.syncStreams(); err != nil {
 			c.logger.Errorf("could not sync streams: %v", err)
 			updateFailed = true

@@ -1222,14 +1280,13 @@ func (c *Cluster) Delete() error {
 		c.logger.Info("not deleting secrets because disabled in configuration")
 	}

-	if err := c.deletePodDisruptionBudget(); err != nil {
+	if err := c.deletePodDisruptionBudgets(); err != nil {
 		anyErrors = true
-		c.logger.Warningf("could not delete pod disruption budget: %v", err)
-		c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete pod disruption budget: %v", err)
+		c.logger.Warningf("could not delete pod disruption budgets: %v", err)
+		c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete pod disruption budgets: %v", err)
 	}

 	for _, role := range []PostgresRole{Master, Replica} {

 		if !c.patroniKubernetesUseConfigMaps() {
 			if err := c.deleteEndpoint(role); err != nil {
 				anyErrors = true

@@ -1245,10 +1302,10 @@ func (c *Cluster) Delete() error {
 		}
 	}

-	if err := c.deletePatroniClusterObjects(); err != nil {
+	if err := c.deletePatroniResources(); err != nil {
 		anyErrors = true
-		c.logger.Warningf("could not remove leftover patroni objects; %v", err)
-		c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not remove leftover patroni objects; %v", err)
+		c.logger.Warningf("could not delete all Patroni resources: %v", err)
+		c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete all Patroni resources: %v", err)
 	}

 	// Delete connection pooler objects anyway, even if it's not mentioned in the

@@ -1412,18 +1469,18 @@ func (c *Cluster) initPreparedDatabaseRoles() error {
 			preparedSchemas = map[string]acidv1.PreparedSchema{"data": {DefaultRoles: util.True()}}
 		}

-		var searchPath strings.Builder
-		searchPath.WriteString(constants.DefaultSearchPath)
+		searchPathArr := []string{constants.DefaultSearchPath}
 		for preparedSchemaName := range preparedSchemas {
-			searchPath.WriteString(", " + preparedSchemaName)
+			searchPathArr = append(searchPathArr, fmt.Sprintf("%q", preparedSchemaName))
 		}
+		searchPath := strings.Join(searchPathArr, ", ")

 		// default roles per database
-		if err := c.initDefaultRoles(defaultRoles, "admin", preparedDbName, searchPath.String(), preparedDB.SecretNamespace); err != nil {
+		if err := c.initDefaultRoles(defaultRoles, "admin", preparedDbName, searchPath, preparedDB.SecretNamespace); err != nil {
 			return fmt.Errorf("could not initialize default roles for database %s: %v", preparedDbName, err)
 		}
 		if preparedDB.DefaultUsers {
-			if err := c.initDefaultRoles(defaultUsers, "admin", preparedDbName, searchPath.String(), preparedDB.SecretNamespace); err != nil {
+			if err := c.initDefaultRoles(defaultUsers, "admin", preparedDbName, searchPath, preparedDB.SecretNamespace); err != nil {
 				return fmt.Errorf("could not initialize default roles for database %s: %v", preparedDbName, err)
 			}
 		}

@@ -1434,14 +1491,16 @@ func (c *Cluster) initPreparedDatabaseRoles() error {
 			if err := c.initDefaultRoles(defaultRoles,
 				preparedDbName+constants.OwnerRoleNameSuffix,
 				preparedDbName+"_"+preparedSchemaName,
-				constants.DefaultSearchPath+", "+preparedSchemaName, preparedDB.SecretNamespace); err != nil {
+				fmt.Sprintf("%s, %q", constants.DefaultSearchPath, preparedSchemaName),
+				preparedDB.SecretNamespace); err != nil {
 				return fmt.Errorf("could not initialize default roles for database schema %s: %v", preparedSchemaName, err)
 			}
 			if preparedSchema.DefaultUsers {
 				if err := c.initDefaultRoles(defaultUsers,
 					preparedDbName+constants.OwnerRoleNameSuffix,
 					preparedDbName+"_"+preparedSchemaName,
-					constants.DefaultSearchPath+", "+preparedSchemaName, preparedDB.SecretNamespace); err != nil {
+					fmt.Sprintf("%s, %q", constants.DefaultSearchPath, preparedSchemaName),
+					preparedDB.SecretNamespace); err != nil {
 					return fmt.Errorf("could not initialize default users for database schema %s: %v", preparedSchemaName, err)
 				}
 			}
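The switch to fmt.Sprintf("%q", ...) in the hunks above double-quotes each schema in the generated search_path, which keeps mixed-case or reserved-word schema names valid in Postgres. A standalone sketch of the effect (the DefaultSearchPath value is assumed here for illustration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	defaultSearchPath := `"$user"` // assumed stand-in for constants.DefaultSearchPath
	schemas := []string{"data", "History"}

	parts := []string{defaultSearchPath}
	for _, s := range schemas {
		parts = append(parts, fmt.Sprintf("%q", s)) // "data", "History" stay case-exact
	}
	fmt.Println(strings.Join(parts, ", ")) // "$user", "data", "History"
}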
@@ -1723,16 +1782,17 @@ func (c *Cluster) GetCurrentProcess() Process {
 // GetStatus provides status of the cluster
 func (c *Cluster) GetStatus() *ClusterStatus {
 	status := &ClusterStatus{
-		Cluster:             c.Name,
-		Namespace:           c.Namespace,
-		Team:                c.Spec.TeamID,
-		Status:              c.Status,
-		Spec:                c.Spec,
-		MasterService:       c.GetServiceMaster(),
-		ReplicaService:      c.GetServiceReplica(),
-		StatefulSet:         c.GetStatefulSet(),
-		PodDisruptionBudget: c.GetPodDisruptionBudget(),
-		CurrentProcess:      c.GetCurrentProcess(),
+		Cluster:                       c.Name,
+		Namespace:                     c.Namespace,
+		Team:                          c.Spec.TeamID,
+		Status:                        c.Status,
+		Spec:                          c.Spec,
+		MasterService:                 c.GetServiceMaster(),
+		ReplicaService:                c.GetServiceReplica(),
+		StatefulSet:                   c.GetStatefulSet(),
+		PrimaryPodDisruptionBudget:    c.GetPrimaryPodDisruptionBudget(),
+		CriticalOpPodDisruptionBudget: c.GetCriticalOpPodDisruptionBudget(),
+		CurrentProcess:                c.GetCurrentProcess(),

 		Error: fmt.Errorf("error: %s", c.Error),
 	}
@@ -1745,18 +1805,58 @@ func (c *Cluster) GetStatus() *ClusterStatus {
 	return status
 }

-// Switchover does a switchover (via Patroni) to a candidate pod
-func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) error {
+func (c *Cluster) GetSwitchoverSchedule() string {
+	var possibleSwitchover, schedule time.Time
+
+	now := time.Now().UTC()
+	for _, window := range c.Spec.MaintenanceWindows {
+		// in the best case it is possible today
+		possibleSwitchover = time.Date(now.Year(), now.Month(), now.Day(), window.StartTime.Hour(), window.StartTime.Minute(), 0, 0, time.UTC)
+		if window.Everyday {
+			if now.After(possibleSwitchover) {
+				// we are already past the time for today, try tomorrow
+				possibleSwitchover = possibleSwitchover.AddDate(0, 0, 1)
+			}
+		} else {
+			if now.Weekday() != window.Weekday {
+				// get closest possible time for this window
+				possibleSwitchover = possibleSwitchover.AddDate(0, 0, int((7+window.Weekday-now.Weekday())%7))
+			} else if now.After(possibleSwitchover) {
+				// we are already past the time for today, try next week
+				possibleSwitchover = possibleSwitchover.AddDate(0, 0, 7)
+			}
+		}
+
+		if (schedule == time.Time{}) || possibleSwitchover.Before(schedule) {
+			schedule = possibleSwitchover
+		}
+	}
+	return schedule.Format("2006-01-02T15:04+00")
+}
+
+// Switchover does a switchover (via Patroni) to a candidate pod
+func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName, scheduled bool) error {
 	var err error
-	c.logger.Debugf("switching over from %q to %q", curMaster.Name, candidate)
-	c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switching over from %q to %q", curMaster.Name, candidate)

 	stopCh := make(chan struct{})
 	ch := c.registerPodSubscriber(candidate)
 	defer c.unregisterPodSubscriber(candidate)
 	defer close(stopCh)

-	if err = c.patroni.Switchover(curMaster, candidate.Name); err == nil {
+	var scheduled_at string
+	if scheduled {
+		scheduled_at = c.GetSwitchoverSchedule()
+	} else {
+		c.logger.Debugf("switching over from %q to %q", curMaster.Name, candidate)
+		c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switching over from %q to %q", curMaster.Name, candidate)
+		scheduled_at = ""
+	}
+
+	if err = c.patroni.Switchover(curMaster, candidate.Name, scheduled_at); err == nil {
+		if scheduled {
+			c.logger.Infof("switchover from %q to %q is scheduled at %s", curMaster.Name, candidate, scheduled_at)
+			return nil
+		}
 		c.logger.Debugf("successfully switched over from %q to %q", curMaster.Name, candidate)
 		c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Successfully switched over from %q to %q", curMaster.Name, candidate)
 		_, err = c.waitForPodLabel(ch, stopCh, nil)

@@ -1764,6 +1864,9 @@ func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) e
 			err = fmt.Errorf("could not get master pod label: %v", err)
 		}
 	} else {
+		if scheduled {
+			return fmt.Errorf("could not schedule switchover: %v", err)
+		}
 		err = fmt.Errorf("could not switch over from %q to %q: %v", curMaster.Name, candidate, err)
 		c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switchover from %q to %q FAILED: %v", curMaster.Name, candidate, err)
 	}
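A sketch of the scheduling rule GetSwitchoverSchedule implements: the next occurrence of each maintenance window is computed in UTC and the earliest one wins, formatted in the timestamp layout passed on to Patroni. Standalone illustration with a fixed "now" (not operator code):

package main

import (
	"fmt"
	"time"
)

func nextOccurrence(now time.Time, weekday time.Weekday, everyday bool, hour, min int) time.Time {
	t := time.Date(now.Year(), now.Month(), now.Day(), hour, min, 0, 0, time.UTC)
	if everyday {
		if now.After(t) {
			t = t.AddDate(0, 0, 1) // past today's slot: tomorrow
		}
		return t
	}
	if now.Weekday() != weekday {
		t = t.AddDate(0, 0, int((7+weekday-now.Weekday())%7)) // jump to that weekday
	} else if now.After(t) {
		t = t.AddDate(0, 0, 7) // past today's slot: next week
	}
	return t
}

func main() {
	now := time.Date(2025, 1, 8, 12, 0, 0, 0, time.UTC) // a Wednesday
	sat := nextOccurrence(now, time.Saturday, false, 1, 0)
	daily := nextOccurrence(now, 0, true, 10, 0)
	// same timestamp layout as GetSwitchoverSchedule returns:
	fmt.Println(sat.Format("2006-01-02T15:04+00"))   // 2025-01-11T01:00+00
	fmt.Println(daily.Format("2006-01-02T15:04+00")) // 2025-01-09T10:00+00
}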
@@ -1780,96 +1883,3 @@ func (c *Cluster) Lock() {
 func (c *Cluster) Unlock() {
 	c.mu.Unlock()
 }
-
-type simpleActionWithResult func()
-
-type clusterObjectGet func(name string) (spec.NamespacedName, error)
-
-type clusterObjectDelete func(name string) error
-
-func (c *Cluster) deletePatroniClusterObjects() error {
-	// TODO: figure out how to remove leftover patroni objects in other cases
-	var actionsList []simpleActionWithResult
-
-	if !c.patroniUsesKubernetes() {
-		c.logger.Infof("not cleaning up Etcd Patroni objects on cluster delete")
-	}
-
-	actionsList = append(actionsList, c.deletePatroniClusterServices)
-	if c.patroniKubernetesUseConfigMaps() {
-		actionsList = append(actionsList, c.deletePatroniClusterConfigMaps)
-	} else {
-		actionsList = append(actionsList, c.deletePatroniClusterEndpoints)
-	}
-
-	c.logger.Debugf("removing leftover Patroni objects (endpoints / services and configmaps)")
-	for _, deleter := range actionsList {
-		deleter()
-	}
-	return nil
-}
-
-func deleteClusterObject(
-	get clusterObjectGet,
-	del clusterObjectDelete,
-	objType string,
-	clusterName string,
-	logger *logrus.Entry) {
-	for _, suffix := range patroniObjectSuffixes {
-		name := fmt.Sprintf("%s-%s", clusterName, suffix)
-
-		namespacedName, err := get(name)
-		if err == nil {
-			logger.Debugf("deleting %s %q",
-				objType, namespacedName)
-
-			if err = del(name); err != nil {
-				logger.Warningf("could not delete %s %q: %v",
-					objType, namespacedName, err)
-			}
-
-		} else if !k8sutil.ResourceNotFound(err) {
-			logger.Warningf("could not fetch %s %q: %v",
-				objType, namespacedName, err)
-		}
-	}
-}
-
-func (c *Cluster) deletePatroniClusterServices() {
-	get := func(name string) (spec.NamespacedName, error) {
-		svc, err := c.KubeClient.Services(c.Namespace).Get(context.TODO(), name, metav1.GetOptions{})
-		return util.NameFromMeta(svc.ObjectMeta), err
-	}
-
-	deleteServiceFn := func(name string) error {
-		return c.KubeClient.Services(c.Namespace).Delete(context.TODO(), name, c.deleteOptions)
-	}
-
-	deleteClusterObject(get, deleteServiceFn, "service", c.Name, c.logger)
-}
-
-func (c *Cluster) deletePatroniClusterEndpoints() {
-	get := func(name string) (spec.NamespacedName, error) {
-		ep, err := c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), name, metav1.GetOptions{})
-		return util.NameFromMeta(ep.ObjectMeta), err
-	}
-
-	deleteEndpointFn := func(name string) error {
-		return c.KubeClient.Endpoints(c.Namespace).Delete(context.TODO(), name, c.deleteOptions)
-	}
-
-	deleteClusterObject(get, deleteEndpointFn, "endpoint", c.Name, c.logger)
-}
-
-func (c *Cluster) deletePatroniClusterConfigMaps() {
-	get := func(name string) (spec.NamespacedName, error) {
-		cm, err := c.KubeClient.ConfigMaps(c.Namespace).Get(context.TODO(), name, metav1.GetOptions{})
-		return util.NameFromMeta(cm.ObjectMeta), err
-	}
-
-	deleteConfigMapFn := func(name string) error {
-		return c.KubeClient.ConfigMaps(c.Namespace).Delete(context.TODO(), name, c.deleteOptions)
-	}
-
-	deleteClusterObject(get, deleteConfigMapFn, "configmap", c.Name, c.logger)
-}
@@ -71,11 +71,11 @@ var cl = New(
 		Spec: acidv1.PostgresSpec{
 			EnableConnectionPooler: util.True(),
 			Streams: []acidv1.Stream{
-				acidv1.Stream{
+				{
 					ApplicationId: "test-app",
 					Database: "test_db",
 					Tables: map[string]acidv1.StreamTable{
-						"test_table": acidv1.StreamTable{
+						"test_table": {
 							EventType: "test-app.test",
 						},
 					},

@@ -95,6 +95,7 @@ func TestCreate(t *testing.T) {

 	client := k8sutil.KubernetesClient{
 		DeploymentsGetter: clientSet.AppsV1(),
+		CronJobsGetter: clientSet.BatchV1(),
 		EndpointsGetter: clientSet.CoreV1(),
 		PersistentVolumeClaimsGetter: clientSet.CoreV1(),
 		PodDisruptionBudgetsGetter: clientSet.PolicyV1(),

@@ -111,6 +112,7 @@ func TestCreate(t *testing.T) {
 			Namespace: clusterNamespace,
 		},
 		Spec: acidv1.PostgresSpec{
+			EnableLogicalBackup: true,
 			Volume: acidv1.Volume{
 				Size: "1Gi",
 			},

@@ -1363,6 +1365,23 @@ func TestCompareServices(t *testing.T) {
 		},
 	}

+	serviceWithOwnerReference := newService(
+		map[string]string{
+			constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
+			constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
+		},
+		v1.ServiceTypeClusterIP,
+		[]string{"128.141.0.0/16", "137.138.0.0/16"})
+
+	ownerRef := metav1.OwnerReference{
+		APIVersion: "acid.zalan.do/v1",
+		Controller: boolToPointer(true),
+		Kind: "Postgresql",
+		Name: "clstr",
+	}
+
+	serviceWithOwnerReference.ObjectMeta.OwnerReferences = append(serviceWithOwnerReference.ObjectMeta.OwnerReferences, ownerRef)
+
 	tests := []struct {
 		about string
 		current *v1.Service

@@ -1445,6 +1464,18 @@ func TestCompareServices(t *testing.T) {
 			match: false,
 			reason: `new service's LoadBalancerSourceRange does not match the current one`,
 		},
+		{
+			about: "new service doesn't have owner references",
+			current: newService(
+				map[string]string{
+					constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
+					constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
+				},
+				v1.ServiceTypeClusterIP,
+				[]string{"128.141.0.0/16", "137.138.0.0/16"}),
+			new: serviceWithOwnerReference,
+			match: false,
+		},
 	}

 	for _, tt := range tests {

@@ -1475,7 +1506,7 @@ func newCronJob(image, schedule string, vars []v1.EnvVar, mounts []v1.VolumeMoun
 			Template: v1.PodTemplateSpec{
 				Spec: v1.PodSpec{
 					Containers: []v1.Container{
-						v1.Container{
+						{
 							Name: "logical-backup",
 							Image: image,
 							Env: vars,

@@ -1649,12 +1680,20 @@ func TestCompareLogicalBackupJob(t *testing.T) {
 				}
 			}

-			match, reason := cluster.compareLogicalBackupJob(currentCronJob, desiredCronJob)
-			if match != tt.match {
-				t.Errorf("%s - unexpected match result %t when comparing cronjobs %#v and %#v", t.Name(), match, currentCronJob, desiredCronJob)
-			} else {
-				if !strings.HasPrefix(reason, tt.reason) {
-					t.Errorf("%s - expected reason prefix %s, found %s", t.Name(), tt.reason, reason)
+			cmp := cluster.compareLogicalBackupJob(currentCronJob, desiredCronJob)
+			if cmp.match != tt.match {
+				t.Errorf("%s - unexpected match result %t when comparing cronjobs %#v and %#v", t.Name(), cmp.match, currentCronJob, desiredCronJob)
+			} else if !cmp.match {
+				found := false
+				for _, reason := range cmp.reasons {
+					if strings.HasPrefix(reason, tt.reason) {
+						found = true
+						break
+					}
+					found = false
+				}
+				if !found {
+					t.Errorf("%s - expected reason prefix %s, not found in %#v", t.Name(), tt.reason, cmp.reasons)
 				}
 			}
 		})

@@ -2026,3 +2065,91 @@ func TestCompareVolumeMounts(t *testing.T) {
 		})
 	}
 }
+
+func TestGetSwitchoverSchedule(t *testing.T) {
+	now := time.Now()
+
+	futureTimeStart := now.Add(1 * time.Hour)
+	futureWindowTimeStart := futureTimeStart.Format("15:04")
+	futureWindowTimeEnd := now.Add(2 * time.Hour).Format("15:04")
+	pastTimeStart := now.Add(-2 * time.Hour)
+	pastWindowTimeStart := pastTimeStart.Format("15:04")
+	pastWindowTimeEnd := now.Add(-1 * time.Hour).Format("15:04")
+
+	tests := []struct {
+		name     string
+		windows  []acidv1.MaintenanceWindow
+		expected string
+	}{
+		{
+			name: "everyday maintenance windows is later today",
+			windows: []acidv1.MaintenanceWindow{
+				{
+					Everyday:  true,
+					StartTime: mustParseTime(futureWindowTimeStart),
+					EndTime:   mustParseTime(futureWindowTimeEnd),
+				},
+			},
+			expected: futureTimeStart.Format("2006-01-02T15:04+00"),
+		},
+		{
+			name: "everyday maintenance window is tomorrow",
+			windows: []acidv1.MaintenanceWindow{
+				{
+					Everyday:  true,
+					StartTime: mustParseTime(pastWindowTimeStart),
+					EndTime:   mustParseTime(pastWindowTimeEnd),
+				},
+			},
+			expected: pastTimeStart.AddDate(0, 0, 1).Format("2006-01-02T15:04+00"),
+		},
+		{
+			name: "weekday maintenance windows is later today",
+			windows: []acidv1.MaintenanceWindow{
+				{
+					Weekday:   now.Weekday(),
+					StartTime: mustParseTime(futureWindowTimeStart),
+					EndTime:   mustParseTime(futureWindowTimeEnd),
+				},
+			},
+			expected: futureTimeStart.Format("2006-01-02T15:04+00"),
+		},
+		{
+			name: "weekday maintenance windows is passed for today",
+			windows: []acidv1.MaintenanceWindow{
+				{
+					Weekday:   now.Weekday(),
+					StartTime: mustParseTime(pastWindowTimeStart),
+					EndTime:   mustParseTime(pastWindowTimeEnd),
+				},
+			},
+			expected: pastTimeStart.AddDate(0, 0, 7).Format("2006-01-02T15:04+00"),
+		},
+		{
+			name: "choose the earliest window",
+			windows: []acidv1.MaintenanceWindow{
+				{
+					Weekday:   now.AddDate(0, 0, 2).Weekday(),
+					StartTime: mustParseTime(futureWindowTimeStart),
+					EndTime:   mustParseTime(futureWindowTimeEnd),
+				},
+				{
+					Everyday:  true,
+					StartTime: mustParseTime(pastWindowTimeStart),
+					EndTime:   mustParseTime(pastWindowTimeEnd),
+				},
+			},
+			expected: pastTimeStart.AddDate(0, 0, 1).Format("2006-01-02T15:04+00"),
+		},
+	}

+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			cluster.Spec.MaintenanceWindows = tt.windows
+			schedule := cluster.GetSwitchoverSchedule()
+			if schedule != tt.expected {
+				t.Errorf("Expected GetSwitchoverSchedule to return %s, returned: %s", tt.expected, schedule)
+			}
+		})
+	}
+}
@ -2,7 +2,9 @@ package cluster
|
|||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
|
|
@ -590,7 +592,7 @@ func (c *Cluster) deleteConnectionPooler(role PostgresRole) (err error) {
|
|||
// Lack of connection pooler objects is not a fatal error, just log it if
|
||||
// it was present before in the manifest
|
||||
if c.ConnectionPooler[role] == nil || role == "" {
|
||||
c.logger.Debugf("no connection pooler to delete")
|
||||
c.logger.Debug("no connection pooler to delete")
|
||||
return nil
|
||||
}
|
||||
|
||||
|
|
@ -621,7 +623,7 @@ func (c *Cluster) deleteConnectionPooler(role PostgresRole) (err error) {
|
|||
// Repeat the same for the service object
|
||||
service := c.ConnectionPooler[role].Service
|
||||
if service == nil {
|
||||
c.logger.Debugf("no connection pooler service object to delete")
|
||||
c.logger.Debug("no connection pooler service object to delete")
|
||||
} else {
|
||||
|
||||
err = c.KubeClient.
|
||||
|
|
@ -654,7 +656,7 @@ func (c *Cluster) deleteConnectionPoolerSecret() (err error) {
|
|||
if err != nil {
|
||||
c.logger.Debugf("could not get connection pooler secret %s: %v", secretName, err)
|
||||
} else {
|
||||
if err = c.deleteSecret(secret.UID, *secret); err != nil {
|
||||
if err = c.deleteSecret(secret.UID); err != nil {
|
||||
return fmt.Errorf("could not delete pooler secret: %v", err)
|
||||
}
|
||||
}
|
||||
|
|
@ -663,11 +665,19 @@ func (c *Cluster) deleteConnectionPoolerSecret() (err error) {
|
|||
|
||||
// Perform actual patching of a connection pooler deployment, assuming that all
|
||||
// the check were already done before.
|
||||
func updateConnectionPoolerDeployment(KubeClient k8sutil.KubernetesClient, newDeployment *appsv1.Deployment) (*appsv1.Deployment, error) {
|
||||
func updateConnectionPoolerDeployment(KubeClient k8sutil.KubernetesClient, newDeployment *appsv1.Deployment, doUpdate bool) (*appsv1.Deployment, error) {
|
||||
if newDeployment == nil {
|
||||
return nil, fmt.Errorf("there is no connection pooler in the cluster")
|
||||
}
|
||||
|
||||
if doUpdate {
|
||||
updatedDeployment, err := KubeClient.Deployments(newDeployment.Namespace).Update(context.TODO(), newDeployment, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not update pooler deployment to match desired state: %v", err)
|
||||
}
|
||||
return updatedDeployment, nil
|
||||
}
|
||||
|
||||
patchData, err := specPatch(newDeployment.Spec)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not form patch for the connection pooler deployment: %v", err)
|
||||
|
|
@ -751,6 +761,7 @@ func (c *Cluster) needSyncConnectionPoolerDefaults(Config *Config, spec *acidv1.
|
|||
if spec == nil {
|
||||
spec = &acidv1.ConnectionPooler{}
|
||||
}
|
||||
|
||||
if spec.NumberOfInstances == nil &&
|
||||
*deployment.Spec.Replicas != *config.NumberOfInstances {
|
||||
|
||||
|
|
@ -967,6 +978,7 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
		err error
	)

	updatedPodAnnotations := map[string]*string{}
	syncReason := make([]string, 0)
	deployment, err = c.KubeClient.
		Deployments(c.Namespace).

@ -1014,25 +1026,48 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
		newConnectionPooler = &acidv1.ConnectionPooler{}
	}

	var specSync bool
	var specSync, updateDeployment bool
	var specReason []string

	if !reflect.DeepEqual(deployment.ObjectMeta.OwnerReferences, c.ownerReferences()) {
		c.logger.Info("new connection pooler owner references do not match the current ones")
		updateDeployment = true
	}

	if oldSpec != nil {
		specSync, specReason = needSyncConnectionPoolerSpecs(oldConnectionPooler, newConnectionPooler, c.logger)
		syncReason = append(syncReason, specReason...)
	}

	newPodAnnotations := c.annotationsSet(c.generatePodAnnotations(&c.Spec))
	if changed, reason := c.compareAnnotations(deployment.Spec.Template.Annotations, newPodAnnotations); changed {
	deletedPodAnnotations := []string{}
	if changed, reason := c.compareAnnotations(deployment.Spec.Template.Annotations, newPodAnnotations, &deletedPodAnnotations); changed {
		specSync = true
		syncReason = append(syncReason, []string{"new connection pooler's pod template annotations do not match the current one: " + reason}...)
		syncReason = append(syncReason, []string{"new connection pooler's pod template annotations do not match the current ones: " + reason}...)

		for _, anno := range deletedPodAnnotations {
			updatedPodAnnotations[anno] = nil
		}
		templateMetadataReq := map[string]map[string]map[string]map[string]map[string]*string{
			"spec": {"template": {"metadata": {"annotations": updatedPodAnnotations}}}}
		patch, err := json.Marshal(templateMetadataReq)
		if err != nil {
			return nil, fmt.Errorf("could not marshal ObjectMeta for %s connection pooler's pod template: %v", role, err)
		}
		deployment, err = c.KubeClient.Deployments(c.Namespace).Patch(context.TODO(),
			deployment.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "")
		if err != nil {
			c.logger.Errorf("failed to patch %s connection pooler's pod template: %v", role, err)
			return nil, err
		}

		deployment.Spec.Template.Annotations = newPodAnnotations
	}

	defaultsSync, defaultsReason := c.needSyncConnectionPoolerDefaults(&c.Config, newConnectionPooler, deployment)
	syncReason = append(syncReason, defaultsReason...)

	if specSync || defaultsSync {
	if specSync || defaultsSync || updateDeployment {
		c.logger.Infof("update connection pooler deployment %s, reason: %+v",
			c.connectionPoolerName(role), syncReason)
		newDeployment, err = c.generateConnectionPoolerDeployment(c.ConnectionPooler[role])

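The `map[string]*string` trick above is what makes removed annotations actually disappear from the pod template: a nil pointer marshals to JSON null, and a null value in a (strategic) merge patch deletes that key on the server. A self-contained sketch of the same patch body, with hypothetical annotation names:

```go
package sketch

import "encoding/json"

// annotationsPatch builds a pod-template annotations patch in which a nil
// *string marshals to JSON null and thereby deletes the key, while a non-nil
// pointer sets it. "obsolete" and "keep" are illustrative names only.
func annotationsPatch() ([]byte, error) {
	keep := "true"
	annotations := map[string]*string{
		"obsolete": nil,   // -> "obsolete": null, removes the annotation
		"keep":     &keep, // -> "keep": "true"
	}
	req := map[string]map[string]map[string]map[string]map[string]*string{
		"spec": {"template": {"metadata": {"annotations": annotations}}},
	}
	return json.Marshal(req)
}
```
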
@ -1040,7 +1075,7 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
			return syncReason, fmt.Errorf("could not generate deployment for connection pooler: %v", err)
		}

		deployment, err = updateConnectionPoolerDeployment(c.KubeClient, newDeployment)
		deployment, err = updateConnectionPoolerDeployment(c.KubeClient, newDeployment, updateDeployment)

		if err != nil {
			return syncReason, err

@ -1049,7 +1084,7 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
	}

	newAnnotations := c.AnnotationsToPropagate(c.annotationsSet(nil)) // including the downscaling annotations
	if changed, _ := c.compareAnnotations(deployment.Annotations, newAnnotations); changed {
	if changed, _ := c.compareAnnotations(deployment.Annotations, newAnnotations, nil); changed {
		deployment, err = patchConnectionPoolerAnnotations(c.KubeClient, deployment, newAnnotations)
		if err != nil {
			return nil, err

@ -1083,14 +1118,20 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
			if err != nil {
				return nil, fmt.Errorf("could not delete pooler pod: %v", err)
			}
		} else if changed, _ := c.compareAnnotations(pod.Annotations, deployment.Spec.Template.Annotations); changed {
			patchData, err := metaAnnotationsPatch(deployment.Spec.Template.Annotations)
			if err != nil {
				return nil, fmt.Errorf("could not form patch for pooler's pod annotations: %v", err)
		} else if changed, _ := c.compareAnnotations(pod.Annotations, deployment.Spec.Template.Annotations, nil); changed {
			metadataReq := map[string]map[string]map[string]*string{"metadata": {}}

			for anno, val := range deployment.Spec.Template.Annotations {
				updatedPodAnnotations[anno] = &val
			}
			_, err = c.KubeClient.Pods(pod.Namespace).Patch(context.TODO(), pod.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
			metadataReq["metadata"]["annotations"] = updatedPodAnnotations
			patch, err := json.Marshal(metadataReq)
			if err != nil {
				return nil, fmt.Errorf("could not patch annotations for pooler's pod %q: %v", pod.Name, err)
				return nil, fmt.Errorf("could not marshal ObjectMeta for %s connection pooler's pods: %v", role, err)
			}
			_, err = c.KubeClient.Pods(pod.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
			if err != nil {
				return nil, fmt.Errorf("could not patch annotations for %s connection pooler's pod %q: %v", role, pod.Name, err)
			}
		}
	}

@ -1103,7 +1144,6 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
		return syncReason, fmt.Errorf("could not update %s service to match desired state: %v", role, err)
	}
	c.ConnectionPooler[role].Service = newService
	c.logger.Infof("%s service %q is in the desired state now", role, util.NameFromMeta(desiredSvc.ObjectMeta))
	return NoSync, nil
}

@ -969,7 +969,7 @@ func TestPoolerTLS(t *testing.T) {
			TLS: &acidv1.TLSDescription{
				SecretName: tlsSecretName, CAFile: "ca.crt"},
			AdditionalVolumes: []acidv1.AdditionalVolume{
				acidv1.AdditionalVolume{
				{
					Name: tlsSecretName,
					MountPath: mountPath,
					VolumeSource: v1.VolumeSource{

@ -1077,6 +1077,9 @@ func TestConnectionPoolerServiceSpec(t *testing.T) {
				ConnectionPoolerDefaultMemoryRequest: "100Mi",
				ConnectionPoolerDefaultMemoryLimit: "100Mi",
			},
			Resources: config.Resources{
				EnableOwnerReferences: util.True(),
			},
		},
	}, k8sutil.KubernetesClient{}, acidv1.Postgresql{}, logger, eventRecorder)
	cluster.Statefulset = &appsv1.StatefulSet{

@ -46,7 +46,7 @@ const (
	createExtensionSQL = `CREATE EXTENSION IF NOT EXISTS "%s" SCHEMA "%s"`
	alterExtensionSQL = `ALTER EXTENSION "%s" SET SCHEMA "%s"`

	getPublicationsSQL = `SELECT p.pubname, string_agg(pt.schemaname || '.' || pt.tablename, ', ' ORDER BY pt.schemaname, pt.tablename)
	getPublicationsSQL = `SELECT p.pubname, COALESCE(string_agg(pt.schemaname || '.' || pt.tablename, ', ' ORDER BY pt.schemaname, pt.tablename), '') AS pubtables
		FROM pg_publication p
		LEFT JOIN pg_publication_tables pt ON pt.pubname = p.pubname
		WHERE p.pubowner = 'postgres'::regrole

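The added COALESCE matters on the consumer side: `string_agg` yields SQL NULL for a publication with no tables (because of the LEFT JOIN), and `database/sql` cannot scan NULL into a plain Go string. A small sketch of the scanning code this change protects; the function and variable names are illustrative, not the operator's:

```go
package sketch

import "database/sql"

// scanPublications shows why COALESCE(..., '') is needed: without it, a
// publication with zero tables would produce a NULL aggregate, and scanning
// NULL into a plain string errors out (or would force sql.NullString).
func scanPublications(rows *sql.Rows) (map[string]string, error) {
	publications := make(map[string]string)
	for rows.Next() {
		var name, tables string // safe only because the query coalesces NULL to ''
		if err := rows.Scan(&name, &tables); err != nil {
			return nil, err
		}
		publications[name] = tables
	}
	return publications, rows.Err()
}
```
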
@ -111,7 +111,7 @@ func (c *Cluster) pgConnectionString(dbname string) string {

func (c *Cluster) databaseAccessDisabled() bool {
	if !c.OpConfig.EnableDBAccess {
		c.logger.Debugf("database access is disabled")
		c.logger.Debug("database access is disabled")
	}

	return !c.OpConfig.EnableDBAccess

@ -15,7 +15,7 @@ import (
	"github.com/zalando/postgres-operator/pkg/util/constants"
)

//ExecCommand executes arbitrary command inside the pod
// ExecCommand executes arbitrary command inside the pod
func (c *Cluster) ExecCommand(podName *spec.NamespacedName, command ...string) (string, error) {
	c.setProcessName("executing command %q", strings.Join(command, " "))

@ -59,7 +59,7 @@ func (c *Cluster) ExecCommand(podName *spec.NamespacedName, command ...string) (
		return "", fmt.Errorf("failed to init executor: %v", err)
	}

	err = exec.Stream(remotecommand.StreamOptions{
	err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
		Stdout: &execOut,
		Stderr: &execErr,
		Tty: false,

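`Stream` is deprecated in client-go in favor of `StreamWithContext`, which makes a remote exec cancellable instead of potentially blocking forever. A minimal sketch assuming an already-built SPDY executor; the five-second timeout is an arbitrary example, not the operator's behavior (the diff passes `context.TODO()`):

```go
package sketch

import (
	"bytes"
	"context"
	"time"

	"k8s.io/client-go/tools/remotecommand"
)

// run streams a remote command with a deadline; when the context expires or
// is cancelled, the exec is aborted and StreamWithContext returns an error.
func run(exec remotecommand.Executor) (string, string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	var stdout, stderr bytes.Buffer
	err := exec.StreamWithContext(ctx, remotecommand.StreamOptions{
		Stdout: &stdout,
		Stderr: &stderr,
		Tty:    false,
	})
	return stdout.String(), stderr.String(), err
}
```
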
@ -47,11 +47,6 @@ const (
	operatorPort = 8080
)

type pgUser struct {
	Password string `json:"password"`
	Options []string `json:"options"`
}

type patroniDCS struct {
	TTL uint32 `json:"ttl,omitempty"`
	LoopWait uint32 `json:"loop_wait,omitempty"`

@ -79,19 +74,13 @@ func (c *Cluster) statefulSetName() string {
	return c.Name
}

func (c *Cluster) endpointName(role PostgresRole) string {
	name := c.Name
	if role == Replica {
		name = fmt.Sprintf("%s-%s", name, "repl")
	}

	return name
}

func (c *Cluster) serviceName(role PostgresRole) string {
	name := c.Name
	if role == Replica {
	switch role {
	case Replica:
		name = fmt.Sprintf("%s-%s", name, "repl")
	case Patroni:
		name = fmt.Sprintf("%s-%s", name, "config")
	}

	return name

@ -120,10 +109,15 @@ func (c *Cluster) servicePort(role PostgresRole) int32 {
	return pgPort
}

func (c *Cluster) podDisruptionBudgetName() string {
func (c *Cluster) PrimaryPodDisruptionBudgetName() string {
	return c.OpConfig.PDBNameFormat.Format("cluster", c.Name)
}

func (c *Cluster) criticalOpPodDisruptionBudgetName() string {
	pdbTemplate := config.StringTemplate("postgres-{cluster}-critical-op-pdb")
	return pdbTemplate.Format("cluster", c.Name)
}

func makeDefaultResources(config *config.Config) acidv1.Resources {

	defaultRequests := acidv1.ResourceDescription{

@ -750,7 +744,7 @@ func (c *Cluster) generateSidecarContainers(sidecars []acidv1.Sidecar,
}

// adds common fields to sidecars
func patchSidecarContainers(in []v1.Container, volumeMounts []v1.VolumeMount, superUserName string, credentialsSecretName string, logger *logrus.Entry) []v1.Container {
func patchSidecarContainers(in []v1.Container, volumeMounts []v1.VolumeMount, superUserName string, credentialsSecretName string) []v1.Container {
	result := []v1.Container{}

	for _, container := range in {

@ -1016,6 +1010,9 @@ func (c *Cluster) generateSpiloPodEnvVars(

	if c.patroniUsesKubernetes() {
		envVars = append(envVars, v1.EnvVar{Name: "DCS_ENABLE_KUBERNETES_API", Value: "true"})
		if c.OpConfig.EnablePodDisruptionBudget != nil && *c.OpConfig.EnablePodDisruptionBudget {
			envVars = append(envVars, v1.EnvVar{Name: "KUBERNETES_BOOTSTRAP_LABELS", Value: "{\"critical-operation\":\"true\"}"})
		}
	} else {
		envVars = append(envVars, v1.EnvVar{Name: "ETCD_HOST", Value: c.OpConfig.EtcdHost})
	}

@ -1233,6 +1230,7 @@ func getSidecarContainer(sidecar acidv1.Sidecar, index int, resources *v1.Resour
		Resources: *resources,
		Env: sidecar.Env,
		Ports: sidecar.Ports,
		Command: sidecar.Command,
	}
}

@ -1455,7 +1453,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
			containerName, containerName)
	}

	sidecarContainers = patchSidecarContainers(sidecarContainers, volumeMounts, c.OpConfig.SuperUsername, c.credentialSecretName(c.OpConfig.SuperUsername), c.logger)
	sidecarContainers = patchSidecarContainers(sidecarContainers, volumeMounts, c.OpConfig.SuperUsername, c.credentialSecretName(c.OpConfig.SuperUsername))

	tolerationSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration)
	effectivePodPriorityClassName := util.Coalesce(spec.PodPriorityClassName, c.OpConfig.PodPriorityClassName)

@ -1530,10 +1528,11 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef

	statefulSet := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{
			Name: c.statefulSetName(),
			Namespace: c.Namespace,
			Labels: c.labelsSet(true),
			Annotations: c.AnnotationsToPropagate(c.annotationsSet(nil)),
			Name: c.statefulSetName(),
			Namespace: c.Namespace,
			Labels: c.labelsSet(true),
			Annotations: c.AnnotationsToPropagate(c.annotationsSet(nil)),
			OwnerReferences: c.ownerReferences(),
		},
		Spec: appsv1.StatefulSetSpec{
			Replicas: &numberOfInstances,

@ -1608,7 +1607,7 @@ func (c *Cluster) generatePodAnnotations(spec *acidv1.PostgresSpec) map[string]s
	for k, v := range c.OpConfig.CustomPodAnnotations {
		annotations[k] = v
	}
	if spec != nil || spec.PodAnnotations != nil {
	if spec.PodAnnotations != nil {
		for k, v := range spec.PodAnnotations {
			annotations[k] = v
		}

@ -1869,7 +1868,7 @@ func (c *Cluster) generatePersistentVolumeClaimTemplate(volumeSize, volumeStorag
		},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
			Resources: v1.VolumeResourceRequirements{
				Requests: v1.ResourceList{
					v1.ResourceStorage: quantity,
				},

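`VolumeResourceRequirements` is the narrower type that newer `k8s.io/api` releases require for PVC specs (it carries only `Requests` and `Limits`, dropping the container-only `Claims` field), so this hunk reads as a mechanical follow-up to the dependency bump. A sketch of the new shape with an arbitrary 1Gi request:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// pvcSpec builds a claim spec with the VolumeResourceRequirements type that
// replaced ResourceRequirements for PVCs in recent API versions.
func pvcSpec() v1.PersistentVolumeClaimSpec {
	return v1.PersistentVolumeClaimSpec{
		AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		Resources: v1.VolumeResourceRequirements{
			Requests: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("1Gi"), // example size
			},
		},
	}
}
```
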
@ -1885,18 +1884,16 @@ func (c *Cluster) generatePersistentVolumeClaimTemplate(volumeSize, volumeStorag

func (c *Cluster) generateUserSecrets() map[string]*v1.Secret {
	secrets := make(map[string]*v1.Secret, len(c.pgUsers)+len(c.systemUsers))
	namespace := c.Namespace
	for username, pgUser := range c.pgUsers {
		//Skip users with no password i.e. human users (they'll be authenticated using pam)
		secret := c.generateSingleUserSecret(pgUser.Namespace, pgUser)
		secret := c.generateSingleUserSecret(pgUser)
		if secret != nil {
			secrets[username] = secret
		}
		namespace = pgUser.Namespace
	}
	/* special case for the system user */
	for _, systemUser := range c.systemUsers {
		secret := c.generateSingleUserSecret(namespace, systemUser)
		secret := c.generateSingleUserSecret(systemUser)
		if secret != nil {
			secrets[systemUser.Name] = secret
		}

@ -1905,7 +1902,7 @@ func (c *Cluster) generateUserSecrets() map[string]*v1.Secret {
	return secrets
}

func (c *Cluster) generateSingleUserSecret(namespace string, pgUser spec.PgUser) *v1.Secret {
func (c *Cluster) generateSingleUserSecret(pgUser spec.PgUser) *v1.Secret {
	//Skip users with no password i.e. human users (they'll be authenticated using pam)
	if pgUser.Password == "" {
		if pgUser.Origin != spec.RoleOriginTeamsAPI {

@ -1929,12 +1926,21 @@ func (c *Cluster) generateSingleUserSecret(namespace string, pgUser spec.PgUser)
		lbls = c.connectionPoolerLabels("", false).MatchLabels
	}

	// if secret lives in another namespace we cannot set ownerReferences
	var ownerReferences []metav1.OwnerReference
	if c.Config.OpConfig.EnableCrossNamespaceSecret && strings.Contains(username, ".") {
		ownerReferences = nil
	} else {
		ownerReferences = c.ownerReferences()
	}

	secret := v1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name: c.credentialSecretName(username),
			Namespace: pgUser.Namespace,
			Labels: lbls,
			Annotations: c.annotationsSet(nil),
			Name: c.credentialSecretName(username),
			Namespace: pgUser.Namespace,
			Labels: lbls,
			Annotations: c.annotationsSet(nil),
			OwnerReferences: ownerReferences,
		},
		Type: v1.SecretTypeOpaque,
		Data: map[string][]byte{

@ -1992,10 +1998,11 @@ func (c *Cluster) generateService(role PostgresRole, spec *acidv1.PostgresSpec)

	service := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: c.serviceName(role),
			Namespace: c.Namespace,
			Labels: c.roleLabelsSet(true, role),
			Annotations: c.annotationsSet(c.generateServiceAnnotations(role, spec)),
			Name: c.serviceName(role),
			Namespace: c.Namespace,
			Labels: c.roleLabelsSet(true, role),
			Annotations: c.annotationsSet(c.generateServiceAnnotations(role, spec)),
			OwnerReferences: c.ownerReferences(),
		},
		Spec: serviceSpec,
	}

@ -2061,10 +2068,11 @@ func (c *Cluster) getCustomServiceAnnotations(role PostgresRole, spec *acidv1.Po
func (c *Cluster) generateEndpoint(role PostgresRole, subsets []v1.EndpointSubset) *v1.Endpoints {
	endpoints := &v1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{
			Name: c.endpointName(role),
			Namespace: c.Namespace,
			Annotations: c.annotationsSet(nil),
			Labels: c.roleLabelsSet(true, role),
			Name: c.serviceName(role),
			Namespace: c.Namespace,
			Annotations: c.annotationsSet(nil),
			Labels: c.roleLabelsSet(true, role),
			OwnerReferences: c.ownerReferences(),
		},
	}
	if len(subsets) > 0 {

@ -2207,7 +2215,7 @@ func (c *Cluster) generateStandbyEnvironment(description *acidv1.StandbyDescript
	return result
}

func (c *Cluster) generatePodDisruptionBudget() *policyv1.PodDisruptionBudget {
func (c *Cluster) generatePrimaryPodDisruptionBudget() *policyv1.PodDisruptionBudget {
	minAvailable := intstr.FromInt(1)
	pdbEnabled := c.OpConfig.EnablePodDisruptionBudget
	pdbMasterLabelSelector := c.OpConfig.PDBMasterLabelSelector

@ -2225,10 +2233,40 @@ func (c *Cluster) generatePodDisruptionBudget() *policyv1.PodDisruptionBudget {

	return &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{
			Name: c.podDisruptionBudgetName(),
			Namespace: c.Namespace,
			Labels: c.labelsSet(true),
			Annotations: c.annotationsSet(nil),
			Name: c.PrimaryPodDisruptionBudgetName(),
			Namespace: c.Namespace,
			Labels: c.labelsSet(true),
			Annotations: c.annotationsSet(nil),
			OwnerReferences: c.ownerReferences(),
		},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector: &metav1.LabelSelector{
				MatchLabels: labels,
			},
		},
	}
}

func (c *Cluster) generateCriticalOpPodDisruptionBudget() *policyv1.PodDisruptionBudget {
	minAvailable := intstr.FromInt32(c.Spec.NumberOfInstances)
	pdbEnabled := c.OpConfig.EnablePodDisruptionBudget

	// if PodDisruptionBudget is disabled or if there are no DB pods, set the budget to 0.
	if (pdbEnabled != nil && !(*pdbEnabled)) || c.Spec.NumberOfInstances <= 0 {
		minAvailable = intstr.FromInt(0)
	}

	labels := c.labelsSet(false)
	labels["critical-operation"] = "true"

	return &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{
			Name: c.criticalOpPodDisruptionBudgetName(),
			Namespace: c.Namespace,
			Labels: c.labelsSet(true),
			Annotations: c.annotationsSet(nil),
			OwnerReferences: c.ownerReferences(),
		},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,

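The cluster now carries two budgets: the existing primary PDB (minAvailable of 1 over the master selector) and this new critical-operation PDB, which forbids voluntary evictions of every pod labelled critical-operation=true, e.g. while a major version upgrade is running. A condensed sketch of the second budget; the name format and label keys mirror the diff, while the cluster name is illustrative:

```go
package sketch

import (
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// criticalOpPDB pins all instances marked as running a critical operation:
// with minAvailable equal to the replica count, the eviction API allows
// zero voluntary disruptions while the label is present.
func criticalOpPDB(clusterName string, instances int32) *policyv1.PodDisruptionBudget {
	minAvailable := intstr.FromInt32(instances)
	return &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{
			Name: "postgres-" + clusterName + "-critical-op-pdb",
		},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					"cluster-name":       clusterName,
					"critical-operation": "true",
				},
			},
		},
	}
}
```
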
@ -2361,10 +2399,11 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1.CronJob, error) {

	cronJob := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{
			Name: c.getLogicalBackupJobName(),
			Namespace: c.Namespace,
			Labels: c.labelsSet(true),
			Annotations: c.annotationsSet(nil),
			Name: c.getLogicalBackupJobName(),
			Namespace: c.Namespace,
			Labels: c.labelsSet(true),
			Annotations: c.annotationsSet(nil),
			OwnerReferences: c.ownerReferences(),
		},
		Spec: batchv1.CronJobSpec{
			Schedule: schedule,

@ -2478,7 +2517,9 @@ func (c *Cluster) generateLogicalBackupPodEnvVars() []v1.EnvVar {
		}

	case "gcs":
		envVars = append(envVars, v1.EnvVar{Name: "LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.LogicalBackup.LogicalBackupGoogleApplicationCredentials})
		if c.OpConfig.LogicalBackup.LogicalBackupGoogleApplicationCredentials != "" {
			envVars = append(envVars, v1.EnvVar{Name: "LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.LogicalBackup.LogicalBackupGoogleApplicationCredentials})
		}

	case "az":
		envVars = appendEnvVars(envVars, []v1.EnvVar{

@ -2489,11 +2530,11 @@ func (c *Cluster) generateLogicalBackupPodEnvVars() []v1.EnvVar {
			{
				Name: "LOGICAL_BACKUP_AZURE_STORAGE_CONTAINER",
				Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageContainer,
			},
			{
				Name: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_KEY",
				Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountKey,
			}}...)

		if c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountKey != "" {
			envVars = append(envVars, v1.EnvVar{Name: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_KEY", Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountKey})
		}
	}

	return envVars

@ -2519,22 +2560,26 @@ func (c *Cluster) getLogicalBackupJobName() (jobName string) {
// survived, we can't delete an object because it will affect the functioning
// cluster).
func (c *Cluster) ownerReferences() []metav1.OwnerReference {
	controller := true

	if c.Statefulset == nil {
		c.logger.Warning("Cannot get owner reference, no statefulset")
		return []metav1.OwnerReference{}
	currentOwnerReferences := c.ObjectMeta.OwnerReferences
	if c.OpConfig.EnableOwnerReferences == nil || !*c.OpConfig.EnableOwnerReferences {
		return currentOwnerReferences
	}

	return []metav1.OwnerReference{
		{
			UID: c.Statefulset.ObjectMeta.UID,
			APIVersion: "apps/v1",
			Kind: "StatefulSet",
			Name: c.Statefulset.ObjectMeta.Name,
			Controller: &controller,
		},
	for _, ownerRef := range currentOwnerReferences {
		if ownerRef.UID == c.Postgresql.ObjectMeta.UID {
			return currentOwnerReferences
		}
	}

	controllerReference := metav1.OwnerReference{
		UID: c.Postgresql.ObjectMeta.UID,
		APIVersion: acidv1.SchemeGroupVersion.Identifier(),
		Kind: acidv1.PostgresCRDResourceKind,
		Name: c.Postgresql.ObjectMeta.Name,
		Controller: util.True(),
	}

	return append(currentOwnerReferences, controllerReference)
}

func ensurePath(file string, defaultDir string, defaultFile string) string {

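With EnableOwnerReferences set, the controller reference now points at the Postgresql custom resource itself rather than the StatefulSet, so deleting the CR lets Kubernetes garbage-collect every child object carrying the reference. A sketch of what the generated reference boils down to; group, version and kind follow the acid.zalan.do CRD, and the UID would come from the live object:

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// controllerRef marks the Postgresql custom resource as the controlling
// owner of a child object; the garbage collector then cascades deletion of
// the CR to everything that carries this reference.
func controllerRef(name string, uid types.UID) metav1.OwnerReference {
	isController := true
	return metav1.OwnerReference{
		APIVersion: "acid.zalan.do/v1", // assumed CRD group/version
		Kind:       "postgresql",       // assumed CRD kind
		Name:       name,
		UID:        uid,
		Controller: &isController,
	}
}
```
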
@ -72,18 +72,18 @@ func TestGenerateSpiloJSONConfiguration(t *testing.T) {
	}{
		{
			subtest: "Patroni default configuration",
			pgParam: &acidv1.PostgresqlParam{PgVersion: "16"},
			pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
			patroni: &acidv1.Patroni{},
			opConfig: &config.Config{
				Auth: config.Auth{
					PamRoleName: "zalandos",
				},
			},
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{}}}`,
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{}}}`,
		},
		{
			subtest: "Patroni configured",
			pgParam: &acidv1.PostgresqlParam{PgVersion: "16"},
			pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
			patroni: &acidv1.Patroni{
				InitDB: map[string]string{
					"encoding": "UTF8",

@ -102,38 +102,38 @@ func TestGenerateSpiloJSONConfiguration(t *testing.T) {
				FailsafeMode: util.True(),
			},
			opConfig: &config.Config{},
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin","pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"synchronous_mode":true,"synchronous_mode_strict":true,"synchronous_node_count":1,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}},"failsafe_mode":true}}}`,
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin","pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"synchronous_mode":true,"synchronous_mode_strict":true,"synchronous_node_count":1,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}},"failsafe_mode":true}}}`,
		},
		{
			subtest: "Patroni failsafe_mode configured globally",
			pgParam: &acidv1.PostgresqlParam{PgVersion: "16"},
			pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
			patroni: &acidv1.Patroni{},
			opConfig: &config.Config{
				EnablePatroniFailsafeMode: util.True(),
			},
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`,
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`,
		},
		{
			subtest: "Patroni failsafe_mode configured globally, disabled for cluster",
			pgParam: &acidv1.PostgresqlParam{PgVersion: "16"},
			pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
			patroni: &acidv1.Patroni{
				FailsafeMode: util.False(),
			},
			opConfig: &config.Config{
				EnablePatroniFailsafeMode: util.True(),
			},
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":false}}}`,
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":false}}}`,
		},
		{
			subtest: "Patroni failsafe_mode disabled globally, configured for cluster",
			pgParam: &acidv1.PostgresqlParam{PgVersion: "16"},
			pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
			patroni: &acidv1.Patroni{
				FailsafeMode: util.True(),
			},
			opConfig: &config.Config{
				EnablePatroniFailsafeMode: util.False(),
			},
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`,
			result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`,
		},
	}
	for _, tt := range tests {

@ -164,15 +164,15 @@ func TestExtractPgVersionFromBinPath(t *testing.T) {
		},
		{
			subTest: "test current bin path against hard coded template",
			binPath: "/usr/lib/postgresql/16/bin",
			binPath: "/usr/lib/postgresql/17/bin",
			template: pgBinariesLocationTemplate,
			expected: "16",
			expected: "17",
		},
		{
			subTest: "test alternative bin path against a matching template",
			binPath: "/usr/pgsql-16/bin",
			binPath: "/usr/pgsql-17/bin",
			template: "/usr/pgsql-%v/bin",
			expected: "16",
			expected: "17",
		},
	}

@ -1451,9 +1451,9 @@ func TestNodeAffinity(t *testing.T) {
	nodeAff := &v1.NodeAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
			NodeSelectorTerms: []v1.NodeSelectorTerm{
				v1.NodeSelectorTerm{
				{
					MatchExpressions: []v1.NodeSelectorRequirement{
						v1.NodeSelectorRequirement{
						{
							Key: "test-label",
							Operator: v1.NodeSelectorOpIn,
							Values: []string{

@ -1566,22 +1566,28 @@ func TestPodAffinity(t *testing.T) {
}

func testDeploymentOwnerReference(cluster *Cluster, deployment *appsv1.Deployment) error {
	if len(deployment.ObjectMeta.OwnerReferences) == 0 {
		return nil
	}
	owner := deployment.ObjectMeta.OwnerReferences[0]

	if owner.Name != cluster.Statefulset.ObjectMeta.Name {
		return fmt.Errorf("Ownere reference is incorrect, got %s, expected %s",
			owner.Name, cluster.Statefulset.ObjectMeta.Name)
	if owner.Name != cluster.Postgresql.ObjectMeta.Name {
		return fmt.Errorf("Owner reference is incorrect, got %s, expected %s",
			owner.Name, cluster.Postgresql.ObjectMeta.Name)
	}

	return nil
}

func testServiceOwnerReference(cluster *Cluster, service *v1.Service, role PostgresRole) error {
	if len(service.ObjectMeta.OwnerReferences) == 0 {
		return nil
	}
	owner := service.ObjectMeta.OwnerReferences[0]

	if owner.Name != cluster.Statefulset.ObjectMeta.Name {
		return fmt.Errorf("Ownere reference is incorrect, got %s, expected %s",
			owner.Name, cluster.Statefulset.ObjectMeta.Name)
	if owner.Name != cluster.Postgresql.ObjectMeta.Name {
		return fmt.Errorf("Owner reference is incorrect, got %s, expected %s",
			owner.Name, cluster.Postgresql.ObjectMeta.Name)
	}

	return nil

@ -1667,7 +1673,7 @@ func TestTLS(t *testing.T) {
	TLS: &acidv1.TLSDescription{
		SecretName: tlsSecretName, CAFile: "ca.crt"},
	AdditionalVolumes: []acidv1.AdditionalVolume{
		acidv1.AdditionalVolume{
		{
			Name: tlsSecretName,
			MountPath: mountPath,
			VolumeSource: v1.VolumeSource{

@ -2142,7 +2148,7 @@ func TestSidecars(t *testing.T) {

	spec = acidv1.PostgresSpec{
		PostgresqlParam: acidv1.PostgresqlParam{
			PgVersion: "16",
			PgVersion: "17",
			Parameters: map[string]string{
				"max_connections": "100",
			},

@ -2156,17 +2162,17 @@ func TestSidecars(t *testing.T) {
			Size: "1G",
		},
		Sidecars: []acidv1.Sidecar{
			acidv1.Sidecar{
			{
				Name: "cluster-specific-sidecar",
			},
			acidv1.Sidecar{
			{
				Name: "cluster-specific-sidecar-with-resources",
				Resources: &acidv1.Resources{
					ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("210m"), Memory: k8sutil.StringToPointer("0.8Gi")},
					ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("510m"), Memory: k8sutil.StringToPointer("1.4Gi")},
				},
			},
			acidv1.Sidecar{
			{
				Name: "replace-sidecar",
				DockerImage: "override-image",
			},

@ -2194,11 +2200,11 @@ func TestSidecars(t *testing.T) {
			"deprecated-global-sidecar": "image:123",
		},
		SidecarContainers: []v1.Container{
			v1.Container{
			{
				Name: "global-sidecar",
			},
			// will be replaced by a cluster specific sidecar with the same name
			v1.Container{
			{
				Name: "replace-sidecar",
				Image: "replaced-image",
			},

@ -2253,7 +2259,7 @@ func TestSidecars(t *testing.T) {
		},
	}
	mounts := []v1.VolumeMount{
		v1.VolumeMount{
		{
			Name: "pgdata",
			MountPath: "/home/postgres/pgdata",
		},

@ -2320,13 +2326,81 @@ func TestSidecars(t *testing.T) {
}

func TestGeneratePodDisruptionBudget(t *testing.T) {
	testName := "Test PodDisruptionBudget spec generation"

	hasName := func(pdbName string) func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
		return func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
			if pdbName != podDisruptionBudget.ObjectMeta.Name {
				return fmt.Errorf("PodDisruptionBudget name is incorrect, got %s, expected %s",
					podDisruptionBudget.ObjectMeta.Name, pdbName)
			}
			return nil
		}
	}

	hasMinAvailable := func(expectedMinAvailable int) func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
		return func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
			actual := podDisruptionBudget.Spec.MinAvailable.IntVal
			if actual != int32(expectedMinAvailable) {
				return fmt.Errorf("PodDisruptionBudget MinAvailable is incorrect, got %d, expected %d",
					actual, expectedMinAvailable)
			}
			return nil
		}
	}

	testLabelsAndSelectors := func(isPrimary bool) func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
		return func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
			masterLabelSelectorDisabled := cluster.OpConfig.PDBMasterLabelSelector != nil && !*cluster.OpConfig.PDBMasterLabelSelector
			if podDisruptionBudget.ObjectMeta.Namespace != "myapp" {
				return fmt.Errorf("Object Namespace incorrect.")
			}
			expectedLabels := map[string]string{"team": "myapp", "cluster-name": "myapp-database"}
			if !reflect.DeepEqual(podDisruptionBudget.Labels, expectedLabels) {
				return fmt.Errorf("Labels incorrect, got %#v, expected %#v", podDisruptionBudget.Labels, expectedLabels)
			}
			if !masterLabelSelectorDisabled {
				if isPrimary {
					expectedLabels := &metav1.LabelSelector{
						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"}}
					if !reflect.DeepEqual(podDisruptionBudget.Spec.Selector, expectedLabels) {
						return fmt.Errorf("MatchLabels incorrect, got %#v, expected %#v", podDisruptionBudget.Spec.Selector, expectedLabels)
					}
				} else {
					expectedLabels := &metav1.LabelSelector{
						MatchLabels: map[string]string{"cluster-name": "myapp-database", "critical-operation": "true"}}
					if !reflect.DeepEqual(podDisruptionBudget.Spec.Selector, expectedLabels) {
						return fmt.Errorf("MatchLabels incorrect, got %#v, expected %#v", podDisruptionBudget.Spec.Selector, expectedLabels)
					}
				}
			}

			return nil
		}
	}

	testPodDisruptionBudgetOwnerReference := func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
		if len(podDisruptionBudget.ObjectMeta.OwnerReferences) == 0 {
			return nil
		}
		owner := podDisruptionBudget.ObjectMeta.OwnerReferences[0]

		if owner.Name != cluster.Postgresql.ObjectMeta.Name {
			return fmt.Errorf("Owner reference is incorrect, got %s, expected %s",
				owner.Name, cluster.Postgresql.ObjectMeta.Name)
		}

		return nil
	}

	tests := []struct {
		c *Cluster
		out policyv1.PodDisruptionBudget
		scenario string
		spec *Cluster
		check []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error
	}{
		// With multiple instances.
		{
			New(
			scenario: "With multiple instances",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{

@ -2334,23 +2408,16 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			policyv1.PodDisruptionBudget{
				ObjectMeta: metav1.ObjectMeta{
					Name: "postgres-myapp-database-pdb",
					Namespace: "myapp",
					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
				},
				Spec: policyv1.PodDisruptionBudgetSpec{
					MinAvailable: util.ToIntStr(1),
					Selector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
					},
				},
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-pdb"),
				hasMinAvailable(1),
				testLabelsAndSelectors(true),
			},
		},
		// With zero instances.
		{
			New(
			scenario: "With zero instances",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{

@ -2358,23 +2425,16 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 0}},
				logger,
				eventRecorder),
			policyv1.PodDisruptionBudget{
				ObjectMeta: metav1.ObjectMeta{
					Name: "postgres-myapp-database-pdb",
					Namespace: "myapp",
					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
				},
				Spec: policyv1.PodDisruptionBudgetSpec{
					MinAvailable: util.ToIntStr(0),
					Selector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
					},
				},
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-pdb"),
				hasMinAvailable(0),
				testLabelsAndSelectors(true),
			},
		},
		// With PodDisruptionBudget disabled.
		{
			New(
			scenario: "With PodDisruptionBudget disabled",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.False()}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{

@ -2382,23 +2442,16 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			policyv1.PodDisruptionBudget{
				ObjectMeta: metav1.ObjectMeta{
					Name: "postgres-myapp-database-pdb",
					Namespace: "myapp",
					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
				},
				Spec: policyv1.PodDisruptionBudgetSpec{
					MinAvailable: util.ToIntStr(0),
					Selector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
					},
				},
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-pdb"),
				hasMinAvailable(0),
				testLabelsAndSelectors(true),
			},
		},
		// With non-default PDBNameFormat and PodDisruptionBudget explicitly enabled.
		{
			New(
			scenario: "With non-default PDBNameFormat and PodDisruptionBudget explicitly enabled",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-databass-budget", EnablePodDisruptionBudget: util.True()}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{

@ -2406,50 +2459,143 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			policyv1.PodDisruptionBudget{
				ObjectMeta: metav1.ObjectMeta{
					Name: "postgres-myapp-database-databass-budget",
					Namespace: "myapp",
					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
				},
				Spec: policyv1.PodDisruptionBudgetSpec{
					MinAvailable: util.ToIntStr(1),
					Selector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
					},
				},
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-databass-budget"),
				hasMinAvailable(1),
				testLabelsAndSelectors(true),
			},
		},
		// With PDBMasterLabelSelector disabled.
		{
			New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", PDBMasterLabelSelector: util.False()}},
			scenario: "With PDBMasterLabelSelector disabled",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.True(), PDBMasterLabelSelector: util.False()}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{
					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			policyv1.PodDisruptionBudget{
				ObjectMeta: metav1.ObjectMeta{
					Name: "postgres-myapp-database-pdb",
					Namespace: "myapp",
					Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
				},
				Spec: policyv1.PodDisruptionBudgetSpec{
					MinAvailable: util.ToIntStr(1),
					Selector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"cluster-name": "myapp-database"},
					},
				},
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-pdb"),
				hasMinAvailable(1),
				testLabelsAndSelectors(true),
			},
		},
		{
			scenario: "With OwnerReference enabled",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role", EnableOwnerReferences: util.True()}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.True()}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{
					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-pdb"),
				hasMinAvailable(1),
				testLabelsAndSelectors(true),
			},
		},
	}

	for _, tt := range tests {
		result := tt.c.generatePodDisruptionBudget()
		if !reflect.DeepEqual(*result, tt.out) {
			t.Errorf("Expected PodDisruptionBudget: %#v, got %#v", tt.out, *result)
		result := tt.spec.generatePrimaryPodDisruptionBudget()
		for _, check := range tt.check {
			err := check(tt.spec, result)
			if err != nil {
				t.Errorf("%s [%s]: PodDisruptionBudget spec is incorrect, %+v",
					testName, tt.scenario, err)
			}
		}
	}

	testCriticalOp := []struct {
		scenario string
		spec *Cluster
		check []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error
	}{
		{
			scenario: "With multiple instances",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{
					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-critical-op-pdb"),
				hasMinAvailable(3),
				testLabelsAndSelectors(false),
			},
		},
		{
			scenario: "With zero instances",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{
					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 0}},
				logger,
				eventRecorder),
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-critical-op-pdb"),
				hasMinAvailable(0),
				testLabelsAndSelectors(false),
			},
		},
		{
			scenario: "With PodDisruptionBudget disabled",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.False()}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{
					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-critical-op-pdb"),
				hasMinAvailable(0),
				testLabelsAndSelectors(false),
			},
		},
		{
			scenario: "With OwnerReference enabled",
			spec: New(
				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role", EnableOwnerReferences: util.True()}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.True()}},
				k8sutil.KubernetesClient{},
				acidv1.Postgresql{
					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
					Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
				logger,
				eventRecorder),
			check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
				testPodDisruptionBudgetOwnerReference,
				hasName("postgres-myapp-database-critical-op-pdb"),
				hasMinAvailable(3),
				testLabelsAndSelectors(false),
			},
		},
	}

	for _, tt := range testCriticalOp {
		result := tt.spec.generateCriticalOpPodDisruptionBudget()
		for _, check := range tt.check {
			err := check(tt.spec, result)
			if err != nil {
				t.Errorf("%s [%s]: PodDisruptionBudget spec is incorrect, %+v",
					testName, tt.scenario, err)
			}
		}
	}
}

@ -2468,17 +2614,17 @@ func TestGenerateService(t *testing.T) {
			Size: "1G",
		},
		Sidecars: []acidv1.Sidecar{
			acidv1.Sidecar{
			{
				Name: "cluster-specific-sidecar",
			},
			acidv1.Sidecar{
			{
				Name: "cluster-specific-sidecar-with-resources",
				Resources: &acidv1.Resources{
					ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("210m"), Memory: k8sutil.StringToPointer("0.8Gi")},
					ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("510m"), Memory: k8sutil.StringToPointer("1.4Gi")},
				},
			},
			acidv1.Sidecar{
			{
				Name: "replace-sidecar",
				DockerImage: "override-image",
			},

@ -2507,11 +2653,11 @@ func TestGenerateService(t *testing.T) {
			"deprecated-global-sidecar": "image:123",
		},
		SidecarContainers: []v1.Container{
			v1.Container{
			{
				Name: "global-sidecar",
			},
			// will be replaced by a cluster specific sidecar with the same name
			v1.Container{
			{
				Name: "replace-sidecar",
				Image: "replaced-image",
			},

@ -2606,27 +2752,27 @@ func newLBFakeClient() (k8sutil.KubernetesClient, *fake.Clientset) {

func getServices(serviceType v1.ServiceType, sourceRanges []string, extTrafficPolicy, clusterName string) []v1.ServiceSpec {
	return []v1.ServiceSpec{
		v1.ServiceSpec{
		{
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
			LoadBalancerSourceRanges: sourceRanges,
			Ports: []v1.ServicePort{{Name: "postgresql", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},
			Type: serviceType,
		},
		v1.ServiceSpec{
		{
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
			LoadBalancerSourceRanges: sourceRanges,
			Ports: []v1.ServicePort{{Name: clusterName + "-pooler", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},
			Selector: map[string]string{"connection-pooler": clusterName + "-pooler"},
			Type: serviceType,
		},
		v1.ServiceSpec{
		{
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
			LoadBalancerSourceRanges: sourceRanges,
			Ports: []v1.ServicePort{{Name: "postgresql", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},
			Selector: map[string]string{"spilo-role": "replica", "application": "spilo", "cluster-name": clusterName},
			Type: serviceType,
		},
		v1.ServiceSpec{
		{
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
			LoadBalancerSourceRanges: sourceRanges,
			Ports: []v1.ServicePort{{Name: clusterName + "-pooler-repl", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},

@ -2846,7 +2992,7 @@ func TestGenerateResourceRequirements(t *testing.T) {
			},
			Spec: acidv1.PostgresSpec{
				Sidecars: []acidv1.Sidecar{
					acidv1.Sidecar{
					{
						Name: sidecarName,
					},
				},

@ -2945,6 +3091,44 @@ func TestGenerateResourceRequirements(t *testing.T) {
				ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("100m"), Memory: k8sutil.StringToPointer("100Mi")},
			},
		},
		{
			subTest: "test generation of resources when min limits are all set to zero",
			config: config.Config{
				Resources: config.Resources{
					ClusterLabels: map[string]string{"application": "spilo"},
					ClusterNameLabel: clusterNameLabel,
					DefaultCPURequest: "0",
					DefaultCPULimit: "0",
					MaxCPURequest: "0",
					MinCPULimit: "0",
					DefaultMemoryRequest: "0",
					DefaultMemoryLimit: "0",
					MaxMemoryRequest: "0",
					MinMemoryLimit: "0",
					PodRoleLabel: "spilo-role",
				},
				PodManagementPolicy: "ordered_ready",
				SetMemoryRequestToLimit: false,
			},
			pgSpec: acidv1.Postgresql{
				ObjectMeta: metav1.ObjectMeta{
					Name: clusterName,
					Namespace: namespace,
				},
				Spec: acidv1.PostgresSpec{
					Resources: &acidv1.Resources{
						ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("5m"), Memory: k8sutil.StringToPointer("5Mi")},
					},
					TeamID: "acid",
					Volume: acidv1.Volume{
						Size: "1G",
					},
				},
			},
			expectedResources: acidv1.Resources{
				ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("5m"), Memory: k8sutil.StringToPointer("5Mi")},
			},
		},
		{
			subTest: "test matchLimitsWithRequestsIfSmaller",
			config: config.Config{

@ -3047,7 +3231,7 @@ func TestGenerateResourceRequirements(t *testing.T) {
			},
			Spec: acidv1.PostgresSpec{
				Sidecars: []acidv1.Sidecar{
					acidv1.Sidecar{
					{
						Name: sidecarName,
						Resources: &acidv1.Resources{
							ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("10m"), Memory: k8sutil.StringToPointer("10Mi")},

@ -3136,7 +3320,7 @@ func TestGenerateResourceRequirements(t *testing.T) {
			},
			Spec: acidv1.PostgresSpec{
				Sidecars: []acidv1.Sidecar{
					acidv1.Sidecar{
					{
						Name: sidecarName,
						Resources: &acidv1.Resources{
							ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("10m"), Memory: k8sutil.StringToPointer("10Mi")},

@ -3541,6 +3725,11 @@ func TestGenerateLogicalBackupJob(t *testing.T) {
		cluster.Spec.LogicalBackupSchedule = tt.specSchedule
		cronJob, err := cluster.generateLogicalBackupJob()
		assert.NoError(t, err)

		if !reflect.DeepEqual(cronJob.ObjectMeta.OwnerReferences, cluster.ownerReferences()) {
			t.Errorf("%s - %s: expected owner references %#v, got %#v", t.Name(), tt.subTest, cluster.ownerReferences(), cronJob.ObjectMeta.OwnerReferences)
		}

		if cronJob.Spec.Schedule != tt.expectedSchedule {
			t.Errorf("%s - %s: expected schedule %s, got %s", t.Name(), tt.subTest, tt.expectedSchedule, cronJob.Spec.Schedule)
		}

@ -1,24 +1,34 @@
package cluster

import (
	"context"
	"encoding/json"
	"fmt"
	"strings"

	"github.com/Masterminds/semver"
	"github.com/zalando/postgres-operator/pkg/spec"
	"github.com/zalando/postgres-operator/pkg/util"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// VersionMap Map of version numbers
var VersionMap = map[string]int{
	"11": 110000,
	"12": 120000,
	"13": 130000,
	"14": 140000,
	"15": 150000,
	"16": 160000,
	"17": 170000,
}

const (
	majorVersionUpgradeSuccessAnnotation = "last-major-upgrade-success"
	majorVersionUpgradeFailureAnnotation = "last-major-upgrade-failure"
)

// IsBiggerPostgresVersion Compare two Postgres version numbers
func IsBiggerPostgresVersion(old string, new string) bool {
	oldN := VersionMap[old]

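IsBiggerPostgresVersion leans on the integer map above, and a version string missing from the map falls back to Go's zero value 0, so unknown versions never compare as newer. A tiny illustration of the comparison under that assumption (local names are illustrative):

```go
package sketch

// versionMap mirrors the integer mapping from the diff: "13" -> 130000,
// "17" -> 170000, so an upgrade from 13 to 17 compares as "bigger", while
// an unmapped version yields 0 and never triggers an upgrade.
var versionMap = map[string]int{"13": 130000, "17": 170000}

func isBigger(old, new string) bool {
	return versionMap[new] > versionMap[old]
}
```
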
@ -35,7 +45,7 @@ func (c *Cluster) GetDesiredMajorVersionAsInt() int {
func (c *Cluster) GetDesiredMajorVersion() string {

	if c.Config.OpConfig.MajorVersionUpgradeMode == "full" {
		// e.g. current is 12, minimal is 12 allowing 12 to 16 clusters, everything below is upgraded
		// e.g. current is 13, minimal is 13 allowing 13 to 17 clusters, everything below is upgraded
		if IsBiggerPostgresVersion(c.Spec.PgVersion, c.Config.OpConfig.MinimalMajorVersion) {
			c.logger.Infof("overwriting configured major version %s to %s", c.Spec.PgVersion, c.Config.OpConfig.TargetMajorVersion)
			return c.Config.OpConfig.TargetMajorVersion

@ -55,6 +65,63 @@ func (c *Cluster) isUpgradeAllowedForTeam(owningTeam string) bool {
|
|||
return util.SliceContains(allowedTeams, owningTeam)
|
||||
}
|
||||
|
||||
func (c *Cluster) annotatePostgresResource(isSuccess bool) error {
|
||||
annotations := make(map[string]string)
|
||||
currentTime := metav1.Now().Format("2006-01-02T15:04:05Z")
|
||||
if isSuccess {
|
||||
annotations[majorVersionUpgradeSuccessAnnotation] = currentTime
|
||||
} else {
|
||||
annotations[majorVersionUpgradeFailureAnnotation] = currentTime
|
||||
}
|
||||
patchData, err := metaAnnotationsPatch(annotations)
|
||||
if err != nil {
|
||||
c.logger.Errorf("could not form patch for %s postgresql resource: %v", c.Name, err)
|
||||
return err
|
||||
}
|
||||
_, err = c.KubeClient.Postgresqls(c.Namespace).Patch(context.Background(), c.Name, types.MergePatchType, patchData, metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
c.logger.Errorf("failed to patch annotations to postgresql resource: %v", err)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) removeFailuresAnnotation() error {
|
||||
annotationToRemove := []map[string]string{
|
||||
{
|
||||
"op": "remove",
|
||||
"path": fmt.Sprintf("/metadata/annotations/%s", majorVersionUpgradeFailureAnnotation),
|
||||
},
|
||||
}
|
||||
removePatch, err := json.Marshal(annotationToRemove)
|
||||
if err != nil {
|
||||
c.logger.Errorf("could not form removal patch for %s postgresql resource: %v", c.Name, err)
|
||||
return err
|
||||
}
|
||||
_, err = c.KubeClient.Postgresqls(c.Namespace).Patch(context.Background(), c.Name, types.JSONPatchType, removePatch, metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
c.logger.Errorf("failed to remove annotations from postgresql resource: %v", err)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
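
The two annotation helpers above use different patch flavors: a merge patch to set the success/failure timestamp, and a JSON Patch to remove the failure marker. A standalone sketch of the payloads, assuming only the annotation keys defined above (a JSON Patch "remove" fails when the path is absent, which is why callers check for the annotation first):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Merge patch (types.MergePatchType): sets or overwrites the annotation.
	addPatch, _ := json.Marshal(map[string]map[string]map[string]string{
		"metadata": {"annotations": {"last-major-upgrade-success": "2006-01-02T15:04:05Z"}},
	})
	fmt.Println(string(addPatch))

	// JSON Patch (types.JSONPatchType): removes the annotation, erroring if it is missing.
	removePatch, _ := json.Marshal([]map[string]string{
		{"op": "remove", "path": "/metadata/annotations/last-major-upgrade-failure"},
	})
	fmt.Println(string(removePatch))
}
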
|
||||
func (c *Cluster) criticalOperationLabel(pods []v1.Pod, value *string) error {
|
||||
metadataReq := map[string]map[string]map[string]*string{"metadata": {"labels": {"critical-operation": value}}}
|
||||
|
||||
patchReq, err := json.Marshal(metadataReq)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not marshal ObjectMeta: %v", err)
|
||||
}
|
||||
for _, pod := range pods {
|
||||
_, err = c.KubeClient.Pods(c.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patchReq, metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
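
criticalOperationLabel doubles as setter and remover: a nil *string marshals to JSON null, and a strategic merge patch treats null as "delete this key". A sketch of the two payloads it produces:

package main

import (
	"encoding/json"
	"fmt"
)

func labelPatch(value *string) []byte {
	req := map[string]map[string]map[string]*string{
		"metadata": {"labels": {"critical-operation": value}},
	}
	b, _ := json.Marshal(req)
	return b
}

func main() {
	v := "true"
	fmt.Println(string(labelPatch(&v))) // {"metadata":{"labels":{"critical-operation":"true"}}}
	fmt.Println(string(labelPatch(nil))) // {"metadata":{"labels":{"critical-operation":null}}} -> label removed
}
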
|
||||
/*
|
||||
Execute upgrade when mode is set to manual or full or when the owning team is allowed for upgrade (and mode is "off").
|
||||
|
||||
|
|
@ -70,6 +137,10 @@ func (c *Cluster) majorVersionUpgrade() error {
|
|||
desiredVersion := c.GetDesiredMajorVersionAsInt()
|
||||
|
||||
if c.currentMajorVersion >= desiredVersion {
|
||||
if _, exists := c.ObjectMeta.Annotations[majorVersionUpgradeFailureAnnotation]; exists { // if failure annotation exists, remove it
|
||||
c.removeFailuresAnnotation()
|
||||
c.logger.Infof("removing failure annotation as the cluster is already up to date")
|
||||
}
|
||||
c.logger.Infof("cluster version up to date. current: %d, min desired: %d", c.currentMajorVersion, desiredVersion)
|
||||
return nil
|
||||
}
|
||||
|
|
@ -80,59 +151,137 @@ func (c *Cluster) majorVersionUpgrade() error {
|
|||
}
|
||||
|
||||
allRunning := true
|
||||
isStandbyCluster := false
|
||||
|
||||
var masterPod *v1.Pod
|
||||
|
||||
for i, pod := range pods {
|
||||
ps, _ := c.patroni.GetMemberData(&pod)
|
||||
|
||||
if ps.Role == "standby_leader" {
|
||||
isStandbyCluster = true
|
||||
c.currentMajorVersion = ps.ServerVersion
|
||||
break
|
||||
}
|
||||
|
||||
if ps.State != "running" {
|
||||
allRunning = false
|
||||
c.logger.Infof("identified non running pod, potentially skipping major version upgrade")
|
||||
}
|
||||
|
||||
if ps.Role == "master" {
|
||||
if ps.Role == "master" || ps.Role == "primary" {
|
||||
masterPod = &pods[i]
|
||||
c.currentMajorVersion = ps.ServerVersion
|
||||
}
|
||||
}
|
||||
|
||||
// Recheck version with newest data from Patroni
|
||||
if c.currentMajorVersion >= desiredVersion {
|
||||
c.logger.Infof("recheck cluster version is already up to date. current: %d, min desired: %d", c.currentMajorVersion, desiredVersion)
|
||||
if masterPod == nil {
|
||||
c.logger.Infof("no master in the cluster, skipping major version upgrade")
|
||||
return nil
|
||||
}
|
||||
|
||||
// Recheck version with newest data from Patroni
|
||||
if c.currentMajorVersion >= desiredVersion {
|
||||
if _, exists := c.ObjectMeta.Annotations[majorVersionUpgradeFailureAnnotation]; exists { // if failure annotation exists, remove it
|
||||
c.removeFailuresAnnotation()
|
||||
c.logger.Infof("removing failure annotation as the cluster is already up to date")
|
||||
}
|
||||
c.logger.Infof("recheck cluster version is already up to date. current: %d, min desired: %d", c.currentMajorVersion, desiredVersion)
|
||||
return nil
|
||||
} else if isStandbyCluster {
|
||||
c.logger.Warnf("skipping major version upgrade for %s/%s standby cluster. Re-deploy standby cluster with the required Postgres version specified", c.Namespace, c.Name)
|
||||
return nil
|
||||
}
|
||||
|
||||
if _, exists := c.ObjectMeta.Annotations[majorVersionUpgradeFailureAnnotation]; exists {
|
||||
c.logger.Infof("last major upgrade failed, skipping upgrade")
|
||||
return nil
|
||||
}
|
||||
|
||||
if !isInMaintenanceWindow(c.Spec.MaintenanceWindows) {
|
||||
c.logger.Infof("skipping major version upgrade, not in maintenance window")
|
||||
return nil
|
||||
}
|
||||
|
||||
members, err := c.patroni.GetClusterMembers(masterPod)
|
||||
if err != nil {
|
||||
c.logger.Error("could not get cluster members data from Patroni API, skipping major version upgrade")
|
||||
return err
|
||||
}
|
||||
patroniData, err := c.patroni.GetMemberData(masterPod)
|
||||
if err != nil {
|
||||
c.logger.Error("could not get members data from Patroni API, skipping major version upgrade")
|
||||
return err
|
||||
}
|
||||
patroniVer, err := semver.NewVersion(patroniData.Patroni.Version)
|
||||
if err != nil {
|
||||
c.logger.Error("error parsing Patroni version")
|
||||
patroniVer, _ = semver.NewVersion("3.0.4")
|
||||
}
|
||||
verConstraint, _ := semver.NewConstraint(">= 3.0.4")
|
||||
checkStreaming, _ := verConstraint.Validate(patroniVer)
|
||||
|
||||
for _, member := range members {
|
||||
if PostgresRole(member.Role) == Leader {
|
||||
continue
|
||||
}
|
||||
if checkStreaming && member.State != "streaming" {
|
||||
c.logger.Infof("skipping major version upgrade, replica %s is not streaming from primary", member.Name)
|
||||
return nil
|
||||
}
|
||||
if member.Lag > 16*1024*1024 {
|
||||
c.logger.Infof("skipping major version upgrade, replication lag on member %s is too high", member.Name)
|
||||
return nil
|
||||
}
|
||||
}
|
||||
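
The loop above gates the upgrade on replica health: the "streaming" state is only trusted on Patroni >= 3.0.4 (older releases report healthy replicas as "running", hence the 3.0.4 fallback when parsing fails), and a 16 MiB lag cap applies regardless. A self-contained sketch of the same gate, assuming github.com/Masterminds/semver:

package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

type member struct {
	Name  string
	Role  string
	State string
	Lag   uint64
}

// replicasReady mirrors the gate: on Patroni >= 3.0.4 replicas must report
// "streaming"; on any version the replication lag must stay below 16 MiB.
func replicasReady(patroniVersion string, members []member) bool {
	v, err := semver.NewVersion(patroniVersion)
	if err != nil {
		v = semver.MustParse("3.0.4") // fall back as the operator does
	}
	checkStreaming := semver.MustParse("3.0.4").Compare(v) <= 0
	for _, m := range members {
		if m.Role == "leader" {
			continue
		}
		if checkStreaming && m.State != "streaming" {
			return false
		}
		if m.Lag > 16*1024*1024 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(replicasReady("3.1.0", []member{{Name: "r1", Role: "replica", State: "streaming", Lag: 0}}))
}
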
|
||||
isUpgradeSuccess := true
|
||||
numberOfPods := len(pods)
|
||||
if allRunning && masterPod != nil {
|
||||
c.logger.Infof("healthy cluster ready to upgrade, current: %d desired: %d", c.currentMajorVersion, desiredVersion)
|
||||
if c.currentMajorVersion < desiredVersion {
|
||||
defer func() error {
|
||||
if err = c.criticalOperationLabel(pods, nil); err != nil {
|
||||
return fmt.Errorf("failed to remove critical-operation label: %s", err)
|
||||
}
|
||||
return nil
|
||||
}()
|
||||
val := "true"
|
||||
if err = c.criticalOperationLabel(pods, &val); err != nil {
|
||||
return fmt.Errorf("failed to assign critical-operation label: %s", err)
|
||||
}
|
||||
|
||||
podName := &spec.NamespacedName{Namespace: masterPod.Namespace, Name: masterPod.Name}
|
||||
c.logger.Infof("triggering major version upgrade on pod %s of %d pods", masterPod.Name, numberOfPods)
|
||||
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Major Version Upgrade", "starting major version upgrade on pod %s of %d pods", masterPod.Name, numberOfPods)
|
||||
upgradeCommand := fmt.Sprintf("set -o pipefail && /usr/bin/python3 /scripts/inplace_upgrade.py %d 2>&1 | tee last_upgrade.log", numberOfPods)
|
||||
|
||||
c.logger.Debugf("checking if the spilo image runs with root or non-root (check for user id=0)")
|
||||
c.logger.Debug("checking if the spilo image runs with root or non-root (check for user id=0)")
|
||||
resultIdCheck, errIdCheck := c.ExecCommand(podName, "/bin/bash", "-c", "/usr/bin/id -u")
|
||||
if errIdCheck != nil {
|
||||
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Major Version Upgrade", "checking user id to run upgrade from %d to %d FAILED: %v", c.currentMajorVersion, desiredVersion, errIdCheck)
|
||||
}
|
||||
|
||||
resultIdCheck = strings.TrimSuffix(resultIdCheck, "\n")
|
||||
var result string
|
||||
var result, scriptErrMsg string
|
||||
if resultIdCheck != "0" {
|
||||
c.logger.Infof("user id was identified as: %s, hence default user is non-root already", resultIdCheck)
|
||||
result, err = c.ExecCommand(podName, "/bin/bash", "-c", upgradeCommand)
|
||||
scriptErrMsg, _ = c.ExecCommand(podName, "/bin/bash", "-c", "tail -n 1 last_upgrade.log")
|
||||
} else {
|
||||
c.logger.Infof("user id was identified as: %s, using su to reach the postgres user", resultIdCheck)
|
||||
result, err = c.ExecCommand(podName, "/bin/su", "postgres", "-c", upgradeCommand)
|
||||
scriptErrMsg, _ = c.ExecCommand(podName, "/bin/bash", "-c", "tail -n 1 last_upgrade.log")
|
||||
}
|
||||
if err != nil {
|
||||
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Major Version Upgrade", "upgrade from %d to %d FAILED: %v", c.currentMajorVersion, desiredVersion, err)
|
||||
return err
|
||||
isUpgradeSuccess = false
|
||||
c.annotatePostgresResource(isUpgradeSuccess)
|
||||
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Major Version Upgrade", "upgrade from %d to %d FAILED: %v", c.currentMajorVersion, desiredVersion, scriptErrMsg)
|
||||
return fmt.Errorf(scriptErrMsg)
|
||||
}
|
||||
c.logger.Infof("upgrade action triggered and command completed: %s", result[:100])
|
||||
|
||||
c.annotatePostgresResource(isUpgradeSuccess)
|
||||
c.logger.Infof("upgrade action triggered and command completed: %s", result[:100])
|
||||
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Major Version Upgrade", "upgrade from %d to %d finished", c.currentMajorVersion, desiredVersion)
|
||||
}
|
||||
}
|
||||
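
The id -u probe above picks the invocation style: Spilo images still running as root wrap the upgrade script in su postgres, while non-root images run it directly. A trimmed sketch of that branch; execCommand is an illustrative stand-in for the operator's pod-exec helper:

package main

import (
	"fmt"
	"strings"
)

// execCommand is a stand-in for the operator's pod-exec helper (illustrative only).
func execCommand(cmd ...string) (string, error) {
	if strings.HasSuffix(cmd[len(cmd)-1], "id -u") {
		return "101\n", nil // pretend the image runs as a non-root user
	}
	return "upgrade finished", nil
}

func main() {
	upgrade := "set -o pipefail && /usr/bin/python3 /scripts/inplace_upgrade.py 3 2>&1 | tee last_upgrade.log"

	uid, _ := execCommand("/bin/bash", "-c", "/usr/bin/id -u")
	uid = strings.TrimSuffix(uid, "\n")

	var out string
	if uid != "0" {
		out, _ = execCommand("/bin/bash", "-c", upgrade) // already non-root, runs as postgres
	} else {
		out, _ = execCommand("/bin/su", "postgres", "-c", upgrade) // drop from root to postgres
	}
	fmt.Println(out)
}
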
|
|
|
|||
|
|
@ -59,7 +59,7 @@ func (c *Cluster) markRollingUpdateFlagForPod(pod *v1.Pod, msg string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
c.logger.Debugf("mark rolling update annotation for %s: reason %s", pod.Name, msg)
|
||||
c.logger.Infof("mark rolling update annotation for %s: reason %s", pod.Name, msg)
|
||||
flag := make(map[string]string)
|
||||
flag[rollingUpdatePodAnnotationKey] = strconv.FormatBool(true)
|
||||
|
||||
|
|
@ -110,7 +110,7 @@ func (c *Cluster) getRollingUpdateFlagFromPod(pod *v1.Pod) (flag bool) {
|
|||
}
|
||||
|
||||
func (c *Cluster) deletePods() error {
|
||||
c.logger.Debugln("deleting pods")
|
||||
c.logger.Debug("deleting pods")
|
||||
pods, err := c.listPods()
|
||||
if err != nil {
|
||||
return err
|
||||
|
|
@ -127,9 +127,9 @@ func (c *Cluster) deletePods() error {
|
|||
}
|
||||
}
|
||||
if len(pods) > 0 {
|
||||
c.logger.Debugln("pods have been deleted")
|
||||
c.logger.Debug("pods have been deleted")
|
||||
} else {
|
||||
c.logger.Debugln("no pods to delete")
|
||||
c.logger.Debug("no pods to delete")
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
@ -230,7 +230,7 @@ func (c *Cluster) MigrateMasterPod(podName spec.NamespacedName) error {
|
|||
return fmt.Errorf("could not get node %q: %v", oldMaster.Spec.NodeName, err)
|
||||
}
|
||||
if !eol {
|
||||
c.logger.Debugf("no action needed: master pod is already on a live node")
|
||||
c.logger.Debug("no action needed: master pod is already on a live node")
|
||||
return nil
|
||||
}
|
||||
|
||||
|
|
@ -280,11 +280,16 @@ func (c *Cluster) MigrateMasterPod(podName spec.NamespacedName) error {
|
|||
return fmt.Errorf("could not move pod: %v", err)
|
||||
}
|
||||
|
||||
scheduleSwitchover := false
|
||||
if !isInMaintenanceWindow(c.Spec.MaintenanceWindows) {
|
||||
c.logger.Infof("postponing switchover, not in maintenance window")
|
||||
scheduleSwitchover = true
|
||||
}
|
||||
err = retryutil.Retry(1*time.Minute, 5*time.Minute,
|
||||
func() (bool, error) {
|
||||
err := c.Switchover(oldMaster, masterCandidateName)
|
||||
err := c.Switchover(oldMaster, masterCandidateName, scheduleSwitchover)
|
||||
if err != nil {
|
||||
c.logger.Errorf("could not failover to pod %q: %v", masterCandidateName, err)
|
||||
c.logger.Errorf("could not switchover to pod %q: %v", masterCandidateName, err)
|
||||
return false, nil
|
||||
}
|
||||
return true, nil
|
||||
|
|
@ -445,7 +450,7 @@ func (c *Cluster) recreatePods(pods []v1.Pod, switchoverCandidates []spec.Namesp
|
|||
// do not recreate master now so it will keep the update flag and switchover will be retried on next sync
|
||||
return fmt.Errorf("skipping switchover: %v", err)
|
||||
}
|
||||
if err := c.Switchover(masterPod, masterCandidate); err != nil {
|
||||
if err := c.Switchover(masterPod, masterCandidate, false); err != nil {
|
||||
return fmt.Errorf("could not perform switch over: %v", err)
|
||||
}
|
||||
} else if newMasterPod == nil && len(replicas) == 0 {
|
||||
|
|
@ -480,6 +485,9 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
|
|||
if PostgresRole(member.Role) == SyncStandby {
|
||||
syncCandidates = append(syncCandidates, member)
|
||||
}
|
||||
if PostgresRole(member.Role) != Leader && PostgresRole(member.Role) != StandbyLeader && slices.Contains([]string{"running", "streaming", "in archive recovery"}, member.State) {
|
||||
candidates = append(candidates, member)
|
||||
}
|
||||
}
|
||||
|
||||
// if synchronous mode is enabled and no SyncStandy was found
|
||||
|
|
@ -489,6 +497,12 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
|
|||
return false, nil
|
||||
}
|
||||
|
||||
// retry also in asynchronous mode when no replica candidate was found
|
||||
if !c.Spec.Patroni.SynchronousMode && len(candidates) == 0 {
|
||||
c.logger.Warnf("no replica candidate found - retrying fetching cluster members")
|
||||
return false, nil
|
||||
}
|
||||
|
||||
return true, nil
|
||||
},
|
||||
)
|
||||
|
|
@ -502,24 +516,12 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
|
|||
return syncCandidates[i].Lag < syncCandidates[j].Lag
|
||||
})
|
||||
return spec.NamespacedName{Namespace: master.Namespace, Name: syncCandidates[0].Name}, nil
|
||||
} else {
|
||||
// in asynchronous mode find running replicas
|
||||
for _, member := range members {
|
||||
if PostgresRole(member.Role) == Leader || PostgresRole(member.Role) == StandbyLeader {
|
||||
continue
|
||||
}
|
||||
|
||||
if slices.Contains([]string{"running", "streaming", "in archive recovery"}, member.State) {
|
||||
candidates = append(candidates, member)
|
||||
}
|
||||
}
|
||||
|
||||
if len(candidates) > 0 {
|
||||
sort.Slice(candidates, func(i, j int) bool {
|
||||
return candidates[i].Lag < candidates[j].Lag
|
||||
})
|
||||
return spec.NamespacedName{Namespace: master.Namespace, Name: candidates[0].Name}, nil
|
||||
}
|
||||
}
|
||||
if len(candidates) > 0 {
|
||||
sort.Slice(candidates, func(i, j int) bool {
|
||||
return candidates[i].Lag < candidates[j].Lag
|
||||
})
|
||||
return spec.NamespacedName{Namespace: master.Namespace, Name: candidates[0].Name}, nil
|
||||
}
|
||||
|
||||
return spec.NamespacedName{}, fmt.Errorf("no switchover candidate found")
|
||||
|
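
After the refactor, candidate selection is a single pass: skip leaders, keep members in a healthy state, and take the lowest replication lag. A sketch under those rules; it uses sort.SliceStable so equal lag keeps the original member order (the operator itself uses sort.Slice):

package main

import (
	"fmt"
	"sort"
)

type patroniMember struct {
	Name  string
	Role  string
	State string
	Lag   uint64
}

// pickCandidate selects the healthy non-leader member with the lowest lag.
func pickCandidate(members []patroniMember) (string, error) {
	healthy := map[string]bool{"running": true, "streaming": true, "in archive recovery": true}
	candidates := make([]patroniMember, 0)
	for _, m := range members {
		if m.Role == "leader" || m.Role == "standby_leader" || !healthy[m.State] {
			continue
		}
		candidates = append(candidates, m)
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no switchover candidate found")
	}
	sort.SliceStable(candidates, func(i, j int) bool { return candidates[i].Lag < candidates[j].Lag })
	return candidates[0].Name, nil
}

func main() {
	name, _ := pickCandidate([]patroniMember{
		{Name: "c-0", Role: "leader", State: "running"},
		{Name: "c-1", Role: "replica", State: "streaming", Lag: 5},
		{Name: "c-2", Role: "replica", State: "running", Lag: 5},
	})
	fmt.Println(name) // c-1
}
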
|
|
|||
|
|
@ -62,7 +62,7 @@ func TestGetSwitchoverCandidate(t *testing.T) {
|
|||
expectedError: nil,
|
||||
},
|
||||
{
|
||||
subtest: "choose first replica when lag is equal evrywhere",
|
||||
subtest: "choose first replica when lag is equal everywhere",
|
||||
clusterJson: `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 1}, {"name": "acid-test-cluster-1", "role": "replica", "state": "streaming", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 1, "lag": 5}, {"name": "acid-test-cluster-2", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": 5}]}`,
|
||||
syncModeEnabled: false,
|
||||
expectedCandidate: spec.NamespacedName{Namespace: namespace, Name: "acid-test-cluster-1"},
|
||||
|
|
@ -73,7 +73,7 @@ func TestGetSwitchoverCandidate(t *testing.T) {
|
|||
clusterJson: `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 2}, {"name": "acid-test-cluster-1", "role": "replica", "state": "starting", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 2}]}`,
|
||||
syncModeEnabled: false,
|
||||
expectedCandidate: spec.NamespacedName{},
|
||||
expectedError: fmt.Errorf("no switchover candidate found"),
|
||||
expectedError: fmt.Errorf("failed to get Patroni cluster members: unexpected end of JSON input"),
|
||||
},
|
||||
{
|
||||
subtest: "replicas with different status",
|
||||
|
|
|
|||
|
|
@ -23,28 +23,49 @@ const (
|
|||
)
|
||||
|
||||
func (c *Cluster) listResources() error {
|
||||
if c.PodDisruptionBudget != nil {
|
||||
c.logger.Infof("found pod disruption budget: %q (uid: %q)", util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta), c.PodDisruptionBudget.UID)
|
||||
if c.PrimaryPodDisruptionBudget != nil {
|
||||
c.logger.Infof("found primary pod disruption budget: %q (uid: %q)", util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta), c.PrimaryPodDisruptionBudget.UID)
|
||||
}
|
||||
|
||||
if c.CriticalOpPodDisruptionBudget != nil {
|
||||
c.logger.Infof("found pod disruption budget for critical operations: %q (uid: %q)", util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta), c.CriticalOpPodDisruptionBudget.UID)
|
||||
|
||||
}
|
||||
|
||||
if c.Statefulset != nil {
|
||||
c.logger.Infof("found statefulset: %q (uid: %q)", util.NameFromMeta(c.Statefulset.ObjectMeta), c.Statefulset.UID)
|
||||
}
|
||||
|
||||
for _, obj := range c.Secrets {
|
||||
c.logger.Infof("found secret: %q (uid: %q) namesapce: %s", util.NameFromMeta(obj.ObjectMeta), obj.UID, obj.ObjectMeta.Namespace)
|
||||
for appId, stream := range c.Streams {
|
||||
c.logger.Infof("found stream: %q with application id %q (uid: %q)", util.NameFromMeta(stream.ObjectMeta), appId, stream.UID)
|
||||
}
|
||||
|
||||
if !c.patroniKubernetesUseConfigMaps() {
|
||||
for role, endpoint := range c.Endpoints {
|
||||
c.logger.Infof("found %s endpoint: %q (uid: %q)", role, util.NameFromMeta(endpoint.ObjectMeta), endpoint.UID)
|
||||
}
|
||||
if c.LogicalBackupJob != nil {
|
||||
c.logger.Infof("found logical backup job: %q (uid: %q)", util.NameFromMeta(c.LogicalBackupJob.ObjectMeta), c.LogicalBackupJob.UID)
|
||||
}
|
||||
|
||||
for uid, secret := range c.Secrets {
|
||||
c.logger.Infof("found secret: %q (uid: %q) namespace: %s", util.NameFromMeta(secret.ObjectMeta), uid, secret.ObjectMeta.Namespace)
|
||||
}
|
||||
|
||||
for role, service := range c.Services {
|
||||
c.logger.Infof("found %s service: %q (uid: %q)", role, util.NameFromMeta(service.ObjectMeta), service.UID)
|
||||
}
|
||||
|
||||
for role, endpoint := range c.Endpoints {
|
||||
c.logger.Infof("found %s endpoint: %q (uid: %q)", role, util.NameFromMeta(endpoint.ObjectMeta), endpoint.UID)
|
||||
}
|
||||
|
||||
if c.patroniKubernetesUseConfigMaps() {
|
||||
for suffix, configmap := range c.PatroniConfigMaps {
|
||||
c.logger.Infof("found %s Patroni config map: %q (uid: %q)", suffix, util.NameFromMeta(configmap.ObjectMeta), configmap.UID)
|
||||
}
|
||||
} else {
|
||||
for suffix, endpoint := range c.PatroniEndpoints {
|
||||
c.logger.Infof("found %s Patroni endpoint: %q (uid: %q)", suffix, util.NameFromMeta(endpoint.ObjectMeta), endpoint.UID)
|
||||
}
|
||||
}
|
||||
|
||||
pods, err := c.listPods()
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not get the list of pods: %v", err)
|
||||
|
|
@ -54,13 +75,17 @@ func (c *Cluster) listResources() error {
|
|||
c.logger.Infof("found pod: %q (uid: %q)", util.NameFromMeta(obj.ObjectMeta), obj.UID)
|
||||
}
|
||||
|
||||
pvcs, err := c.listPersistentVolumeClaims()
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not get the list of PVCs: %v", err)
|
||||
for uid, pvc := range c.VolumeClaims {
|
||||
c.logger.Infof("found persistent volume claim: %q (uid: %q)", util.NameFromMeta(pvc.ObjectMeta), uid)
|
||||
}
|
||||
|
||||
for _, obj := range pvcs {
|
||||
c.logger.Infof("found PVC: %q (uid: %q)", util.NameFromMeta(obj.ObjectMeta), obj.UID)
|
||||
for role, poolerObjs := range c.ConnectionPooler {
|
||||
if poolerObjs.Deployment != nil {
|
||||
c.logger.Infof("found %s pooler deployment: %q (uid: %q) ", role, util.NameFromMeta(poolerObjs.Deployment.ObjectMeta), poolerObjs.Deployment.UID)
|
||||
}
|
||||
if poolerObjs.Service != nil {
|
||||
c.logger.Infof("found %s pooler service: %q (uid: %q) ", role, util.NameFromMeta(poolerObjs.Service.ObjectMeta), poolerObjs.Service.UID)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
@ -142,8 +167,8 @@ func (c *Cluster) preScaleDown(newStatefulSet *appsv1.StatefulSet) error {
|
|||
return fmt.Errorf("pod %q does not belong to cluster", podName)
|
||||
}
|
||||
|
||||
if err := c.patroni.Switchover(&masterPod[0], masterCandidatePod.Name); err != nil {
|
||||
return fmt.Errorf("could not failover: %v", err)
|
||||
if err := c.patroni.Switchover(&masterPod[0], masterCandidatePod.Name, ""); err != nil {
|
||||
return fmt.Errorf("could not switchover: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
@ -162,7 +187,7 @@ func (c *Cluster) updateStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
|
|||
c.logger.Warningf("could not scale down: %v", err)
|
||||
}
|
||||
}
|
||||
c.logger.Debugf("updating statefulset")
|
||||
c.logger.Debug("updating statefulset")
|
||||
|
||||
patchData, err := specPatch(newStatefulSet.Spec)
|
||||
if err != nil {
|
||||
|
|
@ -193,7 +218,7 @@ func (c *Cluster) replaceStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
|
|||
}
|
||||
|
||||
statefulSetName := util.NameFromMeta(c.Statefulset.ObjectMeta)
|
||||
c.logger.Debugf("replacing statefulset")
|
||||
c.logger.Debug("replacing statefulset")
|
||||
|
||||
// Delete the current statefulset without deleting the pods
|
||||
deletePropagationPolicy := metav1.DeletePropagationOrphan
|
||||
|
|
@ -207,7 +232,7 @@ func (c *Cluster) replaceStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
|
|||
// make sure we clear the stored statefulset status if the subsequent create fails.
|
||||
c.Statefulset = nil
|
||||
// wait until the statefulset is truly deleted
|
||||
c.logger.Debugf("waiting for the statefulset to be deleted")
|
||||
c.logger.Debug("waiting for the statefulset to be deleted")
|
||||
|
||||
err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout,
|
||||
func() (bool, error) {
|
||||
|
|
@ -241,7 +266,7 @@ func (c *Cluster) replaceStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
|
|||
|
||||
func (c *Cluster) deleteStatefulSet() error {
|
||||
c.setProcessName("deleting statefulset")
|
||||
c.logger.Debugln("deleting statefulset")
|
||||
c.logger.Debug("deleting statefulset")
|
||||
if c.Statefulset == nil {
|
||||
c.logger.Debug("there is no statefulset in the cluster")
|
||||
return nil
|
||||
|
|
@ -263,10 +288,10 @@ func (c *Cluster) deleteStatefulSet() error {
|
|||
|
||||
if c.OpConfig.EnablePersistentVolumeClaimDeletion != nil && *c.OpConfig.EnablePersistentVolumeClaimDeletion {
|
||||
if err := c.deletePersistentVolumeClaims(); err != nil {
|
||||
return fmt.Errorf("could not delete PersistentVolumeClaims: %v", err)
|
||||
return fmt.Errorf("could not delete persistent volume claims: %v", err)
|
||||
}
|
||||
} else {
|
||||
c.logger.Info("not deleting PersistentVolumeClaims because disabled in configuration")
|
||||
c.logger.Info("not deleting persistent volume claims because disabled in configuration")
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
@ -309,7 +334,7 @@ func (c *Cluster) updateService(role PostgresRole, oldService *v1.Service, newSe
|
|||
}
|
||||
}
|
||||
|
||||
if changed, _ := c.compareAnnotations(oldService.Annotations, newService.Annotations); changed {
|
||||
if changed, _ := c.compareAnnotations(oldService.Annotations, newService.Annotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(newService.Annotations)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not form patch for service %q annotations: %v", oldService.Name, err)
|
||||
|
|
@ -324,7 +349,8 @@ func (c *Cluster) updateService(role PostgresRole, oldService *v1.Service, newSe
|
|||
}
|
||||
|
||||
func (c *Cluster) deleteService(role PostgresRole) error {
|
||||
c.logger.Debugf("deleting service %s", role)
|
||||
c.setProcessName("deleting service")
|
||||
c.logger.Debugf("deleting %s service", role)
|
||||
|
||||
if c.Services[role] == nil {
|
||||
c.logger.Debugf("No service for %s role was found, nothing to delete", role)
|
||||
|
|
@ -332,11 +358,10 @@ func (c *Cluster) deleteService(role PostgresRole) error {
|
|||
}
|
||||
|
||||
if err := c.KubeClient.Services(c.Services[role].Namespace).Delete(context.TODO(), c.Services[role].Name, c.deleteOptions); err != nil {
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
c.logger.Debugf("%s service has already been deleted", role)
|
||||
} else if err != nil {
|
||||
return err
|
||||
if !k8sutil.ResourceNotFound(err) {
|
||||
return fmt.Errorf("could not delete %s service: %v", role, err)
|
||||
}
|
||||
c.logger.Debugf("%s service has already been deleted", role)
|
||||
}
|
||||
|
||||
c.logger.Infof("%s service %q has been deleted", role, util.NameFromMeta(c.Services[role].ObjectMeta))
|
||||
|
|
@ -397,59 +422,128 @@ func (c *Cluster) generateEndpointSubsets(role PostgresRole) []v1.EndpointSubset
|
|||
return result
|
||||
}
|
||||
|
||||
func (c *Cluster) createPodDisruptionBudget() (*policyv1.PodDisruptionBudget, error) {
|
||||
podDisruptionBudgetSpec := c.generatePodDisruptionBudget()
|
||||
func (c *Cluster) createPrimaryPodDisruptionBudget() error {
|
||||
c.logger.Debug("creating primary pod disruption budget")
|
||||
if c.PrimaryPodDisruptionBudget != nil {
|
||||
c.logger.Warning("primary pod disruption budget already exists in the cluster")
|
||||
return nil
|
||||
}
|
||||
|
||||
podDisruptionBudgetSpec := c.generatePrimaryPodDisruptionBudget()
|
||||
podDisruptionBudget, err := c.KubeClient.
|
||||
PodDisruptionBudgets(podDisruptionBudgetSpec.Namespace).
|
||||
Create(context.TODO(), podDisruptionBudgetSpec, metav1.CreateOptions{})
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return err
|
||||
}
|
||||
c.PodDisruptionBudget = podDisruptionBudget
|
||||
c.logger.Infof("primary pod disruption budget %q has been successfully created", util.NameFromMeta(podDisruptionBudget.ObjectMeta))
|
||||
c.PrimaryPodDisruptionBudget = podDisruptionBudget
|
||||
|
||||
return podDisruptionBudget, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) updatePodDisruptionBudget(pdb *policyv1.PodDisruptionBudget) error {
|
||||
if c.PodDisruptionBudget == nil {
|
||||
return fmt.Errorf("there is no pod disruption budget in the cluster")
|
||||
func (c *Cluster) createCriticalOpPodDisruptionBudget() error {
|
||||
c.logger.Debug("creating pod disruption budget for critical operations")
|
||||
if c.CriticalOpPodDisruptionBudget != nil {
|
||||
c.logger.Warning("pod disruption budget for critical operations already exists in the cluster")
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := c.deletePodDisruptionBudget(); err != nil {
|
||||
return fmt.Errorf("could not delete pod disruption budget: %v", err)
|
||||
podDisruptionBudgetSpec := c.generateCriticalOpPodDisruptionBudget()
|
||||
podDisruptionBudget, err := c.KubeClient.
|
||||
PodDisruptionBudgets(podDisruptionBudgetSpec.Namespace).
|
||||
Create(context.TODO(), podDisruptionBudgetSpec, metav1.CreateOptions{})
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
c.logger.Infof("pod disruption budget for critical operations %q has been successfully created", util.NameFromMeta(podDisruptionBudget.ObjectMeta))
|
||||
c.CriticalOpPodDisruptionBudget = podDisruptionBudget
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) createPodDisruptionBudgets() error {
|
||||
errors := make([]string, 0)
|
||||
|
||||
err := c.createPrimaryPodDisruptionBudget()
|
||||
if err != nil {
|
||||
errors = append(errors, fmt.Sprintf("could not create primary pod disruption budget: %v", err))
|
||||
}
|
||||
|
||||
err = c.createCriticalOpPodDisruptionBudget()
|
||||
if err != nil {
|
||||
errors = append(errors, fmt.Sprintf("could not create pod disruption budget for critical operations: %v", err))
|
||||
}
|
||||
|
||||
if len(errors) > 0 {
|
||||
return fmt.Errorf("%v", strings.Join(errors, `', '`))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
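
createPodDisruptionBudgets and deletePodDisruptionBudgets collect per-budget failures and report them together rather than stopping at the first. A sketch of the same aggregation; on Go 1.20+, errors.Join is an alternative to joining strings that keeps the wrapped errors inspectable:

package main

import (
	"errors"
	"fmt"
)

func createPrimaryPDB() error    { return nil }
func createCriticalOpPDB() error { return errors.New("quota exceeded") }

func createAll() error {
	// Collect failures from each step and report them together.
	var errs []error
	if err := createPrimaryPDB(); err != nil {
		errs = append(errs, fmt.Errorf("could not create primary pod disruption budget: %w", err))
	}
	if err := createCriticalOpPDB(); err != nil {
		errs = append(errs, fmt.Errorf("could not create pod disruption budget for critical operations: %w", err))
	}
	return errors.Join(errs...) // nil when errs is empty
}

func main() {
	fmt.Println(createAll())
}
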
|
||||
func (c *Cluster) updatePrimaryPodDisruptionBudget(pdb *policyv1.PodDisruptionBudget) error {
|
||||
c.logger.Debug("updating primary pod disruption budget")
|
||||
if c.PrimaryPodDisruptionBudget == nil {
|
||||
return fmt.Errorf("there is no primary pod disruption budget in the cluster")
|
||||
}
|
||||
|
||||
if err := c.deletePrimaryPodDisruptionBudget(); err != nil {
|
||||
return fmt.Errorf("could not delete primary pod disruption budget: %v", err)
|
||||
}
|
||||
|
||||
newPdb, err := c.KubeClient.
|
||||
PodDisruptionBudgets(pdb.Namespace).
|
||||
Create(context.TODO(), pdb, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not create pod disruption budget: %v", err)
|
||||
return fmt.Errorf("could not create primary pod disruption budget: %v", err)
|
||||
}
|
||||
c.PodDisruptionBudget = newPdb
|
||||
c.PrimaryPodDisruptionBudget = newPdb
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePodDisruptionBudget() error {
|
||||
c.logger.Debug("deleting pod disruption budget")
|
||||
if c.PodDisruptionBudget == nil {
|
||||
c.logger.Debug("there is no pod disruption budget in the cluster")
|
||||
func (c *Cluster) updateCriticalOpPodDisruptionBudget(pdb *policyv1.PodDisruptionBudget) error {
|
||||
c.logger.Debug("updating pod disruption budget for critical operations")
|
||||
if c.CriticalOpPodDisruptionBudget == nil {
|
||||
return fmt.Errorf("there is no pod disruption budget for critical operations in the cluster")
|
||||
}
|
||||
|
||||
if err := c.deleteCriticalOpPodDisruptionBudget(); err != nil {
|
||||
return fmt.Errorf("could not delete pod disruption budget for critical operations: %v", err)
|
||||
}
|
||||
|
||||
newPdb, err := c.KubeClient.
|
||||
PodDisruptionBudgets(pdb.Namespace).
|
||||
Create(context.TODO(), pdb, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not create pod disruption budget for critical operations: %v", err)
|
||||
}
|
||||
c.CriticalOpPodDisruptionBudget = newPdb
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePrimaryPodDisruptionBudget() error {
|
||||
c.logger.Debug("deleting primary pod disruption budget")
|
||||
if c.PrimaryPodDisruptionBudget == nil {
|
||||
c.logger.Debug("there is no primary pod disruption budget in the cluster")
|
||||
return nil
|
||||
}
|
||||
|
||||
pdbName := util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta)
|
||||
pdbName := util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta)
|
||||
err := c.KubeClient.
|
||||
PodDisruptionBudgets(c.PodDisruptionBudget.Namespace).
|
||||
Delete(context.TODO(), c.PodDisruptionBudget.Name, c.deleteOptions)
|
||||
PodDisruptionBudgets(c.PrimaryPodDisruptionBudget.Namespace).
|
||||
Delete(context.TODO(), c.PrimaryPodDisruptionBudget.Name, c.deleteOptions)
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
c.logger.Debugf("PodDisruptionBudget %q has already been deleted", util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta))
|
||||
c.logger.Debugf("PodDisruptionBudget %q has already been deleted", util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta))
|
||||
} else if err != nil {
|
||||
return fmt.Errorf("could not delete PodDisruptionBudget: %v", err)
|
||||
return fmt.Errorf("could not delete primary pod disruption budget: %v", err)
|
||||
}
|
||||
|
||||
c.logger.Infof("pod disruption budget %q has been deleted", util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta))
|
||||
c.PodDisruptionBudget = nil
|
||||
c.logger.Infof("pod disruption budget %q has been deleted", util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta))
|
||||
c.PrimaryPodDisruptionBudget = nil
|
||||
|
||||
err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout,
|
||||
func() (bool, error) {
|
||||
|
|
@ -463,26 +557,80 @@ func (c *Cluster) deletePodDisruptionBudget() error {
|
|||
return false, err2
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not delete pod disruption budget: %v", err)
|
||||
return fmt.Errorf("could not delete primary pod disruption budget: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deleteCriticalOpPodDisruptionBudget() error {
|
||||
c.logger.Debug("deleting pod disruption budget for critical operations")
|
||||
if c.CriticalOpPodDisruptionBudget == nil {
|
||||
c.logger.Debug("there is no pod disruption budget for critical operations in the cluster")
|
||||
return nil
|
||||
}
|
||||
|
||||
pdbName := util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta)
|
||||
err := c.KubeClient.
|
||||
PodDisruptionBudgets(c.CriticalOpPodDisruptionBudget.Namespace).
|
||||
Delete(context.TODO(), c.CriticalOpPodDisruptionBudget.Name, c.deleteOptions)
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
c.logger.Debugf("PodDisruptionBudget %q has already been deleted", util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta))
|
||||
} else if err != nil {
|
||||
return fmt.Errorf("could not delete pod disruption budget for critical operations: %v", err)
|
||||
}
|
||||
|
||||
c.logger.Infof("pod disruption budget %q has been deleted", util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta))
|
||||
c.CriticalOpPodDisruptionBudget = nil
|
||||
|
||||
err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout,
|
||||
func() (bool, error) {
|
||||
_, err2 := c.KubeClient.PodDisruptionBudgets(pdbName.Namespace).Get(context.TODO(), pdbName.Name, metav1.GetOptions{})
|
||||
if err2 == nil {
|
||||
return false, nil
|
||||
}
|
||||
if k8sutil.ResourceNotFound(err2) {
|
||||
return true, nil
|
||||
}
|
||||
return false, err2
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not delete pod disruption budget for critical operations: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePodDisruptionBudgets() error {
|
||||
errors := make([]string, 0)
|
||||
|
||||
if err := c.deletePrimaryPodDisruptionBudget(); err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
|
||||
if err := c.deleteCriticalOpPodDisruptionBudget(); err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
|
||||
if len(errors) > 0 {
|
||||
return fmt.Errorf("%v", strings.Join(errors, `', '`))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deleteEndpoint(role PostgresRole) error {
|
||||
c.setProcessName("deleting endpoint")
|
||||
c.logger.Debugln("deleting endpoint")
|
||||
c.logger.Debugf("deleting %s endpoint", role)
|
||||
if c.Endpoints[role] == nil {
|
||||
c.logger.Debugf("there is no %s endpoint in the cluster", role)
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := c.KubeClient.Endpoints(c.Endpoints[role].Namespace).Delete(context.TODO(), c.Endpoints[role].Name, c.deleteOptions); err != nil {
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
c.logger.Debugf("%s endpoint has already been deleted", role)
|
||||
} else if err != nil {
|
||||
return fmt.Errorf("could not delete endpoint: %v", err)
|
||||
if !k8sutil.ResourceNotFound(err) {
|
||||
return fmt.Errorf("could not delete %s endpoint: %v", role, err)
|
||||
}
|
||||
c.logger.Debugf("%s endpoint has already been deleted", role)
|
||||
}
|
||||
|
||||
c.logger.Infof("%s endpoint %q has been deleted", role, util.NameFromMeta(c.Endpoints[role].ObjectMeta))
|
||||
|
|
@ -491,12 +639,83 @@ func (c *Cluster) deleteEndpoint(role PostgresRole) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePatroniResources() error {
|
||||
c.setProcessName("deleting Patroni resources")
|
||||
errors := make([]string, 0)
|
||||
|
||||
if err := c.deleteService(Patroni); err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
|
||||
for _, suffix := range patroniObjectSuffixes {
|
||||
if c.patroniKubernetesUseConfigMaps() {
|
||||
if err := c.deletePatroniConfigMap(suffix); err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
} else {
|
||||
if err := c.deletePatroniEndpoint(suffix); err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(errors) > 0 {
|
||||
return fmt.Errorf("%v", strings.Join(errors, `', '`))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePatroniConfigMap(suffix string) error {
|
||||
c.setProcessName("deleting Patroni config map")
|
||||
c.logger.Debugf("deleting %s Patroni config map", suffix)
|
||||
cm := c.PatroniConfigMaps[suffix]
|
||||
if cm == nil {
|
||||
c.logger.Debugf("there is no %s Patroni config map in the cluster", suffix)
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := c.KubeClient.ConfigMaps(cm.Namespace).Delete(context.TODO(), cm.Name, c.deleteOptions); err != nil {
|
||||
if !k8sutil.ResourceNotFound(err) {
|
||||
return fmt.Errorf("could not delete %s Patroni config map %q: %v", suffix, cm.Name, err)
|
||||
}
|
||||
c.logger.Debugf("%s Patroni config map has already been deleted", suffix)
|
||||
}
|
||||
|
||||
c.logger.Infof("%s Patroni config map %q has been deleted", suffix, util.NameFromMeta(cm.ObjectMeta))
|
||||
delete(c.PatroniConfigMaps, suffix)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePatroniEndpoint(suffix string) error {
|
||||
c.setProcessName("deleting Patroni endpoint")
|
||||
c.logger.Debugf("deleting %s Patroni endpoint", suffix)
|
||||
ep := c.PatroniEndpoints[suffix]
|
||||
if ep == nil {
|
||||
c.logger.Debugf("there is no %s Patroni endpoint in the cluster", suffix)
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := c.KubeClient.Endpoints(ep.Namespace).Delete(context.TODO(), ep.Name, c.deleteOptions); err != nil {
|
||||
if !k8sutil.ResourceNotFound(err) {
|
||||
return fmt.Errorf("could not delete %s Patroni endpoint %q: %v", suffix, ep.Name, err)
|
||||
}
|
||||
c.logger.Debugf("%s Patroni endpoint has already been deleted", suffix)
|
||||
}
|
||||
|
||||
c.logger.Infof("%s Patroni endpoint %q has been deleted", suffix, util.NameFromMeta(ep.ObjectMeta))
|
||||
delete(c.PatroniEndpoints, suffix)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deleteSecrets() error {
|
||||
c.setProcessName("deleting secrets")
|
||||
errors := make([]string, 0)
|
||||
|
||||
for uid, secret := range c.Secrets {
|
||||
err := c.deleteSecret(uid, *secret)
|
||||
for uid := range c.Secrets {
|
||||
err := c.deleteSecret(uid)
|
||||
if err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
|
|
@ -509,8 +728,9 @@ func (c *Cluster) deleteSecrets() error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deleteSecret(uid types.UID, secret v1.Secret) error {
|
||||
func (c *Cluster) deleteSecret(uid types.UID) error {
|
||||
c.setProcessName("deleting secret")
|
||||
secret := c.Secrets[uid]
|
||||
secretName := util.NameFromMeta(secret.ObjectMeta)
|
||||
c.logger.Debugf("deleting secret %q", secretName)
|
||||
err := c.KubeClient.Secrets(secret.Namespace).Delete(context.TODO(), secret.Name, c.deleteOptions)
|
||||
|
|
@ -538,12 +758,12 @@ func (c *Cluster) createLogicalBackupJob() (err error) {
|
|||
if err != nil {
|
||||
return fmt.Errorf("could not generate k8s cron job spec: %v", err)
|
||||
}
|
||||
c.logger.Debugf("Generated cronJobSpec: %v", logicalBackupJobSpec)
|
||||
|
||||
_, err = c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Create(context.TODO(), logicalBackupJobSpec, metav1.CreateOptions{})
|
||||
cronJob, err := c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Create(context.TODO(), logicalBackupJobSpec, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not create k8s cron job: %v", err)
|
||||
}
|
||||
c.LogicalBackupJob = cronJob
|
||||
|
||||
return nil
|
||||
}
|
||||
|
|
@ -557,7 +777,7 @@ func (c *Cluster) patchLogicalBackupJob(newJob *batchv1.CronJob) error {
|
|||
}
|
||||
|
||||
// update the backup job spec
|
||||
_, err = c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Patch(
|
||||
cronJob, err := c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Patch(
|
||||
context.TODO(),
|
||||
c.getLogicalBackupJobName(),
|
||||
types.MergePatchType,
|
||||
|
|
@ -567,20 +787,24 @@ func (c *Cluster) patchLogicalBackupJob(newJob *batchv1.CronJob) error {
|
|||
if err != nil {
|
||||
return fmt.Errorf("could not patch logical backup job: %v", err)
|
||||
}
|
||||
c.LogicalBackupJob = cronJob
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deleteLogicalBackupJob() error {
|
||||
|
||||
if c.LogicalBackupJob == nil {
|
||||
return nil
|
||||
}
|
||||
c.logger.Info("removing the logical backup job")
|
||||
|
||||
err := c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Delete(context.TODO(), c.getLogicalBackupJobName(), c.deleteOptions)
|
||||
err := c.KubeClient.CronJobsGetter.CronJobs(c.LogicalBackupJob.Namespace).Delete(context.TODO(), c.getLogicalBackupJobName(), c.deleteOptions)
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
c.logger.Debugf("logical backup cron job %q has already been deleted", c.getLogicalBackupJobName())
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
c.LogicalBackupJob = nil
|
||||
|
||||
return nil
|
||||
}
|
||||
|
|
@ -610,7 +834,12 @@ func (c *Cluster) GetStatefulSet() *appsv1.StatefulSet {
|
|||
return c.Statefulset
|
||||
}
|
||||
|
||||
// GetPodDisruptionBudget returns cluster's kubernetes PodDisruptionBudget
|
||||
func (c *Cluster) GetPodDisruptionBudget() *policyv1.PodDisruptionBudget {
|
||||
return c.PodDisruptionBudget
|
||||
// GetPrimaryPodDisruptionBudget returns cluster's primary kubernetes PodDisruptionBudget
|
||||
func (c *Cluster) GetPrimaryPodDisruptionBudget() *policyv1.PodDisruptionBudget {
|
||||
return c.PrimaryPodDisruptionBudget
|
||||
}
|
||||
|
||||
// GetCriticalOpPodDisruptionBudget returns cluster's kubernetes PodDisruptionBudget for critical operations
|
||||
func (c *Cluster) GetCriticalOpPodDisruptionBudget() *policyv1.PodDisruptionBudget {
|
||||
return c.CriticalOpPodDisruptionBudget
|
||||
}
|
||||
|
|
|
|||
|
|
@ -29,51 +29,48 @@ func (c *Cluster) createStreams(appId string) (*zalandov1.FabricEventStream, err
|
|||
return streamCRD, nil
|
||||
}
|
||||
|
||||
func (c *Cluster) updateStreams(newEventStreams *zalandov1.FabricEventStream) error {
|
||||
func (c *Cluster) updateStreams(newEventStreams *zalandov1.FabricEventStream) (patchedStream *zalandov1.FabricEventStream, err error) {
|
||||
c.setProcessName("updating event streams")
|
||||
|
||||
patch, err := json.Marshal(newEventStreams)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not marshal new event stream CRD %q: %v", newEventStreams.Name, err)
|
||||
return nil, fmt.Errorf("could not marshal new event stream CRD %q: %v", newEventStreams.Name, err)
|
||||
}
|
||||
if _, err := c.KubeClient.FabricEventStreams(newEventStreams.Namespace).Patch(
|
||||
if patchedStream, err = c.KubeClient.FabricEventStreams(newEventStreams.Namespace).Patch(
|
||||
context.TODO(), newEventStreams.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return nil
|
||||
return patchedStream, nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deleteStream(stream *zalandov1.FabricEventStream) error {
|
||||
func (c *Cluster) deleteStream(appId string) error {
|
||||
c.setProcessName("deleting event stream")
|
||||
c.logger.Debugf("deleting event stream with applicationId %s", appId)
|
||||
|
||||
err := c.KubeClient.FabricEventStreams(stream.Namespace).Delete(context.TODO(), stream.Name, metav1.DeleteOptions{})
|
||||
err := c.KubeClient.FabricEventStreams(c.Streams[appId].Namespace).Delete(context.TODO(), c.Streams[appId].Name, metav1.DeleteOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not delete event stream %q: %v", stream.Name, err)
|
||||
return fmt.Errorf("could not delete event stream %q with applicationId %s: %v", c.Streams[appId].Name, appId, err)
|
||||
}
|
||||
c.logger.Infof("event stream %q with applicationId %s has been successfully deleted", c.Streams[appId].Name, appId)
|
||||
delete(c.Streams, appId)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deleteStreams() error {
|
||||
c.setProcessName("deleting event streams")
|
||||
|
||||
// check if stream CRD is installed before trying a delete
|
||||
_, err := c.KubeClient.CustomResourceDefinitions().Get(context.TODO(), constants.EventStreamCRDName, metav1.GetOptions{})
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
return nil
|
||||
}
|
||||
|
||||
c.setProcessName("deleting event streams")
|
||||
errors := make([]string, 0)
|
||||
listOptions := metav1.ListOptions{
|
||||
LabelSelector: c.labelsSet(true).String(),
|
||||
}
|
||||
streams, err := c.KubeClient.FabricEventStreams(c.Namespace).List(context.TODO(), listOptions)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not list of FabricEventStreams: %v", err)
|
||||
}
|
||||
for _, stream := range streams.Items {
|
||||
err := c.deleteStream(&stream)
|
||||
|
||||
for appId := range c.Streams {
|
||||
err := c.deleteStream(appId)
|
||||
if err != nil {
|
||||
errors = append(errors, fmt.Sprintf("could not delete event stream %q: %v", stream.Name, err))
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -84,7 +81,7 @@ func (c *Cluster) deleteStreams() error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func gatherApplicationIds(streams []acidv1.Stream) []string {
|
||||
func getDistinctApplicationIds(streams []acidv1.Stream) []string {
|
||||
appIds := make([]string, 0)
|
||||
for _, stream := range streams {
|
||||
if !util.SliceContains(appIds, stream.ApplicationId) {
|
||||
|
|
@ -117,10 +114,10 @@ func (c *Cluster) syncPublication(dbName string, databaseSlotsList map[string]za
|
|||
}
|
||||
|
||||
for slotName, slotAndPublication := range databaseSlotsList {
|
||||
tables := slotAndPublication.Publication
|
||||
tableNames := make([]string, len(tables))
|
||||
newTables := slotAndPublication.Publication
|
||||
tableNames := make([]string, len(newTables))
|
||||
i := 0
|
||||
for t := range tables {
|
||||
for t := range newTables {
|
||||
tableName, schemaName := getTableSchema(t)
|
||||
tableNames[i] = fmt.Sprintf("%s.%s", schemaName, tableName)
|
||||
i++
|
||||
|
|
@ -129,16 +126,23 @@ func (c *Cluster) syncPublication(dbName string, databaseSlotsList map[string]za
|
|||
tableList := strings.Join(tableNames, ", ")
|
||||
|
||||
currentTables, exists := currentPublications[slotName]
|
||||
// if newTables is empty it means its definition was removed from the streams section
|
||||
// but when the slot is defined in the manifest we should sync publications, too
|
||||
// by reusing the current tables we make sure the publication is not emptied
|
||||
if len(newTables) == 0 {
|
||||
tableList = currentTables
|
||||
}
|
||||
if !exists {
|
||||
createPublications[slotName] = tableList
|
||||
} else if currentTables != tableList {
|
||||
alterPublications[slotName] = tableList
|
||||
} else {
|
||||
(*slotsToSync)[slotName] = slotAndPublication.Slot
|
||||
}
|
||||
(*slotsToSync)[slotName] = slotAndPublication.Slot
|
||||
}
|
||||
|
||||
// check if there is any deletion
|
||||
for slotName, _ := range currentPublications {
|
||||
for slotName := range currentPublications {
|
||||
if _, exists := databaseSlotsList[slotName]; !exists {
|
||||
deletePublications = append(deletePublications, slotName)
|
||||
}
|
||||
|
|
@ -148,21 +152,31 @@ func (c *Cluster) syncPublication(dbName string, databaseSlotsList map[string]za
|
|||
return nil
|
||||
}
|
||||
|
||||
errors := make([]string, 0)
|
||||
for publicationName, tables := range createPublications {
|
||||
if err = c.executeCreatePublication(publicationName, tables); err != nil {
|
||||
return fmt.Errorf("creation of publication %q failed: %v", publicationName, err)
|
||||
errors = append(errors, fmt.Sprintf("creation of publication %q failed: %v", publicationName, err))
|
||||
continue
|
||||
}
|
||||
(*slotsToSync)[publicationName] = databaseSlotsList[publicationName].Slot
|
||||
}
|
||||
for publicationName, tables := range alterPublications {
|
||||
if err = c.executeAlterPublication(publicationName, tables); err != nil {
|
||||
return fmt.Errorf("update of publication %q failed: %v", publicationName, err)
|
||||
errors = append(errors, fmt.Sprintf("update of publication %q failed: %v", publicationName, err))
|
||||
continue
|
||||
}
|
||||
(*slotsToSync)[publicationName] = databaseSlotsList[publicationName].Slot
|
||||
}
|
||||
for _, publicationName := range deletePublications {
|
||||
(*slotsToSync)[publicationName] = nil
|
||||
if err = c.executeDropPublication(publicationName); err != nil {
|
||||
return fmt.Errorf("deletion of publication %q failed: %v", publicationName, err)
|
||||
errors = append(errors, fmt.Sprintf("deletion of publication %q failed: %v", publicationName, err))
|
||||
continue
|
||||
}
|
||||
(*slotsToSync)[publicationName] = nil
|
||||
}
|
||||
|
||||
if len(errors) > 0 {
|
||||
return fmt.Errorf("%v", strings.Join(errors, `', '`))
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
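
Stripped of the SQL, syncPublication is a three-way diff between the publications found in the database and the ones derived from the manifest: missing ones are created, differing ones altered, orphaned ones dropped. A minimal sketch of that bucketing (names illustrative):

package main

import "fmt"

// diffPublications buckets desired publications against current ones,
// mirroring the create/alter/drop decision in syncPublication.
func diffPublications(current, desired map[string]string) (create, alter map[string]string, drop []string) {
	create, alter = map[string]string{}, map[string]string{}
	for name, tables := range desired {
		if cur, ok := current[name]; !ok {
			create[name] = tables
		} else if cur != tables {
			alter[name] = tables
		}
	}
	for name := range current {
		if _, ok := desired[name]; !ok {
			drop = append(drop, name)
		}
	}
	return create, alter, drop
}

func main() {
	create, alter, drop := diffPublications(
		map[string]string{"a": "public.t1", "b": "public.t2"},
		map[string]string{"a": "public.t1, public.t3", "c": "public.t4"},
	)
	fmt.Println(create, alter, drop) // map[c:public.t4] map[a:public.t1, public.t3] [b]
}
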
@ -170,16 +184,25 @@ func (c *Cluster) syncPublication(dbName string, databaseSlotsList map[string]za
|
|||
|
||||
func (c *Cluster) generateFabricEventStream(appId string) *zalandov1.FabricEventStream {
|
||||
eventStreams := make([]zalandov1.EventStream, 0)
|
||||
resourceAnnotations := map[string]string{}
|
||||
var err, err2 error
|
||||
|
||||
for _, stream := range c.Spec.Streams {
|
||||
if stream.ApplicationId != appId {
|
||||
continue
|
||||
}
|
||||
|
||||
err = setResourceAnnotation(&resourceAnnotations, stream.CPU, constants.EventStreamCpuAnnotationKey)
|
||||
err2 = setResourceAnnotation(&resourceAnnotations, stream.Memory, constants.EventStreamMemoryAnnotationKey)
|
||||
if err != nil || err2 != nil {
|
||||
c.logger.Warningf("could not set resource annotation for event stream: %v", err)
|
||||
}
|
||||
|
||||
for tableName, table := range stream.Tables {
|
||||
streamSource := c.getEventStreamSource(stream, tableName, table.IdColumn)
|
||||
streamFlow := getEventStreamFlow(stream, table.PayloadColumn)
|
||||
streamFlow := getEventStreamFlow(table.PayloadColumn)
|
||||
streamSink := getEventStreamSink(stream, table.EventType)
|
||||
streamRecovery := getEventStreamRecovery(stream, table.RecoveryEventType, table.EventType)
|
||||
streamRecovery := getEventStreamRecovery(stream, table.RecoveryEventType, table.EventType, table.IgnoreRecovery)
|
||||
|
||||
eventStreams = append(eventStreams, zalandov1.EventStream{
|
||||
EventStreamFlow: streamFlow,
|
||||
|
|
@ -196,11 +219,10 @@ func (c *Cluster) generateFabricEventStream(appId string) *zalandov1.FabricEvent
|
|||
},
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
// max length for cluster name is 58 so we can only add 5 more characters / numbers
|
||||
Name: fmt.Sprintf("%s-%s", c.Name, strings.ToLower(util.RandomPassword(5))),
|
||||
Namespace: c.Namespace,
|
||||
Labels: c.labelsSet(true),
|
||||
Annotations: c.AnnotationsToPropagate(c.annotationsSet(nil)),
|
||||
// make cluster StatefulSet the owner (like with connection pooler objects)
|
||||
Name: fmt.Sprintf("%s-%s", c.Name, strings.ToLower(util.RandomPassword(5))),
|
||||
Namespace: c.Namespace,
|
||||
Labels: c.labelsSet(true),
|
||||
Annotations: c.AnnotationsToPropagate(c.annotationsSet(resourceAnnotations)),
|
||||
OwnerReferences: c.ownerReferences(),
|
||||
},
|
||||
Spec: zalandov1.FabricEventStreamSpec{
|
||||
|
|
@ -210,6 +232,27 @@ func (c *Cluster) generateFabricEventStream(appId string) *zalandov1.FabricEvent
|
|||
}
|
||||
}
|
||||
|
||||
func setResourceAnnotation(annotations *map[string]string, resource *string, key string) error {
|
||||
var (
|
||||
isSmaller bool
|
||||
err error
|
||||
)
|
||||
if resource != nil {
|
||||
currentValue, exists := (*annotations)[key]
|
||||
if exists {
|
||||
isSmaller, err = util.IsSmallerQuantity(currentValue, *resource)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not compare resource in %q annotation: %v", key, err)
|
||||
}
|
||||
}
|
||||
if isSmaller || !exists {
|
||||
(*annotations)[key] = *resource
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
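
setResourceAnnotation keeps the largest quantity seen across all streams sharing an application id, so the generated resource asks for enough for every stream it hosts. A sketch of that max-keeping update, assuming k8s.io/apimachinery/pkg/api/resource; the annotation key below is illustrative:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// keepLarger stores value under key unless an equal-or-larger quantity is already present.
func keepLarger(annotations map[string]string, key, value string) error {
	newQ, err := resource.ParseQuantity(value)
	if err != nil {
		return fmt.Errorf("could not parse %q: %v", value, err)
	}
	if current, ok := annotations[key]; ok {
		curQ, err := resource.ParseQuantity(current)
		if err != nil {
			return fmt.Errorf("could not parse %q: %v", current, err)
		}
		if curQ.Cmp(newQ) >= 0 {
			return nil // existing value is not smaller; keep it
		}
	}
	annotations[key] = value
	return nil
}

func main() {
	ann := map[string]string{}
	_ = keepLarger(ann, "fes.zalando.org/FES_CPU", "250m") // key name is illustrative
	_ = keepLarger(ann, "fes.zalando.org/FES_CPU", "100m")
	fmt.Println(ann) // map[fes.zalando.org/FES_CPU:250m]
}
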
|
||||
func (c *Cluster) getEventStreamSource(stream acidv1.Stream, tableName string, idColumn *string) zalandov1.EventStreamSource {
|
||||
table, schema := getTableSchema(tableName)
|
||||
streamFilter := stream.Filter[tableName]
|
||||
|
|
@ -225,7 +268,7 @@ func (c *Cluster) getEventStreamSource(stream acidv1.Stream, tableName string, i
|
|||
}
|
||||
}
|
||||
|
||||
func getEventStreamFlow(stream acidv1.Stream, payloadColumn *string) zalandov1.EventStreamFlow {
|
||||
func getEventStreamFlow(payloadColumn *string) zalandov1.EventStreamFlow {
|
||||
return zalandov1.EventStreamFlow{
|
||||
Type: constants.EventStreamFlowPgGenericType,
|
||||
PayloadColumn: payloadColumn,
|
||||
|
|
@ -240,7 +283,7 @@ func getEventStreamSink(stream acidv1.Stream, eventType string) zalandov1.EventS
|
|||
}
|
||||
}
|
||||
|
||||
func getEventStreamRecovery(stream acidv1.Stream, recoveryEventType, eventType string) zalandov1.EventStreamRecovery {
|
||||
func getEventStreamRecovery(stream acidv1.Stream, recoveryEventType, eventType string, ignoreRecovery *bool) zalandov1.EventStreamRecovery {
|
||||
if (stream.EnableRecovery != nil && !*stream.EnableRecovery) ||
|
||||
(stream.EnableRecovery == nil && recoveryEventType == "") {
|
||||
return zalandov1.EventStreamRecovery{
|
||||
|
|
@ -248,6 +291,12 @@ func getEventStreamRecovery(stream acidv1.Stream, recoveryEventType, eventType s
|
|||
}
|
||||
}
|
||||
|
||||
if ignoreRecovery != nil && *ignoreRecovery {
|
||||
return zalandov1.EventStreamRecovery{
|
||||
Type: constants.EventStreamRecoveryIgnoreType,
|
||||
}
|
||||
}
|
||||
|
||||
if stream.EnableRecovery != nil && *stream.EnableRecovery && recoveryEventType == "" {
|
||||
recoveryEventType = fmt.Sprintf("%s-%s", eventType, constants.EventStreamRecoverySuffix)
|
||||
}
|
||||
|
|
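
getEventStreamRecovery now distinguishes three outcomes with a fixed precedence: recovery disabled entirely, per-table ignore, or a dead-letter queue with a derived event type. A sketch of just that decision; the constant names and the dead-letter suffix are illustrative, not the operator's actual constants:

package main

import "fmt"

const (
	recoveryNone   = "NONE"
	recoveryIgnore = "IGNORE"
	recoveryDLQ    = "DLQ"
)

// recoveryType mirrors the precedence above: explicit opt-out first,
// then per-table ignore, then DLQ with a derived event type when none is given.
func recoveryType(enableRecovery *bool, recoveryEventType, eventType string, ignoreRecovery *bool) (string, string) {
	if (enableRecovery != nil && !*enableRecovery) || (enableRecovery == nil && recoveryEventType == "") {
		return recoveryNone, ""
	}
	if ignoreRecovery != nil && *ignoreRecovery {
		return recoveryIgnore, ""
	}
	if enableRecovery != nil && *enableRecovery && recoveryEventType == "" {
		recoveryEventType = fmt.Sprintf("%s-dead-letter", eventType) // suffix is an assumption
	}
	return recoveryDLQ, recoveryEventType
}

func main() {
	ignore := true
	typ, _ := recoveryType(nil, "orders-dlq", "orders", &ignore)
	fmt.Println(typ) // IGNORE
}
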
@ -303,20 +352,12 @@ func (c *Cluster) syncStreams() error {
|
|||
|
||||
_, err := c.KubeClient.CustomResourceDefinitions().Get(context.TODO(), constants.EventStreamCRDName, metav1.GetOptions{})
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
c.logger.Debugf("event stream CRD not installed, skipping")
|
||||
c.logger.Debug("event stream CRD not installed, skipping")
|
||||
return nil
|
||||
}
|
||||
|
||||
databaseSlots := make(map[string]map[string]zalandov1.Slot)
|
||||
slotsToSync := make(map[string]map[string]string)
|
||||
requiredPatroniConfig := c.Spec.Patroni
|
||||
|
||||
if len(requiredPatroniConfig.Slots) > 0 {
|
||||
for slotName, slotConfig := range requiredPatroniConfig.Slots {
|
||||
slotsToSync[slotName] = slotConfig
|
||||
}
|
||||
}
|
||||
|
||||
// create map with every database and an empty slot definition
|
||||
// we need it to detect removal of streams from databases
|
||||
if err := c.initDbConn(); err != nil {
|
||||
return fmt.Errorf("could not init database connection")
|
||||
}
|
||||
|
|
@ -329,14 +370,29 @@ func (c *Cluster) syncStreams() error {
|
|||
if err != nil {
|
||||
return fmt.Errorf("could not get list of databases: %v", err)
|
||||
}
|
||||
// get database names with an empty slot list, except template0 and template1
|
||||
for dbName, _ := range listDatabases {
|
||||
databaseSlots := make(map[string]map[string]zalandov1.Slot)
|
||||
for dbName := range listDatabases {
|
||||
if dbName != "template0" && dbName != "template1" {
|
||||
databaseSlots[dbName] = map[string]zalandov1.Slot{}
|
||||
}
|
||||
}
|
||||
|
||||
// gather list of required slots and publications, group by database
|
||||
// need to take explicitly defined slots into account when syncing Patroni config
|
||||
slotsToSync := make(map[string]map[string]string)
|
||||
requiredPatroniConfig := c.Spec.Patroni
|
||||
if len(requiredPatroniConfig.Slots) > 0 {
|
||||
for slotName, slotConfig := range requiredPatroniConfig.Slots {
|
||||
slotsToSync[slotName] = slotConfig
|
||||
if _, exists := databaseSlots[slotConfig["database"]]; exists {
|
||||
databaseSlots[slotConfig["database"]][slotName] = zalandov1.Slot{
|
||||
Slot: slotConfig,
|
||||
Publication: make(map[string]acidv1.StreamTable),
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// get list of required slots and publications, group by database
|
||||
for _, stream := range c.Spec.Streams {
|
||||
if _, exists := databaseSlots[stream.Database]; !exists {
|
||||
c.logger.Warningf("database %q does not exist in the cluster", stream.Database)
|
||||
|
|
@ -348,13 +404,13 @@ func (c *Cluster) syncStreams() error {
            "type": "logical",
        }
        slotName := getSlotName(stream.Database, stream.ApplicationId)
        if _, exists := databaseSlots[stream.Database][slotName]; !exists {
        slotAndPublication, exists := databaseSlots[stream.Database][slotName]
        if !exists {
            databaseSlots[stream.Database][slotName] = zalandov1.Slot{
                Slot:        slot,
                Publication: stream.Tables,
            }
        } else {
            slotAndPublication := databaseSlots[stream.Database][slotName]
            streamTables := slotAndPublication.Publication
            for tableName, table := range stream.Tables {
                if _, exists := streamTables[tableName]; !exists {

@ -371,7 +427,7 @@ func (c *Cluster) syncStreams() error {
    for dbName, databaseSlotsList := range databaseSlots {
        err := c.syncPublication(dbName, databaseSlotsList, &slotsToSync)
        if err != nil {
            c.logger.Warningf("could not sync publications in database %q: %v", dbName, err)
            c.logger.Warningf("could not sync all publications in database %q: %v", dbName, err)
            continue
        }
    }
@ -390,76 +446,138 @@ func (c *Cluster) syncStreams() error {
|
|||
}
|
||||
|
||||
// finally sync stream CRDs
|
||||
err = c.createOrUpdateStreams()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) createOrUpdateStreams() error {
|
||||
|
||||
// fetch different application IDs from streams section
|
||||
// get distinct application IDs from streams section
|
||||
// there will be a separate event stream resource for each ID
|
||||
appIds := gatherApplicationIds(c.Spec.Streams)
|
||||
|
||||
// list all existing stream CRDs
|
||||
listOptions := metav1.ListOptions{
|
||||
LabelSelector: c.labelsSet(true).String(),
|
||||
}
|
||||
streams, err := c.KubeClient.FabricEventStreams(c.Namespace).List(context.TODO(), listOptions)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not list of FabricEventStreams: %v", err)
|
||||
}
|
||||
|
||||
appIds := getDistinctApplicationIds(c.Spec.Streams)
|
||||
for _, appId := range appIds {
|
||||
streamExists := false
|
||||
|
||||
// update stream when it exists and EventStreams array differs
|
||||
for _, stream := range streams.Items {
|
||||
if appId == stream.Spec.ApplicationId {
|
||||
streamExists = true
|
||||
desiredStreams := c.generateFabricEventStream(appId)
|
||||
if match, reason := sameStreams(stream.Spec.EventStreams, desiredStreams.Spec.EventStreams); !match {
|
||||
c.logger.Debugf("updating event streams: %s", reason)
|
||||
desiredStreams.ObjectMeta = stream.ObjectMeta
|
||||
err = c.updateStreams(desiredStreams)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed updating event stream %s: %v", stream.Name, err)
|
||||
}
|
||||
c.logger.Infof("event stream %q has been successfully updated", stream.Name)
|
||||
}
|
||||
continue
|
||||
if hasSlotsInSync(appId, databaseSlots, slotsToSync) {
|
||||
if err = c.syncStream(appId); err != nil {
|
||||
c.logger.Warningf("could not sync event streams with applicationId %s: %v", appId, err)
|
||||
}
|
||||
}
|
||||
|
||||
if !streamExists {
|
||||
c.logger.Infof("event streams with applicationId %s do not exist, create it", appId)
|
||||
streamCRD, err := c.createStreams(appId)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed creating event streams with applicationId %s: %v", appId, err)
|
||||
}
|
||||
c.logger.Infof("event streams %q have been successfully created", streamCRD.Name)
|
||||
} else {
|
||||
c.logger.Warningf("database replication slots %#v for streams with applicationId %s not in sync, skipping event stream sync", slotsToSync, appId)
|
||||
}
|
||||
}
|
||||
|
||||
// check if there is any deletion
|
||||
for _, stream := range streams.Items {
|
||||
if !util.SliceContains(appIds, stream.Spec.ApplicationId) {
|
||||
c.logger.Infof("event streams with applicationId %s do not exist in the manifest, delete it", stream.Spec.ApplicationId)
|
||||
err := c.deleteStream(&stream)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed deleting event streams with applicationId %s: %v", stream.Spec.ApplicationId, err)
|
||||
}
|
||||
c.logger.Infof("event streams %q have been successfully deleted", stream.Name)
|
||||
}
|
||||
if err = c.cleanupRemovedStreams(appIds); err != nil {
|
||||
return fmt.Errorf("%v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func sameStreams(curEventStreams, newEventStreams []zalandov1.EventStream) (match bool, reason string) {
|
||||
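// hasSlotsInSync reports whether every replication slot expected for the given
// applicationId is present and non-nil in the map of slots that were actually synced.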
func hasSlotsInSync(appId string, databaseSlots map[string]map[string]zalandov1.Slot, slotsToSync map[string]map[string]string) bool {
    allSlotsInSync := true
    for dbName, slots := range databaseSlots {
        for slotName := range slots {
            if slotName == getSlotName(dbName, appId) {
                if slot, exists := slotsToSync[slotName]; !exists || slot == nil {
                    allSlotsInSync = false
                    continue
                }
            }
        }
    }

    return allSlotsInSync
}
|
||||
|
||||
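// syncStream creates or updates the FabricEventStream resource for a single
// applicationId and keeps the cluster's Streams map up to date.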
func (c *Cluster) syncStream(appId string) error {
|
||||
var (
|
||||
streams *zalandov1.FabricEventStreamList
|
||||
err error
|
||||
)
|
||||
c.setProcessName("syncing stream with applicationId %s", appId)
|
||||
c.logger.Debugf("syncing stream with applicationId %s", appId)
|
||||
|
||||
listOptions := metav1.ListOptions{
|
||||
LabelSelector: c.labelsSet(false).String(),
|
||||
}
|
||||
streams, err = c.KubeClient.FabricEventStreams(c.Namespace).List(context.TODO(), listOptions)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not list of FabricEventStreams for applicationId %s: %v", appId, err)
|
||||
}
|
||||
|
||||
streamExists := false
|
||||
for _, stream := range streams.Items {
|
||||
if stream.Spec.ApplicationId != appId {
|
||||
continue
|
||||
}
|
||||
streamExists = true
|
||||
c.Streams[appId] = &stream
|
||||
desiredStreams := c.generateFabricEventStream(appId)
|
||||
if !reflect.DeepEqual(stream.ObjectMeta.OwnerReferences, desiredStreams.ObjectMeta.OwnerReferences) {
|
||||
c.logger.Infof("owner references of event streams with applicationId %s do not match the current ones", appId)
|
||||
stream.ObjectMeta.OwnerReferences = desiredStreams.ObjectMeta.OwnerReferences
|
||||
c.setProcessName("updating event streams with applicationId %s", appId)
|
||||
updatedStream, err := c.KubeClient.FabricEventStreams(stream.Namespace).Update(context.TODO(), &stream, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not update event streams with applicationId %s: %v", appId, err)
|
||||
}
|
||||
c.Streams[appId] = updatedStream
|
||||
}
|
||||
if match, reason := c.compareStreams(&stream, desiredStreams); !match {
|
||||
c.logger.Infof("updating event streams with applicationId %s: %s", appId, reason)
|
||||
// make sure to keep the old name with randomly generated suffix
|
||||
desiredStreams.ObjectMeta.Name = stream.ObjectMeta.Name
|
||||
updatedStream, err := c.updateStreams(desiredStreams)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed updating event streams %s with applicationId %s: %v", stream.Name, appId, err)
|
||||
}
|
||||
c.Streams[appId] = updatedStream
|
||||
c.logger.Infof("event streams %q with applicationId %s have been successfully updated", updatedStream.Name, appId)
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
if !streamExists {
|
||||
c.logger.Infof("event streams with applicationId %s do not exist, create it", appId)
|
||||
createdStream, err := c.createStreams(appId)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed creating event streams with applicationId %s: %v", appId, err)
|
||||
}
|
||||
c.logger.Infof("event streams %q have been successfully created", createdStream.Name)
|
||||
c.Streams[appId] = createdStream
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
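// compareStreams checks annotations, labels and the EventStreams spec of two
// FabricEventStream resources and returns a reason when they differ.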
func (c *Cluster) compareStreams(curEventStreams, newEventStreams *zalandov1.FabricEventStream) (match bool, reason string) {
|
||||
reasons := make([]string, 0)
|
||||
desiredAnnotations := make(map[string]string)
|
||||
match = true
|
||||
|
||||
// stream operator can add extra annotations so incl. current annotations in desired annotations
|
||||
for curKey, curValue := range curEventStreams.Annotations {
|
||||
if _, exists := desiredAnnotations[curKey]; !exists {
|
||||
desiredAnnotations[curKey] = curValue
|
||||
}
|
||||
}
|
||||
// add/or override annotations if cpu and memory values were changed
|
||||
for newKey, newValue := range newEventStreams.Annotations {
|
||||
desiredAnnotations[newKey] = newValue
|
||||
}
|
||||
if changed, reason := c.compareAnnotations(curEventStreams.ObjectMeta.Annotations, desiredAnnotations, nil); changed {
|
||||
match = false
|
||||
reasons = append(reasons, fmt.Sprintf("new streams annotations do not match: %s", reason))
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(curEventStreams.ObjectMeta.Labels, newEventStreams.ObjectMeta.Labels) {
|
||||
match = false
|
||||
reasons = append(reasons, "new streams labels do not match the current ones")
|
||||
}
|
||||
|
||||
if changed, reason := sameEventStreams(curEventStreams.Spec.EventStreams, newEventStreams.Spec.EventStreams); !changed {
|
||||
match = false
|
||||
reasons = append(reasons, fmt.Sprintf("new streams EventStreams array does not match : %s", reason))
|
||||
}
|
||||
|
||||
return match, strings.Join(reasons, ", ")
|
||||
}
|
||||
|
||||
func sameEventStreams(curEventStreams, newEventStreams []zalandov1.EventStream) (match bool, reason string) {
|
||||
if len(newEventStreams) != len(curEventStreams) {
|
||||
return false, "number of defined streams is different"
|
||||
}
|
||||
|
|
@ -483,3 +601,22 @@ func sameStreams(curEventStreams, newEventStreams []zalandov1.EventStream) (matc
|
|||
|
||||
return true, ""
|
||||
}
|
||||
|
||||
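// cleanupRemovedStreams deletes stream resources whose applicationId is no
// longer present in the manifest.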
func (c *Cluster) cleanupRemovedStreams(appIds []string) error {
    errors := make([]string, 0)
    for appId := range c.Streams {
        if !util.SliceContains(appIds, appId) {
            c.logger.Infof("event streams with applicationId %s do not exist in the manifest, deleting them", appId)
            err := c.deleteStream(appId)
            if err != nil {
                errors = append(errors, fmt.Sprintf("failed deleting event streams with applicationId %s: %v", appId, err))
            }
        }
    }

    if len(errors) > 0 {
        return fmt.Errorf("could not delete all removed event streams: %v", strings.Join(errors, `', '`))
    }

    return nil
}
@ -2,6 +2,7 @@ package cluster
|
|||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
"context"
|
||||
|
|
@ -18,29 +19,25 @@ import (
|
|||
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/types"
|
||||
"k8s.io/client-go/kubernetes/fake"
|
||||
)
|
||||
|
||||
func newFakeK8sStreamClient() (k8sutil.KubernetesClient, *fake.Clientset) {
|
||||
zalandoClientSet := fakezalandov1.NewSimpleClientset()
|
||||
clientSet := fake.NewSimpleClientset()
|
||||
|
||||
return k8sutil.KubernetesClient{
|
||||
FabricEventStreamsGetter: zalandoClientSet.ZalandoV1(),
|
||||
PostgresqlsGetter: zalandoClientSet.AcidV1(),
|
||||
PodsGetter: clientSet.CoreV1(),
|
||||
StatefulSetsGetter: clientSet.AppsV1(),
|
||||
}, clientSet
|
||||
}
|
||||
|
||||
var (
|
||||
clusterName string = "acid-test-cluster"
|
||||
clusterName string = "acid-stream-cluster"
|
||||
namespace string = "default"
|
||||
appId string = "test-app"
|
||||
dbName string = "foo"
|
||||
fesUser string = fmt.Sprintf("%s%s", constants.EventStreamSourceSlotPrefix, constants.UserRoleNameSuffix)
|
||||
slotName string = fmt.Sprintf("%s_%s_%s", constants.EventStreamSourceSlotPrefix, dbName, strings.Replace(appId, "-", "_", -1))
|
||||
|
||||
zalandoClientSet = fakezalandov1.NewSimpleClientset()
|
||||
|
||||
client = k8sutil.KubernetesClient{
|
||||
FabricEventStreamsGetter: zalandoClientSet.ZalandoV1(),
|
||||
PostgresqlsGetter: zalandoClientSet.AcidV1(),
|
||||
PodsGetter: clientSet.CoreV1(),
|
||||
StatefulSetsGetter: clientSet.AppsV1(),
|
||||
}
|
||||
|
||||
pg = acidv1.Postgresql{
|
||||
TypeMeta: metav1.TypeMeta{
|
||||
Kind: "Postgresql",
|
||||
|
|
@ -59,21 +56,26 @@ var (
|
|||
ApplicationId: appId,
|
||||
Database: "foo",
|
||||
Tables: map[string]acidv1.StreamTable{
|
||||
"data.bar": acidv1.StreamTable{
|
||||
"data.bar": {
|
||||
EventType: "stream-type-a",
|
||||
IdColumn: k8sutil.StringToPointer("b_id"),
|
||||
PayloadColumn: k8sutil.StringToPointer("b_payload"),
|
||||
},
|
||||
"data.foobar": acidv1.StreamTable{
|
||||
"data.foobar": {
|
||||
EventType: "stream-type-b",
|
||||
RecoveryEventType: "stream-type-b-dlq",
|
||||
},
|
||||
"data.foofoobar": {
|
||||
EventType: "stream-type-c",
|
||||
IgnoreRecovery: util.True(),
|
||||
},
|
||||
},
|
||||
EnableRecovery: util.True(),
|
||||
Filter: map[string]*string{
|
||||
"data.bar": k8sutil.StringToPointer("[?(@.source.txId > 500 && @.source.lsn > 123456)]"),
|
||||
},
|
||||
BatchSize: k8sutil.UInt32ToPointer(uint32(100)),
|
||||
CPU: k8sutil.StringToPointer("250m"),
|
||||
},
|
||||
},
|
||||
TeamID: "acid",
|
||||
|
|
@ -91,8 +93,16 @@ var (
|
|||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: fmt.Sprintf("%s-12345", clusterName),
|
||||
Namespace: namespace,
|
||||
Annotations: map[string]string{
|
||||
constants.EventStreamCpuAnnotationKey: "250m",
|
||||
},
|
||||
Labels: map[string]string{
|
||||
"application": "spilo",
|
||||
"cluster-name": clusterName,
|
||||
"team": "acid",
|
||||
},
|
||||
OwnerReferences: []metav1.OwnerReference{
|
||||
metav1.OwnerReference{
|
||||
{
|
||||
APIVersion: "apps/v1",
|
||||
Kind: "StatefulSet",
|
||||
Name: "acid-test-cluster",
|
||||
|
|
@ -103,7 +113,7 @@ var (
|
|||
Spec: zalandov1.FabricEventStreamSpec{
|
||||
ApplicationId: appId,
|
||||
EventStreams: []zalandov1.EventStream{
|
||||
zalandov1.EventStream{
|
||||
{
|
||||
EventStreamFlow: zalandov1.EventStreamFlow{
|
||||
PayloadColumn: k8sutil.StringToPointer("b_payload"),
|
||||
Type: constants.EventStreamFlowPgGenericType,
|
||||
|
|
@ -142,7 +152,7 @@ var (
|
|||
Type: constants.EventStreamSourcePGType,
|
||||
},
|
||||
},
|
||||
zalandov1.EventStream{
|
||||
{
|
||||
EventStreamFlow: zalandov1.EventStreamFlow{
|
||||
Type: constants.EventStreamFlowPgGenericType,
|
||||
},
|
||||
|
|
@ -178,24 +188,42 @@ var (
|
|||
Type: constants.EventStreamSourcePGType,
|
||||
},
|
||||
},
|
||||
{
|
||||
EventStreamFlow: zalandov1.EventStreamFlow{
|
||||
Type: constants.EventStreamFlowPgGenericType,
|
||||
},
|
||||
EventStreamRecovery: zalandov1.EventStreamRecovery{
|
||||
Type: constants.EventStreamRecoveryIgnoreType,
|
||||
},
|
||||
EventStreamSink: zalandov1.EventStreamSink{
|
||||
EventType: "stream-type-c",
|
||||
MaxBatchSize: k8sutil.UInt32ToPointer(uint32(100)),
|
||||
Type: constants.EventStreamSinkNakadiType,
|
||||
},
|
||||
EventStreamSource: zalandov1.EventStreamSource{
|
||||
Connection: zalandov1.Connection{
|
||||
DBAuth: zalandov1.DBAuth{
|
||||
Name: fmt.Sprintf("fes-user.%s.credentials.postgresql.acid.zalan.do", clusterName),
|
||||
PasswordKey: "password",
|
||||
Type: constants.EventStreamSourceAuthType,
|
||||
UserKey: "username",
|
||||
},
|
||||
Url: fmt.Sprintf("jdbc:postgresql://%s.%s/foo?user=%s&ssl=true&sslmode=require", clusterName, namespace, fesUser),
|
||||
SlotName: slotName,
|
||||
PluginType: constants.EventStreamSourcePluginType,
|
||||
},
|
||||
Schema: "data",
|
||||
EventStreamTable: zalandov1.EventStreamTable{
|
||||
Name: "foofoobar",
|
||||
},
|
||||
Type: constants.EventStreamSourcePGType,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
func TestGatherApplicationIds(t *testing.T) {
|
||||
testAppIds := []string{appId}
|
||||
appIds := gatherApplicationIds(pg.Spec.Streams)
|
||||
|
||||
if !util.IsEqualIgnoreOrder(testAppIds, appIds) {
|
||||
t.Errorf("gathered applicationIds do not match, expected %#v, got %#v", testAppIds, appIds)
|
||||
}
|
||||
}
|
||||
|
||||
func TestGenerateFabricEventStream(t *testing.T) {
|
||||
client, _ := newFakeK8sStreamClient()
|
||||
|
||||
var cluster = New(
|
||||
cluster = New(
|
||||
Config{
|
||||
OpConfig: config.Config{
|
||||
Auth: config.Auth{
|
||||
|
|
@ -213,60 +241,335 @@ func TestGenerateFabricEventStream(t *testing.T) {
|
|||
},
|
||||
},
|
||||
}, client, pg, logger, eventRecorder)
|
||||
)
|
||||
|
||||
func TestGatherApplicationIds(t *testing.T) {
|
||||
testAppIds := []string{appId}
|
||||
appIds := getDistinctApplicationIds(pg.Spec.Streams)
|
||||
|
||||
if !util.IsEqualIgnoreOrder(testAppIds, appIds) {
|
||||
t.Errorf("list of applicationIds does not match, expected %#v, got %#v", testAppIds, appIds)
|
||||
}
|
||||
}
|
||||
|
||||
func TestHasSlotsInSync(t *testing.T) {
|
||||
cluster.Name = clusterName
|
||||
cluster.Namespace = namespace
|
||||
|
||||
// create statefulset to have ownerReference for streams
|
||||
_, err := cluster.createStatefulSet()
|
||||
assert.NoError(t, err)
|
||||
appId2 := fmt.Sprintf("%s-2", appId)
|
||||
dbNotExists := "dbnotexists"
|
||||
slotNotExists := fmt.Sprintf("%s_%s_%s", constants.EventStreamSourceSlotPrefix, dbNotExists, strings.Replace(appId, "-", "_", -1))
|
||||
slotNotExistsAppId2 := fmt.Sprintf("%s_%s_%s", constants.EventStreamSourceSlotPrefix, dbNotExists, strings.Replace(appId2, "-", "_", -1))
|
||||
|
||||
tests := []struct {
|
||||
subTest string
|
||||
applicationId string
|
||||
expectedSlots map[string]map[string]zalandov1.Slot
|
||||
actualSlots map[string]map[string]string
|
||||
slotsInSync bool
|
||||
}{
|
||||
{
|
||||
subTest: fmt.Sprintf("slots in sync for applicationId %s", appId),
|
||||
applicationId: appId,
|
||||
expectedSlots: map[string]map[string]zalandov1.Slot{
|
||||
dbName: {
|
||||
slotName: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test1": {
|
||||
EventType: "stream-type-a",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
actualSlots: map[string]map[string]string{
|
||||
slotName: {
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
},
|
||||
slotsInSync: true,
|
||||
}, {
|
||||
subTest: fmt.Sprintf("slots empty for applicationId %s after create or update of publication failed", appId),
|
||||
applicationId: appId,
|
||||
expectedSlots: map[string]map[string]zalandov1.Slot{
|
||||
dbNotExists: {
|
||||
slotNotExists: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test1": {
|
||||
EventType: "stream-type-a",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
actualSlots: map[string]map[string]string{},
|
||||
slotsInSync: false,
|
||||
}, {
|
||||
subTest: fmt.Sprintf("slot with empty definition for applicationId %s after publication git deleted", appId),
|
||||
applicationId: appId,
|
||||
expectedSlots: map[string]map[string]zalandov1.Slot{
|
||||
dbNotExists: {
|
||||
slotNotExists: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test1": {
|
||||
EventType: "stream-type-a",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
actualSlots: map[string]map[string]string{
|
||||
slotName: nil,
|
||||
},
|
||||
slotsInSync: false,
|
||||
}, {
|
||||
subTest: fmt.Sprintf("one slot not in sync for applicationId %s because database does not exist", appId),
|
||||
applicationId: appId,
|
||||
expectedSlots: map[string]map[string]zalandov1.Slot{
|
||||
dbName: {
|
||||
slotName: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test1": {
|
||||
EventType: "stream-type-a",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
dbNotExists: {
|
||||
slotNotExists: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": "dbnotexists",
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test2": {
|
||||
EventType: "stream-type-b",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
actualSlots: map[string]map[string]string{
|
||||
slotName: {
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
},
|
||||
slotsInSync: false,
|
||||
}, {
|
||||
subTest: fmt.Sprintf("slots in sync for applicationId %s, but not for %s - checking %s should return true", appId, appId2, appId),
|
||||
applicationId: appId,
|
||||
expectedSlots: map[string]map[string]zalandov1.Slot{
|
||||
dbName: {
|
||||
slotName: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test1": {
|
||||
EventType: "stream-type-a",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
dbNotExists: {
|
||||
slotNotExistsAppId2: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": "dbnotexists",
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test2": {
|
||||
EventType: "stream-type-b",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
actualSlots: map[string]map[string]string{
|
||||
slotName: {
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
},
|
||||
slotsInSync: true,
|
||||
}, {
|
||||
subTest: fmt.Sprintf("slots in sync for applicationId %s, but not for %s - checking %s should return false", appId, appId2, appId2),
|
||||
applicationId: appId2,
|
||||
expectedSlots: map[string]map[string]zalandov1.Slot{
|
||||
dbName: {
|
||||
slotName: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test1": {
|
||||
EventType: "stream-type-a",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
dbNotExists: {
|
||||
slotNotExistsAppId2: zalandov1.Slot{
|
||||
Slot: map[string]string{
|
||||
"databases": "dbnotexists",
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
Publication: map[string]acidv1.StreamTable{
|
||||
"test2": {
|
||||
EventType: "stream-type-b",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
actualSlots: map[string]map[string]string{
|
||||
slotName: {
|
||||
"databases": dbName,
|
||||
"plugin": constants.EventStreamSourcePluginType,
|
||||
"type": "logical",
|
||||
},
|
||||
},
|
||||
slotsInSync: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
result := hasSlotsInSync(tt.applicationId, tt.expectedSlots, tt.actualSlots)
|
||||
if result != tt.slotsInSync {
|
||||
t.Errorf("%s: unexpected result for slot test of applicationId: %v, expected slots %#v, actual slots %#v", tt.subTest, tt.applicationId, tt.expectedSlots, tt.actualSlots)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestGenerateFabricEventStream(t *testing.T) {
|
||||
cluster.Name = clusterName
|
||||
cluster.Namespace = namespace
|
||||
|
||||
// create the streams
|
||||
err = cluster.createOrUpdateStreams()
|
||||
err := cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// compare generated stream with expected stream
|
||||
result := cluster.generateFabricEventStream(appId)
|
||||
if match, _ := sameStreams(result.Spec.EventStreams, fes.Spec.EventStreams); !match {
|
||||
if match, _ := cluster.compareStreams(result, fes); !match {
|
||||
t.Errorf("malformed FabricEventStream, expected %#v, got %#v", fes, result)
|
||||
}
|
||||
|
||||
listOptions := metav1.ListOptions{
|
||||
LabelSelector: cluster.labelsSet(true).String(),
|
||||
LabelSelector: cluster.labelsSet(false).String(),
|
||||
}
|
||||
streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// check if there is only one stream
|
||||
if len(streams.Items) > 1 {
|
||||
t.Errorf("too many stream CRDs found: got %d, but expected only one", len(streams.Items))
|
||||
}
|
||||
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only one", len(streams.Items))
|
||||
|
||||
// compare stream returned from API with expected stream
|
||||
if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, fes.Spec.EventStreams); !match {
|
||||
if match, _ := cluster.compareStreams(&streams.Items[0], fes); !match {
|
||||
t.Errorf("malformed FabricEventStream returned from API, expected %#v, got %#v", fes, streams.Items[0])
|
||||
}
|
||||
|
||||
// sync streams once again
|
||||
err = cluster.createOrUpdateStreams()
|
||||
err = cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// check if there is still only one stream
|
||||
if len(streams.Items) > 1 {
|
||||
t.Errorf("too many stream CRDs found after sync: got %d, but expected only one", len(streams.Items))
|
||||
}
|
||||
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only one", len(streams.Items))
|
||||
|
||||
// compare stream returned from API with generated stream
|
||||
if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, result.Spec.EventStreams); !match {
|
||||
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
|
||||
t.Errorf("returned FabricEventStream differs from generated one, expected %#v, got %#v", result, streams.Items[0])
|
||||
}
|
||||
}
|
||||
|
||||
func newFabricEventStream(streams []zalandov1.EventStream, annotations map[string]string) *zalandov1.FabricEventStream {
|
||||
return &zalandov1.FabricEventStream{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: fmt.Sprintf("%s-12345", clusterName),
|
||||
Annotations: annotations,
|
||||
},
|
||||
Spec: zalandov1.FabricEventStreamSpec{
|
||||
ApplicationId: appId,
|
||||
EventStreams: streams,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func TestSyncStreams(t *testing.T) {
|
||||
newClusterName := fmt.Sprintf("%s-2", pg.Name)
|
||||
pg.Name = newClusterName
|
||||
var cluster = New(
|
||||
Config{
|
||||
OpConfig: config.Config{
|
||||
PodManagementPolicy: "ordered_ready",
|
||||
Resources: config.Resources{
|
||||
ClusterLabels: map[string]string{"application": "spilo"},
|
||||
ClusterNameLabel: "cluster-name",
|
||||
DefaultCPURequest: "300m",
|
||||
DefaultCPULimit: "300m",
|
||||
DefaultMemoryRequest: "300Mi",
|
||||
DefaultMemoryLimit: "300Mi",
|
||||
PodRoleLabel: "spilo-role",
|
||||
},
|
||||
},
|
||||
}, client, pg, logger, eventRecorder)
|
||||
|
||||
_, err := cluster.KubeClient.Postgresqls(namespace).Create(
|
||||
context.TODO(), &pg, metav1.CreateOptions{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
// create the stream
|
||||
err = cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// sync the stream again
|
||||
err = cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// check that only one stream remains after sync
|
||||
listOptions := metav1.ListOptions{
|
||||
LabelSelector: cluster.labelsSet(false).String(),
|
||||
}
|
||||
streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only 1", len(streams.Items))
|
||||
}
|
||||
|
||||
func TestSameStreams(t *testing.T) {
|
||||
testName := "TestSameStreams"
|
||||
annotationsA := map[string]string{constants.EventStreamMemoryAnnotationKey: "500Mi"}
|
||||
annotationsB := map[string]string{constants.EventStreamMemoryAnnotationKey: "1Gi"}
|
||||
|
||||
stream1 := zalandov1.EventStream{
|
||||
EventStreamFlow: zalandov1.EventStreamFlow{},
|
||||
|
|
@ -311,67 +614,179 @@ func TestSameStreams(t *testing.T) {
|
|||
|
||||
tests := []struct {
|
||||
subTest string
|
||||
streamsA []zalandov1.EventStream
|
||||
streamsB []zalandov1.EventStream
|
||||
streamsA *zalandov1.FabricEventStream
|
||||
streamsB *zalandov1.FabricEventStream
|
||||
match bool
|
||||
reason string
|
||||
}{
|
||||
{
|
||||
subTest: "identical streams",
|
||||
streamsA: []zalandov1.EventStream{stream1, stream2},
|
||||
streamsB: []zalandov1.EventStream{stream1, stream2},
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, annotationsA),
|
||||
streamsB: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, annotationsA),
|
||||
match: true,
|
||||
reason: "",
|
||||
},
|
||||
{
|
||||
subTest: "same streams different order",
|
||||
streamsA: []zalandov1.EventStream{stream1, stream2},
|
||||
streamsB: []zalandov1.EventStream{stream2, stream1},
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
|
||||
streamsB: newFabricEventStream([]zalandov1.EventStream{stream2, stream1}, nil),
|
||||
match: true,
|
||||
reason: "",
|
||||
},
|
||||
{
|
||||
subTest: "same streams different order",
|
||||
streamsA: []zalandov1.EventStream{stream1},
|
||||
streamsB: []zalandov1.EventStream{stream1, stream2},
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream1}, nil),
|
||||
streamsB: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
|
||||
match: false,
|
||||
reason: "number of defined streams is different",
|
||||
reason: "new streams EventStreams array does not match : number of defined streams is different",
|
||||
},
|
||||
{
|
||||
subTest: "different number of streams",
|
||||
streamsA: []zalandov1.EventStream{stream1},
|
||||
streamsB: []zalandov1.EventStream{stream1, stream2},
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream1}, nil),
|
||||
streamsB: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
|
||||
match: false,
|
||||
reason: "number of defined streams is different",
|
||||
reason: "new streams EventStreams array does not match : number of defined streams is different",
|
||||
},
|
||||
{
|
||||
subTest: "event stream specs differ",
|
||||
streamsA: []zalandov1.EventStream{stream1, stream2},
|
||||
streamsB: fes.Spec.EventStreams,
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
|
||||
streamsB: fes,
|
||||
match: false,
|
||||
reason: "number of defined streams is different",
|
||||
reason: "new streams annotations do not match: Added \"fes.zalando.org/FES_CPU\" with value \"250m\"., new streams labels do not match the current ones, new streams EventStreams array does not match : number of defined streams is different",
|
||||
},
|
||||
{
|
||||
subTest: "event stream recovery specs differ",
|
||||
streamsA: []zalandov1.EventStream{stream2},
|
||||
streamsB: []zalandov1.EventStream{stream3},
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream2}, nil),
|
||||
streamsB: newFabricEventStream([]zalandov1.EventStream{stream3}, nil),
|
||||
match: false,
|
||||
reason: "event stream specs differ",
|
||||
reason: "new streams EventStreams array does not match : event stream specs differ",
|
||||
},
|
||||
{
|
||||
subTest: "event stream with new annotations",
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream2}, nil),
|
||||
streamsB: newFabricEventStream([]zalandov1.EventStream{stream2}, annotationsA),
|
||||
match: false,
|
||||
reason: "new streams annotations do not match: Added \"fes.zalando.org/FES_MEMORY\" with value \"500Mi\".",
|
||||
},
|
||||
{
|
||||
subTest: "event stream annotations differ",
|
||||
streamsA: newFabricEventStream([]zalandov1.EventStream{stream3}, annotationsA),
|
||||
streamsB: newFabricEventStream([]zalandov1.EventStream{stream3}, annotationsB),
|
||||
match: false,
|
||||
reason: "new streams annotations do not match: \"fes.zalando.org/FES_MEMORY\" changed from \"500Mi\" to \"1Gi\".",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
streamsMatch, matchReason := sameStreams(tt.streamsA, tt.streamsB)
|
||||
if streamsMatch != tt.match {
|
||||
t.Errorf("%s %s: unexpected match result when comparing streams: got %s, epxected %s",
|
||||
streamsMatch, matchReason := cluster.compareStreams(tt.streamsA, tt.streamsB)
|
||||
if streamsMatch != tt.match || matchReason != tt.reason {
|
||||
t.Errorf("%s %s: unexpected match result when comparing streams: got %s, expected %s",
|
||||
testName, tt.subTest, matchReason, tt.reason)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateFabricEventStream(t *testing.T) {
|
||||
client, _ := newFakeK8sStreamClient()
|
||||
func TestUpdateStreams(t *testing.T) {
|
||||
pg.Name = fmt.Sprintf("%s-3", pg.Name)
|
||||
var cluster = New(
|
||||
Config{
|
||||
OpConfig: config.Config{
|
||||
PodManagementPolicy: "ordered_ready",
|
||||
Resources: config.Resources{
|
||||
ClusterLabels: map[string]string{"application": "spilo"},
|
||||
ClusterNameLabel: "cluster-name",
|
||||
DefaultCPURequest: "300m",
|
||||
DefaultCPULimit: "300m",
|
||||
DefaultMemoryRequest: "300Mi",
|
||||
DefaultMemoryLimit: "300Mi",
|
||||
EnableOwnerReferences: util.True(),
|
||||
PodRoleLabel: "spilo-role",
|
||||
},
|
||||
},
|
||||
}, client, pg, logger, eventRecorder)
|
||||
|
||||
_, err := cluster.KubeClient.Postgresqls(namespace).Create(
|
||||
context.TODO(), &pg, metav1.CreateOptions{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
// create stream with different owner reference
|
||||
fes.ObjectMeta.Name = fmt.Sprintf("%s-12345", pg.Name)
|
||||
fes.ObjectMeta.Labels["cluster-name"] = pg.Name
|
||||
createdStream, err := cluster.KubeClient.FabricEventStreams(namespace).Create(
|
||||
context.TODO(), fes, metav1.CreateOptions{})
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, createdStream.Spec.ApplicationId, appId)
|
||||
|
||||
// sync the stream which should update the owner reference
|
||||
err = cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// check that only one stream exists after sync
|
||||
listOptions := metav1.ListOptions{
|
||||
LabelSelector: cluster.labelsSet(true).String(),
|
||||
}
|
||||
streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only 1", len(streams.Items))
|
||||
|
||||
// compare owner references
|
||||
if !reflect.DeepEqual(streams.Items[0].OwnerReferences, cluster.ownerReferences()) {
|
||||
t.Errorf("unexpected owner references, expected %#v, got %#v", cluster.ownerReferences(), streams.Items[0].OwnerReferences)
|
||||
}
|
||||
|
||||
// change specs of streams and patch CRD
|
||||
for i, stream := range pg.Spec.Streams {
|
||||
if stream.ApplicationId == appId {
|
||||
streamTable := stream.Tables["data.bar"]
|
||||
streamTable.EventType = "stream-type-c"
|
||||
stream.Tables["data.bar"] = streamTable
|
||||
stream.BatchSize = k8sutil.UInt32ToPointer(uint32(250))
|
||||
pg.Spec.Streams[i] = stream
|
||||
}
|
||||
}
|
||||
|
||||
// compare stream returned from API with expected stream
|
||||
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
|
||||
result := cluster.generateFabricEventStream(appId)
|
||||
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
|
||||
t.Errorf("Malformed FabricEventStream after updating manifest, expected %#v, got %#v", streams.Items[0], result)
|
||||
}
|
||||
|
||||
// disable recovery
|
||||
for idx, stream := range pg.Spec.Streams {
|
||||
if stream.ApplicationId == appId {
|
||||
stream.EnableRecovery = util.False()
|
||||
pg.Spec.Streams[idx] = stream
|
||||
}
|
||||
}
|
||||
|
||||
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
|
||||
result = cluster.generateFabricEventStream(appId)
|
||||
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
|
||||
t.Errorf("Malformed FabricEventStream after disabling event recovery, expected %#v, got %#v", streams.Items[0], result)
|
||||
}
|
||||
}
|
||||
|
||||
func patchPostgresqlStreams(t *testing.T, cluster *Cluster, pgSpec *acidv1.PostgresSpec, listOptions metav1.ListOptions) (streams *zalandov1.FabricEventStreamList) {
|
||||
patchData, err := specPatch(pgSpec)
|
||||
assert.NoError(t, err)
|
||||
|
||||
pgPatched, err := cluster.KubeClient.Postgresqls(namespace).Patch(
|
||||
context.TODO(), cluster.Name, types.MergePatchType, patchData, metav1.PatchOptions{}, "spec")
|
||||
assert.NoError(t, err)
|
||||
|
||||
cluster.Postgresql.Spec = pgPatched.Spec
|
||||
err = cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
|
||||
return streams
|
||||
}
|
||||
|
||||
func TestDeleteStreams(t *testing.T) {
|
||||
pg.Name = fmt.Sprintf("%s-4", pg.Name)
|
||||
var cluster = New(
|
||||
Config{
|
||||
OpConfig: config.Config{
|
||||
|
|
@ -392,12 +807,8 @@ func TestUpdateFabricEventStream(t *testing.T) {
|
|||
context.TODO(), &pg, metav1.CreateOptions{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
// create statefulset to have ownerReference for streams
|
||||
_, err = cluster.createStatefulSet()
|
||||
assert.NoError(t, err)
|
||||
|
||||
// now create the stream
|
||||
err = cluster.createOrUpdateStreams()
|
||||
// create the stream
|
||||
err = cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// change specs of streams and patch CRD
|
||||
|
|
@ -411,65 +822,70 @@ func TestUpdateFabricEventStream(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
patchData, err := specPatch(pg.Spec)
|
||||
assert.NoError(t, err)
|
||||
|
||||
pgPatched, err := cluster.KubeClient.Postgresqls(namespace).Patch(
|
||||
context.TODO(), cluster.Name, types.MergePatchType, patchData, metav1.PatchOptions{}, "spec")
|
||||
assert.NoError(t, err)
|
||||
|
||||
cluster.Postgresql.Spec = pgPatched.Spec
|
||||
err = cluster.createOrUpdateStreams()
|
||||
assert.NoError(t, err)
|
||||
|
||||
// compare stream returned from API with expected stream
|
||||
listOptions := metav1.ListOptions{
|
||||
LabelSelector: cluster.labelsSet(true).String(),
|
||||
LabelSelector: cluster.labelsSet(false).String(),
|
||||
}
|
||||
streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
|
||||
streams := patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
|
||||
result := cluster.generateFabricEventStream(appId)
|
||||
if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, result.Spec.EventStreams); !match {
|
||||
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
|
||||
t.Errorf("Malformed FabricEventStream after updating manifest, expected %#v, got %#v", streams.Items[0], result)
|
||||
}
|
||||
|
||||
// change teamId and check that stream is updated
|
||||
pg.Spec.TeamID = "new-team"
|
||||
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
|
||||
result = cluster.generateFabricEventStream(appId)
|
||||
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
|
||||
t.Errorf("Malformed FabricEventStream after updating teamId, expected %#v, got %#v", streams.Items[0].ObjectMeta.Labels, result.ObjectMeta.Labels)
|
||||
}
|
||||
|
||||
// disable recovery
|
||||
for _, stream := range pg.Spec.Streams {
|
||||
for idx, stream := range pg.Spec.Streams {
|
||||
if stream.ApplicationId == appId {
|
||||
stream.EnableRecovery = util.False()
|
||||
pg.Spec.Streams[idx] = stream
|
||||
}
|
||||
}
|
||||
patchData, err = specPatch(pg.Spec)
|
||||
assert.NoError(t, err)
|
||||
|
||||
pgPatched, err = cluster.KubeClient.Postgresqls(namespace).Patch(
|
||||
context.TODO(), cluster.Name, types.MergePatchType, patchData, metav1.PatchOptions{}, "spec")
|
||||
assert.NoError(t, err)
|
||||
|
||||
cluster.Postgresql.Spec = pgPatched.Spec
|
||||
err = cluster.createOrUpdateStreams()
|
||||
assert.NoError(t, err)
|
||||
|
||||
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
|
||||
result = cluster.generateFabricEventStream(appId)
|
||||
if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, result.Spec.EventStreams); !match {
|
||||
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
|
||||
t.Errorf("Malformed FabricEventStream after disabling event recovery, expected %#v, got %#v", streams.Items[0], result)
|
||||
}
|
||||
|
||||
mockClient := k8sutil.NewMockKubernetesClient()
|
||||
cluster.KubeClient.CustomResourceDefinitionsGetter = mockClient.CustomResourceDefinitionsGetter
|
||||
|
||||
// remove streams from manifest
|
||||
pgPatched.Spec.Streams = nil
|
||||
pg.Spec.Streams = nil
|
||||
pgUpdated, err := cluster.KubeClient.Postgresqls(namespace).Update(
|
||||
context.TODO(), pgPatched, metav1.UpdateOptions{})
|
||||
context.TODO(), &pg, metav1.UpdateOptions{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
cluster.Postgresql.Spec = pgUpdated.Spec
|
||||
cluster.createOrUpdateStreams()
|
||||
appIds := getDistinctApplicationIds(pgUpdated.Spec.Streams)
|
||||
cluster.cleanupRemovedStreams(appIds)
|
||||
|
||||
streamList, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
if len(streamList.Items) > 0 || err != nil {
|
||||
t.Errorf("stream resource has not been removed or unexpected error %v", err)
|
||||
}
|
||||
// check that streams have been deleted
|
||||
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
assert.Equalf(t, 0, len(streams.Items), "unexpected number of streams found: got %d, but expected none", len(streams.Items))
|
||||
|
||||
// create stream to test deleteStreams code
|
||||
fes.ObjectMeta.Name = fmt.Sprintf("%s-12345", pg.Name)
|
||||
fes.ObjectMeta.Labels["cluster-name"] = pg.Name
|
||||
_, err = cluster.KubeClient.FabricEventStreams(namespace).Create(
|
||||
context.TODO(), fes, metav1.CreateOptions{})
|
||||
assert.NoError(t, err)
|
||||
|
||||
// sync it once to cluster struct
|
||||
err = cluster.syncStream(appId)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// we need a mock client because deleteStreams checks for CRD existence
|
||||
mockClient := k8sutil.NewMockKubernetesClient()
|
||||
cluster.KubeClient.CustomResourceDefinitionsGetter = mockClient.CustomResourceDefinitionsGetter
|
||||
cluster.deleteStreams()
|
||||
|
||||
// check that streams have been deleted
|
||||
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
|
||||
assert.NoError(t, err)
|
||||
assert.Equalf(t, 0, len(streams.Items), "unexpected number of streams found: got %d, but expected none", len(streams.Items))
|
||||
}
@ -15,6 +15,7 @@ import (
|
|||
"github.com/zalando/postgres-operator/pkg/util"
|
||||
"github.com/zalando/postgres-operator/pkg/util/constants"
|
||||
"github.com/zalando/postgres-operator/pkg/util/k8sutil"
|
||||
"golang.org/x/exp/maps"
|
||||
"golang.org/x/exp/slices"
|
||||
batchv1 "k8s.io/api/batch/v1"
|
||||
v1 "k8s.io/api/core/v1"
|
||||
|
|
@ -95,6 +96,10 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
|
|||
return err
|
||||
}
|
||||
|
||||
if err = c.syncPatroniResources(); err != nil {
|
||||
c.logger.Errorf("could not sync Patroni resources: %v", err)
|
||||
}
|
||||
|
||||
// sync volume may already transition volumes to gp3, if iops/throughput or type is specified
|
||||
if err = c.syncVolumes(); err != nil {
|
||||
return err
|
||||
|
|
@ -107,6 +112,11 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
|
|||
}
|
||||
}
|
||||
|
||||
if !isInMaintenanceWindow(newSpec.Spec.MaintenanceWindows) {
|
||||
// do not apply any major version related changes yet
|
||||
newSpec.Spec.PostgresqlParam.PgVersion = oldSpec.Spec.PostgresqlParam.PgVersion
|
||||
}
|
||||
|
||||
if err = c.syncStatefulSet(); err != nil {
|
||||
if !k8sutil.ResourceAlreadyExists(err) {
|
||||
err = fmt.Errorf("could not sync statefulsets: %v", err)
|
||||
|
|
@ -122,8 +132,8 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
|
|||
}
|
||||
|
||||
c.logger.Debug("syncing pod disruption budgets")
|
||||
if err = c.syncPodDisruptionBudget(false); err != nil {
|
||||
err = fmt.Errorf("could not sync pod disruption budget: %v", err)
|
||||
if err = c.syncPodDisruptionBudgets(false); err != nil {
|
||||
err = fmt.Errorf("could not sync pod disruption budgets: %v", err)
|
||||
return err
|
||||
}
|
||||
|
||||
|
|
@ -158,7 +168,10 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
|
|||
return fmt.Errorf("could not sync connection pooler: %v", err)
|
||||
}
|
||||
|
||||
if len(c.Spec.Streams) > 0 {
|
||||
// sync if manifest stream count is different from stream CR count
|
||||
// it can be that they are always different due to grouping of manifest streams
|
||||
// but we would catch missed removals on update
|
||||
if len(c.Spec.Streams) != len(c.Streams) {
|
||||
c.logger.Debug("syncing streams")
|
||||
if err = c.syncStreams(); err != nil {
|
||||
err = fmt.Errorf("could not sync streams: %v", err)
|
||||
|
|
@ -188,6 +201,167 @@ func (c *Cluster) syncFinalizer() error {
|
|||
return nil
|
||||
}
|
||||
|
||||
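// syncPatroniResources aligns owner references and annotations of the
// Patroni-managed service, config maps and endpoints with the desired state.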
func (c *Cluster) syncPatroniResources() error {
    errors := make([]string, 0)

    if err := c.syncPatroniService(); err != nil {
        errors = append(errors, fmt.Sprintf("could not sync %s service: %v", Patroni, err))
    }

    for _, suffix := range patroniObjectSuffixes {
        if c.patroniKubernetesUseConfigMaps() {
            if err := c.syncPatroniConfigMap(suffix); err != nil {
                errors = append(errors, fmt.Sprintf("could not sync %s Patroni config map: %v", suffix, err))
            }
        } else {
            if err := c.syncPatroniEndpoint(suffix); err != nil {
                errors = append(errors, fmt.Sprintf("could not sync %s Patroni endpoint: %v", suffix, err))
            }
        }
    }

    if len(errors) > 0 {
        return fmt.Errorf("%v", strings.Join(errors, `', '`))
    }

    return nil
}
|
||||
|
||||
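// syncPatroniConfigMap updates owner references and annotations of a single
// Patroni config map; if the config map does not exist yet, Patroni will create it.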
func (c *Cluster) syncPatroniConfigMap(suffix string) error {
|
||||
var (
|
||||
cm *v1.ConfigMap
|
||||
err error
|
||||
)
|
||||
configMapName := fmt.Sprintf("%s-%s", c.Name, suffix)
|
||||
c.logger.Debugf("syncing %s config map", configMapName)
|
||||
c.setProcessName("syncing %s config map", configMapName)
|
||||
|
||||
if cm, err = c.KubeClient.ConfigMaps(c.Namespace).Get(context.TODO(), configMapName, metav1.GetOptions{}); err == nil {
|
||||
c.PatroniConfigMaps[suffix] = cm
|
||||
desiredOwnerRefs := c.ownerReferences()
|
||||
if !reflect.DeepEqual(cm.ObjectMeta.OwnerReferences, desiredOwnerRefs) {
|
||||
c.logger.Infof("new %s config map's owner references do not match the current ones", configMapName)
|
||||
cm.ObjectMeta.OwnerReferences = desiredOwnerRefs
|
||||
c.setProcessName("updating %s config map", configMapName)
|
||||
cm, err = c.KubeClient.ConfigMaps(c.Namespace).Update(context.TODO(), cm, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not update %s config map: %v", configMapName, err)
|
||||
}
|
||||
c.PatroniConfigMaps[suffix] = cm
|
||||
}
|
||||
annotations := make(map[string]string)
|
||||
maps.Copy(annotations, cm.Annotations)
|
||||
// Patroni can add extra annotations so incl. current annotations in desired annotations
|
||||
desiredAnnotations := c.annotationsSet(cm.Annotations)
|
||||
if changed, _ := c.compareAnnotations(annotations, desiredAnnotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(desiredAnnotations)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for %s config map: %v", configMapName, err)
|
||||
}
|
||||
cm, err = c.KubeClient.ConfigMaps(c.Namespace).Patch(context.TODO(), configMapName, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not patch annotations of %s config map: %v", configMapName, err)
|
||||
}
|
||||
c.PatroniConfigMaps[suffix] = cm
|
||||
}
|
||||
} else if !k8sutil.ResourceNotFound(err) {
|
||||
// if config map does not exist yet, Patroni should create it
|
||||
return fmt.Errorf("could not get %s config map: %v", configMapName, err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) syncPatroniEndpoint(suffix string) error {
|
||||
var (
|
||||
ep *v1.Endpoints
|
||||
err error
|
||||
)
|
||||
endpointName := fmt.Sprintf("%s-%s", c.Name, suffix)
|
||||
c.logger.Debugf("syncing %s endpoint", endpointName)
|
||||
c.setProcessName("syncing %s endpoint", endpointName)
|
||||
|
||||
if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), endpointName, metav1.GetOptions{}); err == nil {
|
||||
c.PatroniEndpoints[suffix] = ep
|
||||
desiredOwnerRefs := c.ownerReferences()
|
||||
if !reflect.DeepEqual(ep.ObjectMeta.OwnerReferences, desiredOwnerRefs) {
|
||||
c.logger.Infof("new %s endpoints's owner references do not match the current ones", endpointName)
|
||||
ep.ObjectMeta.OwnerReferences = desiredOwnerRefs
|
||||
c.setProcessName("updating %s endpoint", endpointName)
|
||||
ep, err = c.KubeClient.Endpoints(c.Namespace).Update(context.TODO(), ep, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not update %s endpoint: %v", endpointName, err)
|
||||
}
|
||||
c.PatroniEndpoints[suffix] = ep
|
||||
}
|
||||
annotations := make(map[string]string)
|
||||
maps.Copy(annotations, ep.Annotations)
|
||||
// Patroni can add extra annotations so incl. current annotations in desired annotations
|
||||
desiredAnnotations := c.annotationsSet(ep.Annotations)
|
||||
if changed, _ := c.compareAnnotations(annotations, desiredAnnotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(desiredAnnotations)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for %s endpoint: %v", endpointName, err)
|
||||
}
|
||||
ep, err = c.KubeClient.Endpoints(c.Namespace).Patch(context.TODO(), endpointName, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not patch annotations of %s endpoint: %v", endpointName, err)
|
||||
}
|
||||
c.PatroniEndpoints[suffix] = ep
|
||||
}
|
||||
} else if !k8sutil.ResourceNotFound(err) {
|
||||
// if endpoint does not exist yet, Patroni should create it
|
||||
return fmt.Errorf("could not get %s endpoint: %v", endpointName, err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) syncPatroniService() error {
|
||||
var (
|
||||
svc *v1.Service
|
||||
err error
|
||||
)
|
||||
serviceName := fmt.Sprintf("%s-%s", c.Name, Patroni)
|
||||
c.logger.Debugf("syncing %s service", serviceName)
|
||||
c.setProcessName("syncing %s service", serviceName)
|
||||
|
||||
if svc, err = c.KubeClient.Services(c.Namespace).Get(context.TODO(), serviceName, metav1.GetOptions{}); err == nil {
|
||||
c.Services[Patroni] = svc
|
||||
desiredOwnerRefs := c.ownerReferences()
|
||||
if !reflect.DeepEqual(svc.ObjectMeta.OwnerReferences, desiredOwnerRefs) {
|
||||
c.logger.Infof("new %s service's owner references do not match the current ones", serviceName)
|
||||
svc.ObjectMeta.OwnerReferences = desiredOwnerRefs
|
||||
c.setProcessName("updating %v service", serviceName)
|
||||
svc, err = c.KubeClient.Services(c.Namespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not update %s service: %v", serviceName, err)
|
||||
}
|
||||
c.Services[Patroni] = svc
|
||||
}
|
||||
annotations := make(map[string]string)
|
||||
maps.Copy(annotations, svc.Annotations)
|
||||
// Patroni can add extra annotations so incl. current annotations in desired annotations
|
||||
desiredAnnotations := c.annotationsSet(svc.Annotations)
|
||||
if changed, _ := c.compareAnnotations(annotations, desiredAnnotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(desiredAnnotations)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for %s service: %v", serviceName, err)
|
||||
}
|
||||
svc, err = c.KubeClient.Services(c.Namespace).Patch(context.TODO(), serviceName, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not patch annotations of %s service: %v", serviceName, err)
|
||||
}
|
||||
c.Services[Patroni] = svc
|
||||
}
|
||||
} else if !k8sutil.ResourceNotFound(err) {
|
||||
// if config service does not exist yet, Patroni should create it
|
||||
return fmt.Errorf("could not get %s service: %v", serviceName, err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) syncServices() error {
|
||||
for _, role := range []PostgresRole{Master, Replica} {
|
||||
c.logger.Debugf("syncing %s service", role)
|
||||
|
|
@ -220,14 +394,12 @@ func (c *Cluster) syncService(role PostgresRole) error {
|
|||
return fmt.Errorf("could not update %s service to match desired state: %v", role, err)
|
||||
}
|
||||
c.Services[role] = updatedSvc
|
||||
c.logger.Infof("%s service %q is in the desired state now", role, util.NameFromMeta(desiredSvc.ObjectMeta))
|
||||
return nil
|
||||
}
|
||||
if !k8sutil.ResourceNotFound(err) {
|
||||
return fmt.Errorf("could not get %s service: %v", role, err)
|
||||
}
|
||||
// no existing service, create new one
|
||||
c.Services[role] = nil
|
||||
c.logger.Infof("could not find the cluster's %s service", role)
|
||||
|
||||
if svc, err = c.createService(role); err == nil {
|
||||
|
|
@ -252,16 +424,26 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
|
|||
)
|
||||
c.setProcessName("syncing %s endpoint", role)
|
||||
|
||||
if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.endpointName(role), metav1.GetOptions{}); err == nil {
|
||||
if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.serviceName(role), metav1.GetOptions{}); err == nil {
|
||||
desiredEp := c.generateEndpoint(role, ep.Subsets)
|
||||
if changed, _ := c.compareAnnotations(ep.Annotations, desiredEp.Annotations); changed {
|
||||
patchData, err := metaAnnotationsPatch(desiredEp.Annotations)
|
||||
// if owner references differ we update which would also change annotations
|
||||
if !reflect.DeepEqual(ep.ObjectMeta.OwnerReferences, desiredEp.ObjectMeta.OwnerReferences) {
|
||||
c.logger.Infof("new %s endpoints's owner references do not match the current ones", role)
|
||||
c.setProcessName("updating %v endpoint", role)
|
||||
ep, err = c.KubeClient.Endpoints(c.Namespace).Update(context.TODO(), desiredEp, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for %s endpoint: %v", role, err)
|
||||
return fmt.Errorf("could not update %s endpoint: %v", role, err)
|
||||
}
|
||||
ep, err = c.KubeClient.Endpoints(c.Namespace).Patch(context.TODO(), c.endpointName(role), types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not patch annotations of %s endpoint: %v", role, err)
|
||||
} else {
|
||||
if changed, _ := c.compareAnnotations(ep.Annotations, desiredEp.Annotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(desiredEp.Annotations)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for %s endpoint: %v", role, err)
|
||||
}
|
||||
ep, err = c.KubeClient.Endpoints(c.Namespace).Patch(context.TODO(), c.serviceName(role), types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not patch annotations of %s endpoint: %v", role, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
c.Endpoints[role] = ep
|
||||
|
|
@ -271,7 +453,6 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
|
|||
return fmt.Errorf("could not get %s endpoint: %v", role, err)
|
||||
}
|
||||
// no existing endpoint, create new one
|
||||
c.Endpoints[role] = nil
|
||||
c.logger.Infof("could not find the cluster's %s endpoint", role)
|
||||
|
||||
if ep, err = c.createEndpoint(role); err == nil {
|
||||
|
|
@ -281,7 +462,7 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
|
|||
return fmt.Errorf("could not create missing %s endpoint: %v", role, err)
|
||||
}
|
||||
c.logger.Infof("%s endpoint %q already exists", role, util.NameFromMeta(ep.ObjectMeta))
|
||||
if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.endpointName(role), metav1.GetOptions{}); err != nil {
|
||||
if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.serviceName(role), metav1.GetOptions{}); err != nil {
|
||||
return fmt.Errorf("could not fetch existing %s endpoint: %v", role, err)
|
||||
}
|
||||
}
|
||||
|
|
@@ -289,22 +470,22 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
	return nil
}

func (c *Cluster) syncPodDisruptionBudget(isUpdate bool) error {
func (c *Cluster) syncPrimaryPodDisruptionBudget(isUpdate bool) error {
	var (
		pdb *policyv1.PodDisruptionBudget
		err error
	)
	if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.podDisruptionBudgetName(), metav1.GetOptions{}); err == nil {
		c.PodDisruptionBudget = pdb
		newPDB := c.generatePodDisruptionBudget()
	if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.PrimaryPodDisruptionBudgetName(), metav1.GetOptions{}); err == nil {
		c.PrimaryPodDisruptionBudget = pdb
		newPDB := c.generatePrimaryPodDisruptionBudget()
		match, reason := c.comparePodDisruptionBudget(pdb, newPDB)
		if !match {
			c.logPDBChanges(pdb, newPDB, isUpdate, reason)
			if err = c.updatePodDisruptionBudget(newPDB); err != nil {
			if err = c.updatePrimaryPodDisruptionBudget(newPDB); err != nil {
				return err
			}
		} else {
			c.PodDisruptionBudget = pdb
			c.PrimaryPodDisruptionBudget = pdb
		}
		return nil
@ -313,22 +494,74 @@ func (c *Cluster) syncPodDisruptionBudget(isUpdate bool) error {
|
|||
return fmt.Errorf("could not get pod disruption budget: %v", err)
|
||||
}
|
||||
// no existing pod disruption budget, create new one
|
||||
c.PodDisruptionBudget = nil
|
||||
c.logger.Infof("could not find the cluster's pod disruption budget")
|
||||
c.logger.Infof("could not find the primary pod disruption budget")
|
||||
|
||||
if pdb, err = c.createPodDisruptionBudget(); err != nil {
|
||||
if err = c.createPrimaryPodDisruptionBudget(); err != nil {
|
||||
if !k8sutil.ResourceAlreadyExists(err) {
|
||||
return fmt.Errorf("could not create pod disruption budget: %v", err)
|
||||
return fmt.Errorf("could not create primary pod disruption budget: %v", err)
|
||||
}
|
||||
c.logger.Infof("pod disruption budget %q already exists", util.NameFromMeta(pdb.ObjectMeta))
|
||||
if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.podDisruptionBudgetName(), metav1.GetOptions{}); err != nil {
|
||||
if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.PrimaryPodDisruptionBudgetName(), metav1.GetOptions{}); err != nil {
|
||||
return fmt.Errorf("could not fetch existing %q pod disruption budget", util.NameFromMeta(pdb.ObjectMeta))
|
||||
}
|
||||
}
|
||||
|
||||
c.logger.Infof("created missing pod disruption budget %q", util.NameFromMeta(pdb.ObjectMeta))
|
||||
c.PodDisruptionBudget = pdb
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) syncCriticalOpPodDisruptionBudget(isUpdate bool) error {
|
||||
var (
|
||||
pdb *policyv1.PodDisruptionBudget
|
||||
err error
|
||||
)
|
||||
if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.criticalOpPodDisruptionBudgetName(), metav1.GetOptions{}); err == nil {
|
||||
c.CriticalOpPodDisruptionBudget = pdb
|
||||
newPDB := c.generateCriticalOpPodDisruptionBudget()
|
||||
match, reason := c.comparePodDisruptionBudget(pdb, newPDB)
|
||||
if !match {
|
||||
c.logPDBChanges(pdb, newPDB, isUpdate, reason)
|
||||
if err = c.updateCriticalOpPodDisruptionBudget(newPDB); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
c.CriticalOpPodDisruptionBudget = pdb
|
||||
}
|
||||
return nil
|
||||
|
||||
}
|
||||
if !k8sutil.ResourceNotFound(err) {
|
||||
return fmt.Errorf("could not get pod disruption budget: %v", err)
|
||||
}
|
||||
// no existing pod disruption budget, create new one
|
||||
c.logger.Infof("could not find pod disruption budget for critical operations")
|
||||
|
||||
if err = c.createCriticalOpPodDisruptionBudget(); err != nil {
|
||||
if !k8sutil.ResourceAlreadyExists(err) {
|
||||
return fmt.Errorf("could not create pod disruption budget for critical operations: %v", err)
|
||||
}
|
||||
c.logger.Infof("pod disruption budget %q already exists", util.NameFromMeta(pdb.ObjectMeta))
|
||||
if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.criticalOpPodDisruptionBudgetName(), metav1.GetOptions{}); err != nil {
|
||||
return fmt.Errorf("could not fetch existing %q pod disruption budget", util.NameFromMeta(pdb.ObjectMeta))
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) syncPodDisruptionBudgets(isUpdate bool) error {
|
||||
errors := make([]string, 0)
|
||||
|
||||
if err := c.syncPrimaryPodDisruptionBudget(isUpdate); err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
|
||||
if err := c.syncCriticalOpPodDisruptionBudget(isUpdate); err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
|
||||
if len(errors) > 0 {
|
||||
return fmt.Errorf("%v", strings.Join(errors, `', '`))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
|
|
@@ -340,6 +573,7 @@ func (c *Cluster) syncStatefulSet() error {
	)
	podsToRecreate := make([]v1.Pod, 0)
	isSafeToRecreatePods := true
	postponeReasons := make([]string, 0)
	switchoverCandidates := make([]spec.NamespacedName, 0)

	pods, err := c.listPods()
@@ -355,7 +589,6 @@ func (c *Cluster) syncStatefulSet() error {

	if err != nil {
		// statefulset does not exist, try to re-create it
		c.Statefulset = nil
		c.logger.Infof("cluster's statefulset does not exist")

		sset, err = c.createStatefulSet()
@@ -382,7 +615,7 @@ func (c *Cluster) syncStatefulSet() error {
	if err != nil {
		return fmt.Errorf("could not generate statefulset: %v", err)
	}
	c.logger.Debugf("syncing statefulsets")
	c.logger.Debug("syncing statefulsets")
	// check if there are still pods with a rolling update flag
	for _, pod := range pods {
		if c.getRollingUpdateFlagFromPod(&pod) {
@@ -397,7 +630,7 @@ func (c *Cluster) syncStatefulSet() error {
	}

	if len(podsToRecreate) > 0 {
		c.logger.Debugf("%d / %d pod(s) still need to be rotated", len(podsToRecreate), len(pods))
		c.logger.Infof("%d / %d pod(s) still need to be rotated", len(podsToRecreate), len(pods))
	}

	// statefulset is already there, make sure we use its definition in order to compare with the spec.
@@ -405,13 +638,22 @@ func (c *Cluster) syncStatefulSet() error {

	cmp := c.compareStatefulSetWith(desiredSts)
	if !cmp.rollingUpdate {
		updatedPodAnnotations := map[string]*string{}
		for _, anno := range cmp.deletedPodAnnotations {
			updatedPodAnnotations[anno] = nil
		}
		for anno, val := range desiredSts.Spec.Template.Annotations {
			updatedPodAnnotations[anno] = &val
		}
		metadataReq := map[string]map[string]map[string]*string{"metadata": {"annotations": updatedPodAnnotations}}
		patch, err := json.Marshal(metadataReq)
		if err != nil {
			return fmt.Errorf("could not form patch for pod annotations: %v", err)
		}

		for _, pod := range pods {
			if changed, _ := c.compareAnnotations(pod.Annotations, desiredSts.Spec.Template.Annotations); changed {
				patchData, err := metaAnnotationsPatch(desiredSts.Spec.Template.Annotations)
				if err != nil {
					return fmt.Errorf("could not form patch for pod %q annotations: %v", pod.Name, err)
				}
				_, err = c.KubeClient.Pods(pod.Namespace).Patch(context.TODO(), pod.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
			if changed, _ := c.compareAnnotations(pod.Annotations, desiredSts.Spec.Template.Annotations, nil); changed {
				_, err = c.KubeClient.Pods(c.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
				if err != nil {
					return fmt.Errorf("could not patch annotations for pod %q: %v", pod.Name, err)
				}
|
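For illustration only (not part of the diff; the annotation names are invented): marshalling a map[string]*string in which removed keys point to nil produces a patch body whose null entries delete those annotations when applied as a strategic merge patch, which is what the block above relies on.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		keep := "true"
		// a nil value deletes the key when the patch is applied, a non-nil pointer sets it
		annotations := map[string]*string{"obsolete-annotation": nil, "keep-annotation": &keep}
		patch, err := json.Marshal(map[string]map[string]map[string]*string{
			"metadata": {"annotations": annotations},
		})
		if err != nil {
			panic(err)
		}
		fmt.Println(string(patch))
		// {"metadata":{"annotations":{"keep-annotation":"true","obsolete-annotation":null}}}
	}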
@@ -490,12 +732,14 @@ func (c *Cluster) syncStatefulSet() error {
	c.logger.Debug("syncing Patroni config")
	if configPatched, restartPrimaryFirst, restartWait, err = c.syncPatroniConfig(pods, c.Spec.Patroni, requiredPgParameters); err != nil {
		c.logger.Warningf("Patroni config updated? %v - errors during config sync: %v", configPatched, err)
		postponeReasons = append(postponeReasons, "errors during Patroni config sync")
		isSafeToRecreatePods = false
	}

	// restart Postgres where it is still pending
	if err = c.restartInstances(pods, restartWait, restartPrimaryFirst); err != nil {
		c.logger.Errorf("errors while restarting Postgres in pods via Patroni API: %v", err)
		postponeReasons = append(postponeReasons, "errors while restarting Postgres via Patroni API")
		isSafeToRecreatePods = false
	}
@@ -503,14 +747,14 @@ func (c *Cluster) syncStatefulSet() error {
	// statefulset or those that got their configuration from the outdated statefulset)
	if len(podsToRecreate) > 0 {
		if isSafeToRecreatePods {
			c.logger.Debugln("performing rolling update")
			c.logger.Info("performing rolling update")
			c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "Update", "Performing rolling update")
			if err := c.recreatePods(podsToRecreate, switchoverCandidates); err != nil {
				return fmt.Errorf("could not recreate pods: %v", err)
			}
			c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "Update", "Rolling update done - pods have been recreated")
		} else {
			c.logger.Warningf("postpone pod recreation until next sync because of errors during config sync")
			c.logger.Warningf("postpone pod recreation until next sync - reason: %s", strings.Join(postponeReasons, `', '`))
		}
	}
@@ -720,7 +964,7 @@ func (c *Cluster) checkAndSetGlobalPostgreSQLConfiguration(pod *v1.Pod, effectiv
	// check if specified slots exist in config and if they differ
	for slotName, desiredSlot := range desiredPatroniConfig.Slots {
		// only add slots specified in manifest to c.replicationSlots
		for manifestSlotName, _ := range c.Spec.Patroni.Slots {
		for manifestSlotName := range c.Spec.Patroni.Slots {
			if manifestSlotName == slotName {
				c.replicationSlots[slotName] = desiredSlot
			}
@@ -816,7 +1060,7 @@ func (c *Cluster) syncStandbyClusterConfiguration() error {
	// carries the request to change configuration through
	for _, pod := range pods {
		podName := util.NameFromMeta(pod.ObjectMeta)
		c.logger.Debugf("patching Postgres config via Patroni API on pod %s with following options: %s",
		c.logger.Infof("patching Postgres config via Patroni API on pod %s with following options: %s",
			podName, standbyOptionsToSet)
		if err = c.patroni.SetStandbyClusterParameters(&pod, standbyOptionsToSet); err == nil {
			return nil
@@ -828,7 +1072,7 @@ func (c *Cluster) syncStandbyClusterConfiguration() error {
}

func (c *Cluster) syncSecrets() error {
	c.logger.Info("syncing secrets")
	c.logger.Debug("syncing secrets")
	c.setProcessName("syncing secrets")
	generatedSecrets := c.generateUserSecrets()
	retentionUsers := make([]string, 0)
@@ -838,7 +1082,7 @@ func (c *Cluster) syncSecrets() error {
		secret, err := c.KubeClient.Secrets(generatedSecret.Namespace).Create(context.TODO(), generatedSecret, metav1.CreateOptions{})
		if err == nil {
			c.Secrets[secret.UID] = secret
			c.logger.Debugf("created new secret %s, namespace: %s, uid: %s", util.NameFromMeta(secret.ObjectMeta), generatedSecret.Namespace, secret.UID)
			c.logger.Infof("created new secret %s, namespace: %s, uid: %s", util.NameFromMeta(secret.ObjectMeta), generatedSecret.Namespace, secret.UID)
			continue
		}
		if k8sutil.ResourceAlreadyExists(err) {
@ -972,23 +1216,30 @@ func (c *Cluster) updateSecret(
|
|||
userMap[userKey] = pwdUser
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(secret.ObjectMeta.OwnerReferences, generatedSecret.ObjectMeta.OwnerReferences) {
|
||||
updateSecret = true
|
||||
updateSecretMsg = fmt.Sprintf("secret %s owner references do not match the current ones", secretName)
|
||||
secret.ObjectMeta.OwnerReferences = generatedSecret.ObjectMeta.OwnerReferences
|
||||
}
|
||||
|
||||
if updateSecret {
|
||||
c.logger.Debugln(updateSecretMsg)
|
||||
if _, err = c.KubeClient.Secrets(secret.Namespace).Update(context.TODO(), secret, metav1.UpdateOptions{}); err != nil {
|
||||
c.logger.Infof(updateSecretMsg)
|
||||
if secret, err = c.KubeClient.Secrets(secret.Namespace).Update(context.TODO(), secret, metav1.UpdateOptions{}); err != nil {
|
||||
return fmt.Errorf("could not update secret %s: %v", secretName, err)
|
||||
}
|
||||
c.Secrets[secret.UID] = secret
|
||||
}
|
||||
|
||||
if changed, _ := c.compareAnnotations(secret.Annotations, generatedSecret.Annotations); changed {
|
||||
if changed, _ := c.compareAnnotations(secret.Annotations, generatedSecret.Annotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(generatedSecret.Annotations)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for secret %q annotations: %v", secret.Name, err)
|
||||
}
|
||||
_, err = c.KubeClient.Secrets(secret.Namespace).Patch(context.TODO(), secret.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
secret, err = c.KubeClient.Secrets(secret.Namespace).Patch(context.TODO(), secret.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not patch annotations for secret %q: %v", secret.Name, err)
|
||||
}
|
||||
c.Secrets[secret.UID] = secret
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
@ -1416,19 +1667,46 @@ func (c *Cluster) syncLogicalBackupJob() error {
|
|||
if err != nil {
|
||||
return fmt.Errorf("could not generate the desired logical backup job state: %v", err)
|
||||
}
|
||||
if match, reason := c.compareLogicalBackupJob(job, desiredJob); !match {
|
||||
if !reflect.DeepEqual(job.ObjectMeta.OwnerReferences, desiredJob.ObjectMeta.OwnerReferences) {
|
||||
c.logger.Info("new logical backup job's owner references do not match the current ones")
|
||||
job, err = c.KubeClient.CronJobs(job.Namespace).Update(context.TODO(), desiredJob, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not update owner references for logical backup job %q: %v", job.Name, err)
|
||||
}
|
||||
c.logger.Infof("logical backup job %s updated", c.getLogicalBackupJobName())
|
||||
}
|
||||
if cmp := c.compareLogicalBackupJob(job, desiredJob); !cmp.match {
|
||||
c.logger.Infof("logical job %s is not in the desired state and needs to be updated",
|
||||
c.getLogicalBackupJobName(),
|
||||
)
|
||||
if reason != "" {
|
||||
c.logger.Infof("reason: %s", reason)
|
||||
if len(cmp.reasons) != 0 {
|
||||
for _, reason := range cmp.reasons {
|
||||
c.logger.Infof("reason: %s", reason)
|
||||
}
|
||||
}
|
||||
if len(cmp.deletedPodAnnotations) != 0 {
|
||||
templateMetadataReq := map[string]map[string]map[string]map[string]map[string]map[string]map[string]*string{
|
||||
"spec": {"jobTemplate": {"spec": {"template": {"metadata": {"annotations": {}}}}}}}
|
||||
for _, anno := range cmp.deletedPodAnnotations {
|
||||
templateMetadataReq["spec"]["jobTemplate"]["spec"]["template"]["metadata"]["annotations"][anno] = nil
|
||||
}
|
||||
patch, err := json.Marshal(templateMetadataReq)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not marshal ObjectMeta for logical backup job %q pod template: %v", jobName, err)
|
||||
}
|
||||
|
||||
job, err = c.KubeClient.CronJobs(c.Namespace).Patch(context.TODO(), jobName, types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "")
|
||||
if err != nil {
|
||||
c.logger.Errorf("failed to remove annotations from the logical backup job %q pod template: %v", jobName, err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
if err = c.patchLogicalBackupJob(desiredJob); err != nil {
|
||||
return fmt.Errorf("could not update logical backup job to match desired state: %v", err)
|
||||
}
|
||||
c.logger.Info("the logical backup job is synced")
|
||||
}
|
||||
if changed, _ := c.compareAnnotations(job.Annotations, desiredJob.Annotations); changed {
|
||||
if changed, _ := c.compareAnnotations(job.Annotations, desiredJob.Annotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(desiredJob.Annotations)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for the logical backup job %q: %v", jobName, err)
|
||||
|
|
@ -1438,6 +1716,7 @@ func (c *Cluster) syncLogicalBackupJob() error {
|
|||
return fmt.Errorf("could not patch annotations of the logical backup job %q: %v", jobName, err)
|
||||
}
|
||||
}
|
||||
c.LogicalBackupJob = desiredJob
|
||||
return nil
|
||||
}
|
||||
if !k8sutil.ResourceNotFound(err) {
|
||||
|
|
@ -142,6 +142,181 @@ func TestSyncStatefulSetsAnnotations(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestPodAnnotationsSync(t *testing.T) {
|
||||
clusterName := "acid-test-cluster-2"
|
||||
namespace := "default"
|
||||
podAnnotation := "no-scale-down"
|
||||
podAnnotations := map[string]string{podAnnotation: "true"}
|
||||
customPodAnnotation := "foo"
|
||||
customPodAnnotations := map[string]string{customPodAnnotation: "true"}
|
||||
|
||||
ctrl := gomock.NewController(t)
|
||||
defer ctrl.Finish()
|
||||
mockClient := mocks.NewMockHTTPClient(ctrl)
|
||||
client, _ := newFakeK8sAnnotationsClient()
|
||||
|
||||
pg := acidv1.Postgresql{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: clusterName,
|
||||
Namespace: namespace,
|
||||
},
|
||||
Spec: acidv1.PostgresSpec{
|
||||
Volume: acidv1.Volume{
|
||||
Size: "1Gi",
|
||||
},
|
||||
EnableConnectionPooler: boolToPointer(true),
|
||||
EnableLogicalBackup: true,
|
||||
EnableReplicaConnectionPooler: boolToPointer(true),
|
||||
PodAnnotations: podAnnotations,
|
||||
NumberOfInstances: 2,
|
||||
},
|
||||
}
|
||||
|
||||
var cluster = New(
|
||||
Config{
|
||||
OpConfig: config.Config{
|
||||
PatroniAPICheckInterval: time.Duration(1),
|
||||
PatroniAPICheckTimeout: time.Duration(5),
|
||||
PodManagementPolicy: "ordered_ready",
|
||||
CustomPodAnnotations: customPodAnnotations,
|
||||
ConnectionPooler: config.ConnectionPooler{
|
||||
ConnectionPoolerDefaultCPURequest: "100m",
|
||||
ConnectionPoolerDefaultCPULimit: "100m",
|
||||
ConnectionPoolerDefaultMemoryRequest: "100Mi",
|
||||
ConnectionPoolerDefaultMemoryLimit: "100Mi",
|
||||
NumberOfInstances: k8sutil.Int32ToPointer(1),
|
||||
},
|
||||
Resources: config.Resources{
|
||||
ClusterLabels: map[string]string{"application": "spilo"},
|
||||
ClusterNameLabel: "cluster-name",
|
||||
DefaultCPURequest: "300m",
|
||||
DefaultCPULimit: "300m",
|
||||
DefaultMemoryRequest: "300Mi",
|
||||
DefaultMemoryLimit: "300Mi",
|
||||
MaxInstances: -1,
|
||||
PodRoleLabel: "spilo-role",
|
||||
ResourceCheckInterval: time.Duration(3),
|
||||
ResourceCheckTimeout: time.Duration(10),
|
||||
},
|
||||
},
|
||||
}, client, pg, logger, eventRecorder)
|
||||
|
||||
configJson := `{"postgresql": {"parameters": {"log_min_duration_statement": 200, "max_connections": 50}}}, "ttl": 20}`
|
||||
response := http.Response{
|
||||
StatusCode: 200,
|
||||
Body: io.NopCloser(bytes.NewReader([]byte(configJson))),
|
||||
}
|
||||
|
||||
mockClient.EXPECT().Do(gomock.Any()).Return(&response, nil).AnyTimes()
|
||||
cluster.patroni = patroni.New(patroniLogger, mockClient)
|
||||
cluster.Name = clusterName
|
||||
cluster.Namespace = namespace
|
||||
clusterOptions := clusterLabelsOptions(cluster)
|
||||
|
||||
// create a statefulset
|
||||
_, err := cluster.createStatefulSet()
|
||||
assert.NoError(t, err)
|
||||
// create pods
|
||||
podsList := createPods(cluster)
|
||||
for _, pod := range podsList {
|
||||
_, err = cluster.KubeClient.Pods(namespace).Create(context.TODO(), &pod, metav1.CreateOptions{})
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
// create connection pooler
|
||||
_, err = cluster.createConnectionPooler(mockInstallLookupFunction)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// create cron job
|
||||
err = cluster.createLogicalBackupJob()
|
||||
assert.NoError(t, err)
|
||||
|
||||
annotateResources(cluster)
|
||||
err = cluster.Sync(&cluster.Postgresql)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// 1. PodAnnotations set
|
||||
stsList, err := cluster.KubeClient.StatefulSets(namespace).List(context.TODO(), clusterOptions)
|
||||
assert.NoError(t, err)
|
||||
for _, sts := range stsList.Items {
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.Contains(t, sts.Spec.Template.Annotations, annotation)
|
||||
}
|
||||
}
|
||||
|
||||
for _, role := range []PostgresRole{Master, Replica} {
|
||||
deploy, err := cluster.KubeClient.Deployments(namespace).Get(context.TODO(), cluster.connectionPoolerName(role), metav1.GetOptions{})
|
||||
assert.NoError(t, err)
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.Contains(t, deploy.Spec.Template.Annotations, annotation,
|
||||
fmt.Sprintf("pooler deployment pod template %s should contain annotation %s, found %#v",
|
||||
deploy.Name, annotation, deploy.Spec.Template.Annotations))
|
||||
}
|
||||
}
|
||||
|
||||
podList, err := cluster.KubeClient.Pods(namespace).List(context.TODO(), clusterOptions)
|
||||
assert.NoError(t, err)
|
||||
for _, pod := range podList.Items {
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.Contains(t, pod.Annotations, annotation,
|
||||
fmt.Sprintf("pod %s should contain annotation %s, found %#v", pod.Name, annotation, pod.Annotations))
|
||||
}
|
||||
}
|
||||
|
||||
cronJobList, err := cluster.KubeClient.CronJobs(namespace).List(context.TODO(), clusterOptions)
|
||||
assert.NoError(t, err)
|
||||
for _, cronJob := range cronJobList.Items {
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.Contains(t, cronJob.Spec.JobTemplate.Spec.Template.Annotations, annotation,
|
||||
fmt.Sprintf("logical backup cron job's pod template should contain annotation %s, found %#v",
|
||||
annotation, cronJob.Spec.JobTemplate.Spec.Template.Annotations))
|
||||
}
|
||||
}
|
||||
|
||||
// 2. PodAnnotations removed
|
||||
newSpec := cluster.Postgresql.DeepCopy()
|
||||
newSpec.Spec.PodAnnotations = nil
|
||||
cluster.OpConfig.CustomPodAnnotations = nil
|
||||
err = cluster.Sync(newSpec)
|
||||
assert.NoError(t, err)
|
||||
|
||||
stsList, err = cluster.KubeClient.StatefulSets(namespace).List(context.TODO(), clusterOptions)
|
||||
assert.NoError(t, err)
|
||||
for _, sts := range stsList.Items {
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.NotContains(t, sts.Spec.Template.Annotations, annotation)
|
||||
}
|
||||
}
|
||||
|
||||
for _, role := range []PostgresRole{Master, Replica} {
|
||||
deploy, err := cluster.KubeClient.Deployments(namespace).Get(context.TODO(), cluster.connectionPoolerName(role), metav1.GetOptions{})
|
||||
assert.NoError(t, err)
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.NotContains(t, deploy.Spec.Template.Annotations, annotation,
|
||||
fmt.Sprintf("pooler deployment pod template %s should not contain annotation %s, found %#v",
|
||||
deploy.Name, annotation, deploy.Spec.Template.Annotations))
|
||||
}
|
||||
}
|
||||
|
||||
podList, err = cluster.KubeClient.Pods(namespace).List(context.TODO(), clusterOptions)
|
||||
assert.NoError(t, err)
|
||||
for _, pod := range podList.Items {
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.NotContains(t, pod.Annotations, annotation,
|
||||
fmt.Sprintf("pod %s should not contain annotation %s, found %#v", pod.Name, annotation, pod.Annotations))
|
||||
}
|
||||
}
|
||||
|
||||
cronJobList, err = cluster.KubeClient.CronJobs(namespace).List(context.TODO(), clusterOptions)
|
||||
assert.NoError(t, err)
|
||||
for _, cronJob := range cronJobList.Items {
|
||||
for _, annotation := range []string{podAnnotation, customPodAnnotation} {
|
||||
assert.NotContains(t, cronJob.Spec.JobTemplate.Spec.Template.Annotations, annotation,
|
||||
fmt.Sprintf("logical backup cron job's pod template should not contain annotation %s, found %#v",
|
||||
annotation, cronJob.Spec.JobTemplate.Spec.Template.Annotations))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckAndSetGlobalPostgreSQLConfiguration(t *testing.T) {
|
||||
testName := "test config comparison"
|
||||
client, _ := newFakeK8sSyncClient()
|
||||
|
|
@ -644,7 +819,7 @@ func TestUpdateSecret(t *testing.T) {
|
|||
ApplicationId: appId,
|
||||
Database: dbname,
|
||||
Tables: map[string]acidv1.StreamTable{
|
||||
"data.foo": acidv1.StreamTable{
|
||||
"data.foo": {
|
||||
EventType: "stream-type-b",
|
||||
},
|
||||
},
|
||||
|
|
@ -17,6 +17,7 @@ const (
|
|||
// spilo roles
|
||||
Master PostgresRole = "master"
|
||||
Replica PostgresRole = "replica"
|
||||
Patroni PostgresRole = "config"
|
||||
|
||||
// roles returned by Patroni cluster endpoint
|
||||
Leader PostgresRole = "leader"
|
||||
|
|
@ -57,15 +58,16 @@ type WorkerStatus struct {
|
|||
|
||||
// ClusterStatus describes status of the cluster
|
||||
type ClusterStatus struct {
|
||||
Team string
|
||||
Cluster string
|
||||
Namespace string
|
||||
MasterService *v1.Service
|
||||
ReplicaService *v1.Service
|
||||
MasterEndpoint *v1.Endpoints
|
||||
ReplicaEndpoint *v1.Endpoints
|
||||
StatefulSet *appsv1.StatefulSet
|
||||
PodDisruptionBudget *policyv1.PodDisruptionBudget
|
||||
Team string
|
||||
Cluster string
|
||||
Namespace string
|
||||
MasterService *v1.Service
|
||||
ReplicaService *v1.Service
|
||||
MasterEndpoint *v1.Endpoints
|
||||
ReplicaEndpoint *v1.Endpoints
|
||||
StatefulSet *appsv1.StatefulSet
|
||||
PrimaryPodDisruptionBudget *policyv1.PodDisruptionBudget
|
||||
CriticalOpPodDisruptionBudget *policyv1.PodDisruptionBudget
|
||||
|
||||
CurrentProcess Process
|
||||
Worker uint32
|
||||
|
|
@ -176,6 +176,10 @@ func (c *Cluster) logPDBChanges(old, new *policyv1.PodDisruptionBudget, isUpdate
|
|||
}
|
||||
|
||||
logNiceDiff(c.logger, old.Spec, new.Spec)
|
||||
|
||||
if reason != "" {
|
||||
c.logger.Infof("reason: %s", reason)
|
||||
}
|
||||
}
|
||||
|
||||
func logNiceDiff(log *logrus.Entry, old, new interface{}) {
|
||||
|
|
@ -189,7 +193,7 @@ func logNiceDiff(log *logrus.Entry, old, new interface{}) {
|
|||
nice := nicediff.Diff(string(o), string(n), true)
|
||||
for _, s := range strings.Split(nice, "\n") {
|
||||
// " is not needed in the value to understand
|
||||
log.Debugf(strings.ReplaceAll(s, "\"", ""))
|
||||
log.Debug(strings.ReplaceAll(s, "\"", ""))
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -205,7 +209,7 @@ func (c *Cluster) logStatefulSetChanges(old, new *appsv1.StatefulSet, isUpdate b
|
|||
logNiceDiff(c.logger, old.Spec, new.Spec)
|
||||
|
||||
if !reflect.DeepEqual(old.Annotations, new.Annotations) {
|
||||
c.logger.Debugf("metadata.annotation are different")
|
||||
c.logger.Debug("metadata.annotation are different")
|
||||
logNiceDiff(c.logger, old.Annotations, new.Annotations)
|
||||
}
|
||||
|
||||
|
|
@ -276,7 +280,7 @@ func (c *Cluster) getTeamMembers(teamID string) ([]string, error) {
|
|||
}
|
||||
|
||||
if !c.OpConfig.EnableTeamsAPI {
|
||||
c.logger.Debugf("team API is disabled")
|
||||
c.logger.Debug("team API is disabled")
|
||||
return members, nil
|
||||
}
|
||||
|
||||
|
|
@ -412,7 +416,7 @@ func (c *Cluster) _waitPodLabelsReady(anyReplica bool) error {
|
|||
podsNumber = len(pods.Items)
|
||||
c.logger.Debugf("Waiting for %d pods to become ready", podsNumber)
|
||||
} else {
|
||||
c.logger.Debugf("Waiting for any replica pod to become ready")
|
||||
c.logger.Debug("Waiting for any replica pod to become ready")
|
||||
}
|
||||
|
||||
err := retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout,
|
||||
|
|
@ -445,10 +449,6 @@ func (c *Cluster) _waitPodLabelsReady(anyReplica bool) error {
|
|||
return err
|
||||
}
|
||||
|
||||
func (c *Cluster) waitForAnyReplicaLabelReady() error {
|
||||
return c._waitPodLabelsReady(true)
|
||||
}
|
||||
|
||||
func (c *Cluster) waitForAllPodsLabelReady() error {
|
||||
return c._waitPodLabelsReady(false)
|
||||
}
|
||||
|
|
@ -662,3 +662,24 @@ func parseResourceRequirements(resourcesRequirement v1.ResourceRequirements) (ac
|
|||
}
|
||||
return resources, nil
|
||||
}
|
||||
|
||||
func isInMaintenanceWindow(specMaintenanceWindows []acidv1.MaintenanceWindow) bool {
|
||||
if len(specMaintenanceWindows) == 0 {
|
||||
return true
|
||||
}
|
||||
now := time.Now()
|
||||
currentDay := now.Weekday()
|
||||
currentTime := now.Format("15:04")
|
||||
|
||||
for _, window := range specMaintenanceWindows {
|
||||
startTime := window.StartTime.Format("15:04")
|
||||
endTime := window.EndTime.Format("15:04")
|
||||
|
||||
if window.Everyday || window.Weekday == currentDay {
|
||||
if currentTime >= startTime && currentTime <= endTime {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
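A hypothetical usage sketch, not part of the diff: a caller could gate a disruptive action on the configured windows, reusing the acidv1.MaintenanceWindow fields shown above and the mustParseTime test helper defined further down.

	windows := []acidv1.MaintenanceWindow{
		{Everyday: true, StartTime: mustParseTime("01:00"), EndTime: mustParseTime("05:00")},
	}
	if !isInMaintenanceWindow(windows) {
		// postpone the operation until the next configured window opens
		return nil
	}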
|
||||
@ -16,17 +16,28 @@ import (
|
|||
"github.com/zalando/postgres-operator/mocks"
|
||||
acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
|
||||
fakeacidv1 "github.com/zalando/postgres-operator/pkg/generated/clientset/versioned/fake"
|
||||
"github.com/zalando/postgres-operator/pkg/util"
|
||||
"github.com/zalando/postgres-operator/pkg/util/config"
|
||||
"github.com/zalando/postgres-operator/pkg/util/k8sutil"
|
||||
"github.com/zalando/postgres-operator/pkg/util/patroni"
|
||||
v1 "k8s.io/api/core/v1"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/labels"
|
||||
"k8s.io/apimachinery/pkg/types"
|
||||
k8sFake "k8s.io/client-go/kubernetes/fake"
|
||||
)
|
||||
|
||||
var externalAnnotations = map[string]string{"existing": "annotation"}
|
||||
|
||||
func mustParseTime(s string) metav1.Time {
|
||||
v, err := time.Parse("15:04", s)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
return metav1.Time{Time: v.UTC()}
|
||||
}
|
||||
|
||||
func newFakeK8sAnnotationsClient() (k8sutil.KubernetesClient, *k8sFake.Clientset) {
|
||||
clientSet := k8sFake.NewSimpleClientset()
|
||||
acidClientSet := fakeacidv1.NewSimpleClientset()
|
||||
|
|
@ -40,8 +51,10 @@ func newFakeK8sAnnotationsClient() (k8sutil.KubernetesClient, *k8sFake.Clientset
|
|||
PersistentVolumeClaimsGetter: clientSet.CoreV1(),
|
||||
PersistentVolumesGetter: clientSet.CoreV1(),
|
||||
EndpointsGetter: clientSet.CoreV1(),
|
||||
ConfigMapsGetter: clientSet.CoreV1(),
|
||||
PodsGetter: clientSet.CoreV1(),
|
||||
DeploymentsGetter: clientSet.AppsV1(),
|
||||
CronJobsGetter: clientSet.BatchV1(),
|
||||
}, clientSet
|
||||
}
|
||||
|
||||
|
|
@ -56,12 +69,8 @@ func checkResourcesInheritedAnnotations(cluster *Cluster, resultAnnotations map[
|
|||
clusterOptions := clusterLabelsOptions(cluster)
|
||||
// helper functions
|
||||
containsAnnotations := func(expected map[string]string, actual map[string]string, objName string, objType string) error {
|
||||
if expected == nil {
|
||||
if len(actual) != 0 {
|
||||
return fmt.Errorf("%s %v expected not to have any annotations, got: %#v", objType, objName, actual)
|
||||
}
|
||||
} else if !(reflect.DeepEqual(expected, actual)) {
|
||||
return fmt.Errorf("%s %v expected annotations: %#v, got: %#v", objType, objName, expected, actual)
|
||||
if !util.MapContains(actual, expected) {
|
||||
return fmt.Errorf("%s %v expected annotations %#v to be contained in %#v", objType, objName, expected, actual)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
|
@ -167,6 +176,22 @@ func checkResourcesInheritedAnnotations(cluster *Cluster, resultAnnotations map[
|
|||
return nil
|
||||
}
|
||||
|
||||
checkCronJob := func(annotations map[string]string) error {
|
||||
cronJobList, err := cluster.KubeClient.CronJobs(namespace).List(context.TODO(), clusterOptions)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, cronJob := range cronJobList.Items {
|
||||
if err := containsAnnotations(annotations, cronJob.Annotations, cronJob.ObjectMeta.Name, "Logical backup cron job"); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := containsAnnotations(updateAnnotations(annotations), cronJob.Spec.JobTemplate.Spec.Template.Annotations, cronJob.Name, "Logical backup cron job pod template"); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
checkSecrets := func(annotations map[string]string) error {
|
||||
secretList, err := cluster.KubeClient.Secrets(namespace).List(context.TODO(), clusterOptions)
|
||||
if err != nil {
|
||||
|
|
@ -193,8 +218,21 @@ func checkResourcesInheritedAnnotations(cluster *Cluster, resultAnnotations map[
|
|||
return nil
|
||||
}
|
||||
|
||||
checkConfigMaps := func(annotations map[string]string) error {
|
||||
cmList, err := cluster.KubeClient.ConfigMaps(namespace).List(context.TODO(), clusterOptions)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, cm := range cmList.Items {
|
||||
if err := containsAnnotations(annotations, cm.Annotations, cm.ObjectMeta.Name, "ConfigMap"); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
checkFuncs := []func(map[string]string) error{
|
||||
checkSts, checkPods, checkSvc, checkPdb, checkPooler, checkPvc, checkSecrets, checkEndpoints,
|
||||
checkSts, checkPods, checkSvc, checkPdb, checkPooler, checkCronJob, checkPvc, checkSecrets, checkEndpoints, checkConfigMaps,
|
||||
}
|
||||
for _, f := range checkFuncs {
|
||||
if err := f(resultAnnotations); err != nil {
|
||||
|
|
@ -209,18 +247,18 @@ func createPods(cluster *Cluster) []v1.Pod {
|
|||
for i, role := range []PostgresRole{Master, Replica} {
|
||||
podsList = append(podsList, v1.Pod{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: fmt.Sprintf("%s-%d", clusterName, i),
|
||||
Name: fmt.Sprintf("%s-%d", cluster.Name, i),
|
||||
Namespace: namespace,
|
||||
Labels: map[string]string{
|
||||
"application": "spilo",
|
||||
"cluster-name": clusterName,
|
||||
"cluster-name": cluster.Name,
|
||||
"spilo-role": string(role),
|
||||
},
|
||||
},
|
||||
})
|
||||
podsList = append(podsList, v1.Pod{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: fmt.Sprintf("%s-pooler-%s", clusterName, role),
|
||||
Name: fmt.Sprintf("%s-pooler-%s", cluster.Name, role),
|
||||
Namespace: namespace,
|
||||
Labels: cluster.connectionPoolerLabels(role, true).MatchLabels,
|
||||
},
|
||||
|
|
@ -242,6 +280,7 @@ func newInheritedAnnotationsCluster(client k8sutil.KubernetesClient) (*Cluster,
|
|||
Spec: acidv1.PostgresSpec{
|
||||
EnableConnectionPooler: boolToPointer(true),
|
||||
EnableReplicaConnectionPooler: boolToPointer(true),
|
||||
EnableLogicalBackup: true,
|
||||
Volume: acidv1.Volume{
|
||||
Size: "1Gi",
|
||||
},
|
||||
|
|
@ -254,6 +293,7 @@ func newInheritedAnnotationsCluster(client k8sutil.KubernetesClient) (*Cluster,
|
|||
OpConfig: config.Config{
|
||||
PatroniAPICheckInterval: time.Duration(1),
|
||||
PatroniAPICheckTimeout: time.Duration(5),
|
||||
KubernetesUseConfigMaps: true,
|
||||
ConnectionPooler: config.ConnectionPooler{
|
||||
ConnectionPoolerDefaultCPURequest: "100m",
|
||||
ConnectionPoolerDefaultCPULimit: "100m",
|
||||
|
|
@ -289,7 +329,7 @@ func newInheritedAnnotationsCluster(client k8sutil.KubernetesClient) (*Cluster,
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
_, err = cluster.createPodDisruptionBudget()
|
||||
err = cluster.createPodDisruptionBudgets()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
|
@ -297,6 +337,10 @@ func newInheritedAnnotationsCluster(client k8sutil.KubernetesClient) (*Cluster,
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = cluster.createLogicalBackupJob()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pvcList := CreatePVCs(namespace, clusterName, cluster.labelsSet(false), 2, "1Gi")
|
||||
for _, pvc := range pvcList.Items {
|
||||
_, err = cluster.KubeClient.PersistentVolumeClaims(namespace).Create(context.TODO(), &pvc, metav1.CreateOptions{})
|
||||
|
|
@ -312,11 +356,60 @@ func newInheritedAnnotationsCluster(client k8sutil.KubernetesClient) (*Cluster,
|
|||
}
|
||||
}
|
||||
|
||||
// resources which Patroni creates
|
||||
if err = createPatroniResources(cluster); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return cluster, nil
|
||||
}
|
||||
|
||||
func createPatroniResources(cluster *Cluster) error {
|
||||
patroniService := cluster.generateService(Replica, &pg.Spec)
|
||||
patroniService.ObjectMeta.Name = cluster.serviceName(Patroni)
|
||||
_, err := cluster.KubeClient.Services(namespace).Create(context.TODO(), patroniService, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for _, suffix := range patroniObjectSuffixes {
|
||||
metadata := metav1.ObjectMeta{
|
||||
Name: fmt.Sprintf("%s-%s", clusterName, suffix),
|
||||
Namespace: namespace,
|
||||
Annotations: map[string]string{
|
||||
"initialize": "123456789",
|
||||
},
|
||||
Labels: cluster.labelsSet(false),
|
||||
}
|
||||
|
||||
if cluster.OpConfig.KubernetesUseConfigMaps {
|
||||
configMap := v1.ConfigMap{
|
||||
ObjectMeta: metadata,
|
||||
}
|
||||
_, err := cluster.KubeClient.ConfigMaps(namespace).Create(context.TODO(), &configMap, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
endpoints := v1.Endpoints{
|
||||
ObjectMeta: metadata,
|
||||
}
|
||||
_, err := cluster.KubeClient.Endpoints(namespace).Create(context.TODO(), &endpoints, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func annotateResources(cluster *Cluster) error {
|
||||
clusterOptions := clusterLabelsOptions(cluster)
|
||||
patchData, err := metaAnnotationsPatch(externalAnnotations)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stsList, err := cluster.KubeClient.StatefulSets(namespace).List(context.TODO(), clusterOptions)
|
||||
if err != nil {
|
||||
|
|
@ -324,7 +417,7 @@ func annotateResources(cluster *Cluster) error {
|
|||
}
|
||||
for _, sts := range stsList.Items {
|
||||
sts.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.StatefulSets(namespace).Update(context.TODO(), &sts, metav1.UpdateOptions{}); err != nil {
|
||||
if _, err = cluster.KubeClient.StatefulSets(namespace).Patch(context.TODO(), sts.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
@ -335,7 +428,7 @@ func annotateResources(cluster *Cluster) error {
|
|||
}
|
||||
for _, pod := range podList.Items {
|
||||
pod.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.Pods(namespace).Update(context.TODO(), &pod, metav1.UpdateOptions{}); err != nil {
|
||||
if _, err = cluster.KubeClient.Pods(namespace).Patch(context.TODO(), pod.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
@ -346,7 +439,7 @@ func annotateResources(cluster *Cluster) error {
|
|||
}
|
||||
for _, svc := range svcList.Items {
|
||||
svc.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.Services(namespace).Update(context.TODO(), &svc, metav1.UpdateOptions{}); err != nil {
|
||||
if _, err = cluster.KubeClient.Services(namespace).Patch(context.TODO(), svc.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
@ -357,7 +450,19 @@ func annotateResources(cluster *Cluster) error {
|
|||
}
|
||||
for _, pdb := range pdbList.Items {
|
||||
pdb.Annotations = externalAnnotations
|
||||
_, err = cluster.KubeClient.PodDisruptionBudgets(namespace).Update(context.TODO(), &pdb, metav1.UpdateOptions{})
|
||||
_, err = cluster.KubeClient.PodDisruptionBudgets(namespace).Patch(context.TODO(), pdb.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
cronJobList, err := cluster.KubeClient.CronJobs(namespace).List(context.TODO(), clusterOptions)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, cronJob := range cronJobList.Items {
|
||||
cronJob.Annotations = externalAnnotations
|
||||
_, err = cluster.KubeClient.CronJobs(namespace).Patch(context.TODO(), cronJob.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
|
@ -369,7 +474,7 @@ func annotateResources(cluster *Cluster) error {
|
|||
}
|
||||
for _, pvc := range pvcList.Items {
|
||||
pvc.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.PersistentVolumeClaims(namespace).Update(context.TODO(), &pvc, metav1.UpdateOptions{}); err != nil {
|
||||
if _, err = cluster.KubeClient.PersistentVolumeClaims(namespace).Patch(context.TODO(), pvc.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
@ -380,7 +485,7 @@ func annotateResources(cluster *Cluster) error {
|
|||
return err
|
||||
}
|
||||
deploy.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.Deployments(namespace).Update(context.TODO(), deploy, metav1.UpdateOptions{}); err != nil {
|
||||
if _, err = cluster.KubeClient.Deployments(namespace).Patch(context.TODO(), deploy.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
@ -391,7 +496,7 @@ func annotateResources(cluster *Cluster) error {
|
|||
}
|
||||
for _, secret := range secrets.Items {
|
||||
secret.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.Secrets(namespace).Update(context.TODO(), &secret, metav1.UpdateOptions{}); err != nil {
|
||||
if _, err = cluster.KubeClient.Secrets(namespace).Patch(context.TODO(), secret.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
@ -402,10 +507,22 @@ func annotateResources(cluster *Cluster) error {
|
|||
}
|
||||
for _, ep := range endpoints.Items {
|
||||
ep.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.Endpoints(namespace).Update(context.TODO(), &ep, metav1.UpdateOptions{}); err != nil {
|
||||
if _, err = cluster.KubeClient.Endpoints(namespace).Patch(context.TODO(), ep.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
configMaps, err := cluster.KubeClient.ConfigMaps(namespace).List(context.TODO(), clusterOptions)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, cm := range configMaps.Items {
|
||||
cm.Annotations = externalAnnotations
|
||||
if _, err = cluster.KubeClient.ConfigMaps(namespace).Patch(context.TODO(), cm.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
|
@ -472,7 +589,18 @@ func TestInheritedAnnotations(t *testing.T) {
|
|||
err = checkResourcesInheritedAnnotations(cluster, result)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// 3. Existing annotations (should not be removed)
|
||||
// 3. Change from ConfigMaps to Endpoints
|
||||
err = cluster.deletePatroniResources()
|
||||
assert.NoError(t, err)
|
||||
cluster.OpConfig.KubernetesUseConfigMaps = false
|
||||
err = createPatroniResources(cluster)
|
||||
assert.NoError(t, err)
|
||||
err = cluster.Sync(newSpec.DeepCopy())
|
||||
assert.NoError(t, err)
|
||||
err = checkResourcesInheritedAnnotations(cluster, result)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// 4. Existing annotations (should not be removed)
|
||||
err = annotateResources(cluster)
|
||||
assert.NoError(t, err)
|
||||
maps.Copy(result, externalAnnotations)
|
||||
|
|
@ -521,3 +649,65 @@ func Test_trimCronjobName(t *testing.T) {
|
|||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsInMaintenanceWindow(t *testing.T) {
|
||||
now := time.Now()
|
||||
futureTimeStart := now.Add(1 * time.Hour)
|
||||
futureTimeStartFormatted := futureTimeStart.Format("15:04")
|
||||
futureTimeEnd := now.Add(2 * time.Hour)
|
||||
futureTimeEndFormatted := futureTimeEnd.Format("15:04")
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
windows []acidv1.MaintenanceWindow
|
||||
expected bool
|
||||
}{
|
||||
{
|
||||
name: "no maintenance windows",
|
||||
windows: nil,
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "maintenance windows with everyday",
|
||||
windows: []acidv1.MaintenanceWindow{
|
||||
{
|
||||
Everyday: true,
|
||||
StartTime: mustParseTime("00:00"),
|
||||
EndTime: mustParseTime("23:59"),
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "maintenance windows with weekday",
|
||||
windows: []acidv1.MaintenanceWindow{
|
||||
{
|
||||
Weekday: now.Weekday(),
|
||||
StartTime: mustParseTime("00:00"),
|
||||
EndTime: mustParseTime("23:59"),
|
||||
},
|
||||
},
|
||||
expected: true,
|
||||
},
|
||||
{
|
||||
name: "maintenance windows with future interval time",
|
||||
windows: []acidv1.MaintenanceWindow{
|
||||
{
|
||||
Weekday: now.Weekday(),
|
||||
StartTime: mustParseTime(futureTimeStartFormatted),
|
||||
EndTime: mustParseTime(futureTimeEndFormatted),
|
||||
},
|
||||
},
|
||||
expected: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
cluster.Spec.MaintenanceWindows = tt.windows
|
||||
if isInMaintenanceWindow(cluster.Spec.MaintenanceWindows) != tt.expected {
|
||||
t.Errorf("Expected isInMaintenanceWindow to return %t", tt.expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
@ -13,9 +13,9 @@ import (
|
|||
|
||||
"github.com/aws/aws-sdk-go/aws"
|
||||
"github.com/zalando/postgres-operator/pkg/spec"
|
||||
"github.com/zalando/postgres-operator/pkg/util"
|
||||
"github.com/zalando/postgres-operator/pkg/util/constants"
|
||||
"github.com/zalando/postgres-operator/pkg/util/filesystems"
|
||||
"github.com/zalando/postgres-operator/pkg/util/k8sutil"
|
||||
"github.com/zalando/postgres-operator/pkg/util/volumes"
|
||||
)
|
||||
|
||||
|
|
@ -66,7 +66,7 @@ func (c *Cluster) syncVolumes() error {
|
|||
}
|
||||
|
||||
func (c *Cluster) syncUnderlyingEBSVolume() error {
|
||||
c.logger.Infof("starting to sync EBS volumes: type, iops, throughput, and size")
|
||||
c.logger.Debug("starting to sync EBS volumes: type, iops, throughput, and size")
|
||||
|
||||
var (
|
||||
err error
|
||||
|
|
@ -136,7 +136,7 @@ func (c *Cluster) syncUnderlyingEBSVolume() error {
|
|||
}
|
||||
|
||||
func (c *Cluster) populateVolumeMetaData() error {
|
||||
c.logger.Infof("starting reading ebs meta data")
|
||||
c.logger.Debug("starting reading ebs meta data")
|
||||
|
||||
pvs, err := c.listPersistentVolumes()
|
||||
if err != nil {
|
||||
|
|
@ -151,7 +151,7 @@ func (c *Cluster) populateVolumeMetaData() error {
|
|||
volumeIds := []string{}
|
||||
var volumeID string
|
||||
for _, pv := range pvs {
|
||||
volumeID, err = c.VolumeResizer.ExtractVolumeID(pv.Spec.AWSElasticBlockStore.VolumeID)
|
||||
volumeID, err = c.VolumeResizer.GetProviderVolumeID(pv)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
|
@ -165,7 +165,7 @@ func (c *Cluster) populateVolumeMetaData() error {
|
|||
}
|
||||
|
||||
if len(currentVolumes) != len(c.EBSVolumes) && len(c.EBSVolumes) > 0 {
|
||||
c.logger.Debugf("number of ebs volumes (%d) discovered differs from already known volumes (%d)", len(currentVolumes), len(c.EBSVolumes))
|
||||
c.logger.Infof("number of ebs volumes (%d) discovered differs from already known volumes (%d)", len(currentVolumes), len(c.EBSVolumes))
|
||||
}
|
||||
|
||||
// reset map, operator is not responsible for dangling ebs volumes
|
||||
|
|
@ -185,8 +185,7 @@ func (c *Cluster) syncVolumeClaims() error {
|
|||
|
||||
if c.OpConfig.StorageResizeMode == "off" || c.OpConfig.StorageResizeMode == "ebs" {
|
||||
ignoreResize = true
|
||||
c.logger.Debugf("Storage resize mode is set to %q. Skipping volume size sync of PVCs.", c.OpConfig.StorageResizeMode)
|
||||
|
||||
c.logger.Debugf("Storage resize mode is set to %q. Skipping volume size sync of persistent volume claims.", c.OpConfig.StorageResizeMode)
|
||||
}
|
||||
|
||||
newSize, err := resource.ParseQuantity(c.Spec.Volume.Size)
|
||||
|
|
@ -197,45 +196,49 @@ func (c *Cluster) syncVolumeClaims() error {
|
|||
|
||||
pvcs, err := c.listPersistentVolumeClaims()
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not receive persistent volume claims: %v", err)
|
||||
return fmt.Errorf("could not list persistent volume claims: %v", err)
|
||||
}
|
||||
for _, pvc := range pvcs {
|
||||
c.VolumeClaims[pvc.UID] = &pvc
|
||||
needsUpdate := false
|
||||
currentSize := quantityToGigabyte(pvc.Spec.Resources.Requests[v1.ResourceStorage])
|
||||
if !ignoreResize && currentSize != manifestSize {
|
||||
if currentSize < manifestSize {
|
||||
pvc.Spec.Resources.Requests[v1.ResourceStorage] = newSize
|
||||
needsUpdate = true
|
||||
c.logger.Debugf("persistent volume claim for volume %q needs to be resized", pvc.Name)
|
||||
c.logger.Infof("persistent volume claim for volume %q needs to be resized", pvc.Name)
|
||||
} else {
|
||||
c.logger.Warningf("cannot shrink persistent volume")
|
||||
}
|
||||
}
|
||||
|
||||
if needsUpdate {
|
||||
c.logger.Debugf("updating persistent volume claim definition for volume %q", pvc.Name)
|
||||
if _, err := c.KubeClient.PersistentVolumeClaims(pvc.Namespace).Update(context.TODO(), &pvc, metav1.UpdateOptions{}); err != nil {
|
||||
c.logger.Infof("updating persistent volume claim definition for volume %q", pvc.Name)
|
||||
updatedPvc, err := c.KubeClient.PersistentVolumeClaims(pvc.Namespace).Update(context.TODO(), &pvc, metav1.UpdateOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not update persistent volume claim: %q", err)
|
||||
}
|
||||
c.logger.Debugf("successfully updated persistent volume claim %q", pvc.Name)
|
||||
c.VolumeClaims[pvc.UID] = updatedPvc
|
||||
c.logger.Infof("successfully updated persistent volume claim %q", pvc.Name)
|
||||
} else {
|
||||
c.logger.Debugf("volume claim for volume %q do not require updates", pvc.Name)
|
||||
}
|
||||
|
||||
newAnnotations := c.annotationsSet(nil)
|
||||
if changed, _ := c.compareAnnotations(pvc.Annotations, newAnnotations); changed {
|
||||
if changed, _ := c.compareAnnotations(pvc.Annotations, newAnnotations, nil); changed {
|
||||
patchData, err := metaAnnotationsPatch(newAnnotations)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not form patch for the persistent volume claim for volume %q: %v", pvc.Name, err)
|
||||
}
|
||||
_, err = c.KubeClient.PersistentVolumeClaims(pvc.Namespace).Patch(context.TODO(), pvc.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
patchedPvc, err := c.KubeClient.PersistentVolumeClaims(pvc.Namespace).Patch(context.TODO(), pvc.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not patch annotations of the persistent volume claim for volume %q: %v", pvc.Name, err)
|
||||
}
|
||||
c.VolumeClaims[pvc.UID] = patchedPvc
|
||||
}
|
||||
}
|
||||
|
||||
c.logger.Infof("volume claims have been synced successfully")
|
||||
c.logger.Debug("volume claims have been synced successfully")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
|
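A small self-contained sketch (assumed sizes, not operator code): the resize decision above boils down to an ordering comparison between the current request and the manifest size; resource.Quantity's Cmp expresses the same check directly, and shrinking is never attempted, mirroring the warning above.

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		current := resource.MustParse("1Gi")
		desired := resource.MustParse("2Gi")
		// grow only when the manifest asks for more than is currently requested
		if current.Cmp(desired) < 0 {
			fmt.Printf("resize needed: %s -> %s\n", current.String(), desired.String())
		}
	}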
@ -256,7 +259,7 @@ func (c *Cluster) syncEbsVolumes() error {
|
|||
return fmt.Errorf("could not sync volumes: %v", err)
|
||||
}
|
||||
|
||||
c.logger.Infof("volumes have been synced successfully")
|
||||
c.logger.Debug("volumes have been synced successfully")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
|
@ -269,38 +272,50 @@ func (c *Cluster) listPersistentVolumeClaims() ([]v1.PersistentVolumeClaim, erro
|
|||
|
||||
pvcs, err := c.KubeClient.PersistentVolumeClaims(ns).List(context.TODO(), listOptions)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not list of PersistentVolumeClaims: %v", err)
|
||||
return nil, fmt.Errorf("could not list of persistent volume claims: %v", err)
|
||||
}
|
||||
return pvcs.Items, nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePersistentVolumeClaims() error {
|
||||
c.logger.Debugln("deleting PVCs")
|
||||
pvcs, err := c.listPersistentVolumeClaims()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, pvc := range pvcs {
|
||||
c.logger.Debugf("deleting PVC %q", util.NameFromMeta(pvc.ObjectMeta))
|
||||
if err := c.KubeClient.PersistentVolumeClaims(pvc.Namespace).Delete(context.TODO(), pvc.Name, c.deleteOptions); err != nil {
|
||||
c.logger.Warningf("could not delete PersistentVolumeClaim: %v", err)
|
||||
c.setProcessName("deleting persistent volume claims")
|
||||
errors := make([]string, 0)
|
||||
for uid := range c.VolumeClaims {
|
||||
err := c.deletePersistentVolumeClaim(uid)
|
||||
if err != nil {
|
||||
errors = append(errors, fmt.Sprintf("%v", err))
|
||||
}
|
||||
}
|
||||
if len(pvcs) > 0 {
|
||||
c.logger.Debugln("PVCs have been deleted")
|
||||
} else {
|
||||
c.logger.Debugln("no PVCs to delete")
|
||||
|
||||
if len(errors) > 0 {
|
||||
c.logger.Warningf("could not delete all persistent volume claims: %v", strings.Join(errors, `', '`))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) deletePersistentVolumeClaim(uid types.UID) error {
|
||||
c.setProcessName("deleting persistent volume claim")
|
||||
pvc := c.VolumeClaims[uid]
|
||||
c.logger.Debugf("deleting persistent volume claim %q", pvc.Name)
|
||||
err := c.KubeClient.PersistentVolumeClaims(pvc.Namespace).Delete(context.TODO(), pvc.Name, c.deleteOptions)
|
||||
if k8sutil.ResourceNotFound(err) {
|
||||
c.logger.Debugf("persistent volume claim %q has already been deleted", pvc.Name)
|
||||
} else if err != nil {
|
||||
return fmt.Errorf("could not delete persistent volume claim %q: %v", pvc.Name, err)
|
||||
}
|
||||
c.logger.Infof("persistent volume claim %q has been deleted", pvc.Name)
|
||||
delete(c.VolumeClaims, uid)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Cluster) listPersistentVolumes() ([]*v1.PersistentVolume, error) {
|
||||
result := make([]*v1.PersistentVolume, 0)
|
||||
|
||||
pvcs, err := c.listPersistentVolumeClaims()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not list cluster's PersistentVolumeClaims: %v", err)
|
||||
return nil, fmt.Errorf("could not list cluster's persistent volume claims: %v", err)
|
||||
}
|
||||
|
||||
pods, err := c.listPods()
|
||||
|
|
@ -383,22 +398,22 @@ func (c *Cluster) resizeVolumes() error {
if err != nil {
return err
}
c.logger.Debugf("updating persistent volume %q to %d", pv.Name, newSize)
c.logger.Infof("updating persistent volume %q to %d", pv.Name, newSize)
if err := resizer.ResizeVolume(awsVolumeID, newSize); err != nil {
return fmt.Errorf("could not resize EBS volume %q: %v", awsVolumeID, err)
}
c.logger.Debugf("resizing the filesystem on the volume %q", pv.Name)
c.logger.Infof("resizing the filesystem on the volume %q", pv.Name)
podName := getPodNameFromPersistentVolume(pv)
if err := c.resizePostgresFilesystem(podName, []filesystems.FilesystemResizer{&filesystems.Ext234Resize{}}); err != nil {
return fmt.Errorf("could not resize the filesystem on pod %q: %v", podName, err)
}
c.logger.Debugf("filesystem resize successful on volume %q", pv.Name)
c.logger.Infof("filesystem resize successful on volume %q", pv.Name)
pv.Spec.Capacity[v1.ResourceStorage] = newQuantity
c.logger.Debugf("updating persistent volume definition for volume %q", pv.Name)
c.logger.Infof("updating persistent volume definition for volume %q", pv.Name)
if _, err := c.KubeClient.PersistentVolumes().Update(context.TODO(), pv, metav1.UpdateOptions{}); err != nil {
return fmt.Errorf("could not update persistent volume: %q", err)
}
c.logger.Debugf("successfully updated persistent volume %q", pv.Name)
c.logger.Infof("successfully updated persistent volume %q", pv.Name)

if !compatible {
c.logger.Warningf("volume %q is incompatible with all available resizing providers, consider switching storage_resize_mode to pvc or off", pv.Name)
@ -459,7 +474,7 @@ func (c *Cluster) executeEBSMigration() error {
}

if !hasGp2 {
c.logger.Infof("no EBS gp2 volumes left to migrate")
c.logger.Debugf("no EBS gp2 volumes left to migrate")
return nil
}
}
@ -93,7 +93,7 @@ func TestResizeVolumeClaim(t *testing.T) {

// check if listPersistentVolumeClaims returns only the PVCs matching the filter
if len(pvcs) != len(pvcList.Items)-1 {
t.Errorf("%s: could not find all PVCs, got %v, expected %v", testName, len(pvcs), len(pvcList.Items)-1)
t.Errorf("%s: could not find all persistent volume claims, got %v, expected %v", testName, len(pvcs), len(pvcList.Items)-1)
}

// check if PVCs were correctly resized
@ -165,7 +165,7 @@ func CreatePVCs(namespace string, clusterName string, labels labels.Set, n int,
Labels: labels,
},
Spec: v1.PersistentVolumeClaimSpec{
Resources: v1.ResourceRequirements{
Resources: v1.VolumeResourceRequirements{
Requests: v1.ResourceList{
v1.ResourceStorage: storage1Gi,
},
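The test fixtures switch from v1.ResourceRequirements to v1.VolumeResourceRequirements, the dedicated type that PersistentVolumeClaimSpec.Resources uses in newer k8s.io/api releases (roughly v0.29 and later); it only carries Requests and Limits. A small sketch of a claim built with it, with made-up claim and storage-class names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	storageClass := "standard" // illustrative storage class name

	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pgdata-example-0", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			// VolumeResourceRequirements replaces the more general
			// ResourceRequirements for PVCs; it has only Requests and Limits.
			Resources: corev1.VolumeResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}

	fmt.Println(pvc.Spec.Resources.Requests.Storage().String())
}
```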
@ -216,6 +216,12 @@ func TestMigrateEBS(t *testing.T) {
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-1")).Return("ebs-volume-1", nil)
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-2")).Return("ebs-volume-2", nil)

resizer.EXPECT().GetProviderVolumeID(gomock.Any()).
DoAndReturn(func(pv *v1.PersistentVolume) (string, error) {
return resizer.ExtractVolumeID(pv.Spec.AWSElasticBlockStore.VolumeID)
}).
Times(2)

resizer.EXPECT().DescribeVolumes(gomock.Eq([]string{"ebs-volume-1", "ebs-volume-2"})).Return(
[]volumes.VolumeProperties{
{VolumeID: "ebs-volume-1", VolumeType: "gp2", Size: 100},
@ -256,7 +262,7 @@ func initTestVolumesAndPods(client k8sutil.KubernetesClient, namespace, clustern
Labels: labels,
},
Spec: v1.PersistentVolumeClaimSpec{
Resources: v1.ResourceRequirements{
Resources: v1.VolumeResourceRequirements{
Requests: v1.ResourceList{
v1.ResourceStorage: storage1Gi,
},
@ -322,6 +328,12 @@ func TestMigrateGp3Support(t *testing.T) {
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-2")).Return("ebs-volume-2", nil)
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-3")).Return("ebs-volume-3", nil)

resizer.EXPECT().GetProviderVolumeID(gomock.Any()).
DoAndReturn(func(pv *v1.PersistentVolume) (string, error) {
return resizer.ExtractVolumeID(pv.Spec.AWSElasticBlockStore.VolumeID)
}).
Times(3)

resizer.EXPECT().DescribeVolumes(gomock.Eq([]string{"ebs-volume-1", "ebs-volume-2", "ebs-volume-3"})).Return(
[]volumes.VolumeProperties{
{VolumeID: "ebs-volume-1", VolumeType: "gp3", Size: 100, Iops: 3000},
@ -377,6 +389,12 @@ func TestManualGp2Gp3Support(t *testing.T) {
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-1")).Return("ebs-volume-1", nil)
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-2")).Return("ebs-volume-2", nil)

resizer.EXPECT().GetProviderVolumeID(gomock.Any()).
DoAndReturn(func(pv *v1.PersistentVolume) (string, error) {
return resizer.ExtractVolumeID(pv.Spec.AWSElasticBlockStore.VolumeID)
}).
Times(2)

resizer.EXPECT().DescribeVolumes(gomock.Eq([]string{"ebs-volume-1", "ebs-volume-2"})).Return(
[]volumes.VolumeProperties{
{VolumeID: "ebs-volume-1", VolumeType: "gp2", Size: 150, Iops: 3000},
@ -436,6 +454,12 @@ func TestDontTouchType(t *testing.T) {
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-1")).Return("ebs-volume-1", nil)
resizer.EXPECT().ExtractVolumeID(gomock.Eq("aws://eu-central-1b/ebs-volume-2")).Return("ebs-volume-2", nil)

resizer.EXPECT().GetProviderVolumeID(gomock.Any()).
DoAndReturn(func(pv *v1.PersistentVolume) (string, error) {
return resizer.ExtractVolumeID(pv.Spec.AWSElasticBlockStore.VolumeID)
}).
Times(2)

resizer.EXPECT().DescribeVolumes(gomock.Eq([]string{"ebs-volume-1", "ebs-volume-2"})).Return(
[]volumes.VolumeProperties{
{VolumeID: "ebs-volume-1", VolumeType: "gp2", Size: 150, Iops: 3000},
@ -39,7 +39,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
result.EnableTeamIdClusternamePrefix = fromCRD.EnableTeamIdClusternamePrefix
result.EtcdHost = fromCRD.EtcdHost
result.KubernetesUseConfigMaps = fromCRD.KubernetesUseConfigMaps
result.DockerImage = util.Coalesce(fromCRD.DockerImage, "ghcr.io/zalando/spilo-16:3.2-p3")
result.DockerImage = util.Coalesce(fromCRD.DockerImage, "ghcr.io/zalando/spilo-17:4.0-p2")
result.Workers = util.CoalesceUInt32(fromCRD.Workers, 8)
result.MinInstances = fromCRD.MinInstances
result.MaxInstances = fromCRD.MaxInstances
@ -60,12 +60,13 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
result.PasswordRotationUserRetention = util.CoalesceUInt32(fromCRD.PostgresUsersConfiguration.DeepCopy().PasswordRotationUserRetention, 180)

// major version upgrade config
result.MajorVersionUpgradeMode = util.Coalesce(fromCRD.MajorVersionUpgrade.MajorVersionUpgradeMode, "off")
result.MajorVersionUpgradeMode = util.Coalesce(fromCRD.MajorVersionUpgrade.MajorVersionUpgradeMode, "manual")
result.MajorVersionUpgradeTeamAllowList = fromCRD.MajorVersionUpgrade.MajorVersionUpgradeTeamAllowList
result.MinimalMajorVersion = util.Coalesce(fromCRD.MajorVersionUpgrade.MinimalMajorVersion, "12")
result.TargetMajorVersion = util.Coalesce(fromCRD.MajorVersionUpgrade.TargetMajorVersion, "16")
result.MinimalMajorVersion = util.Coalesce(fromCRD.MajorVersionUpgrade.MinimalMajorVersion, "13")
result.TargetMajorVersion = util.Coalesce(fromCRD.MajorVersionUpgrade.TargetMajorVersion, "17")

// kubernetes config
result.EnableOwnerReferences = util.CoalesceBool(fromCRD.Kubernetes.EnableOwnerReferences, util.False())
result.CustomPodAnnotations = fromCRD.Kubernetes.CustomPodAnnotations
result.PodServiceAccountName = util.Coalesce(fromCRD.Kubernetes.PodServiceAccountName, "postgres-pod")
result.PodServiceAccountDefinition = fromCRD.Kubernetes.PodServiceAccountDefinition
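The new defaults ("manual", "13", "17", the spilo-17 image) only take effect when the corresponding OperatorConfiguration field is empty, since util.Coalesce picks the first non-empty string. A minimal sketch of that fallback behavior, assuming Coalesce is the usual first-non-empty helper (the real implementation lives in the operator's util package):

```go
package main

import "fmt"

// coalesce mimics util.Coalesce: return the first non-empty string.
func coalesce(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	// Field left empty in the CRD -> the compiled-in default wins.
	fmt.Println(coalesce("", "manual")) // manual
	// Field set explicitly -> the CRD value wins.
	fmt.Println(coalesce("off", "manual")) // off
	fmt.Println(coalesce("", "17"))        // 17
}
```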
@ -173,13 +174,13 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
result.GCPCredentials = fromCRD.AWSGCP.GCPCredentials
result.WALAZStorageAccount = fromCRD.AWSGCP.WALAZStorageAccount
result.AdditionalSecretMount = fromCRD.AWSGCP.AdditionalSecretMount
result.AdditionalSecretMountPath = util.Coalesce(fromCRD.AWSGCP.AdditionalSecretMountPath, "/meta/credentials")
result.AdditionalSecretMountPath = fromCRD.AWSGCP.AdditionalSecretMountPath
result.EnableEBSGp3Migration = fromCRD.AWSGCP.EnableEBSGp3Migration
result.EnableEBSGp3MigrationMaxSize = util.CoalesceInt64(fromCRD.AWSGCP.EnableEBSGp3MigrationMaxSize, 1000)

// logical backup config
result.LogicalBackupSchedule = util.Coalesce(fromCRD.LogicalBackup.Schedule, "30 00 * * *")
result.LogicalBackupDockerImage = util.Coalesce(fromCRD.LogicalBackup.DockerImage, "ghcr.io/zalando/postgres-operator/logical-backup:v1.12.2")
result.LogicalBackupDockerImage = util.Coalesce(fromCRD.LogicalBackup.DockerImage, "ghcr.io/zalando/postgres-operator/logical-backup:v1.14.0")
result.LogicalBackupProvider = util.Coalesce(fromCRD.LogicalBackup.BackupProvider, "s3")
result.LogicalBackupAzureStorageAccountName = fromCRD.LogicalBackup.AzureStorageAccountName
result.LogicalBackupAzureStorageAccountKey = fromCRD.LogicalBackup.AzureStorageAccountKey
@ -143,7 +143,7 @@ func (c *Controller) acquireInitialListOfClusters() error {
if list, err = c.listClusters(metav1.ListOptions{ResourceVersion: "0"}); err != nil {
return err
}
c.logger.Debugf("acquiring initial list of clusters")
c.logger.Debug("acquiring initial list of clusters")
for _, pg := range list.Items {
// XXX: check the cluster status field instead
if pg.Error != "" {
@ -392,10 +392,6 @@ func (c *Controller) warnOnDeprecatedPostgreSQLSpecParameters(spec *acidv1.Postg
c.logger.Warningf("parameter %q is deprecated. Consider setting %q instead", deprecated, replacement)
}

noeffect := func(param string, explanation string) {
c.logger.Warningf("parameter %q takes no effect. %s", param, explanation)
}

if spec.UseLoadBalancer != nil {
deprecate("useLoadBalancer", "enableMasterLoadBalancer")
}
@ -403,10 +399,6 @@ func (c *Controller) warnOnDeprecatedPostgreSQLSpecParameters(spec *acidv1.Postg
deprecate("replicaLoadBalancer", "enableReplicaLoadBalancer")
}

if len(spec.MaintenanceWindows) > 0 {
noeffect("maintenanceWindows", "Not implemented.")
}

if (spec.UseLoadBalancer != nil || spec.ReplicaLoadBalancer != nil) &&
(spec.EnableReplicaLoadBalancer != nil || spec.EnableMasterLoadBalancer != nil) {
c.logger.Warnf("both old and new load balancer parameters are present in the manifest, ignoring old ones")
@ -462,19 +454,22 @@ func (c *Controller) queueClusterEvent(informerOldSpec, informerNewSpec *acidv1.
clusterError = informerNewSpec.Error
}

// only allow deletion if delete annotations are set and conditions are met
if eventType == EventDelete {
if err := c.meetsClusterDeleteAnnotations(informerOldSpec); err != nil {
c.logger.WithField("cluster-name", clusterName).Warnf(
"ignoring %q event for cluster %q - manifest does not fulfill delete requirements: %s", eventType, clusterName, err)
c.logger.WithField("cluster-name", clusterName).Warnf(
"please, recreate Postgresql resource %q and set annotations to delete properly", clusterName)
if currentManifest, marshalErr := json.Marshal(informerOldSpec); marshalErr != nil {
c.logger.WithField("cluster-name", clusterName).Warnf("could not marshal current manifest:\n%+v", informerOldSpec)
} else {
c.logger.WithField("cluster-name", clusterName).Warnf("%s\n", string(currentManifest))
// when owner references are used operator cannot block deletion
if c.opConfig.EnableOwnerReferences == nil || !*c.opConfig.EnableOwnerReferences {
// only allow deletion if delete annotations are set and conditions are met
if err := c.meetsClusterDeleteAnnotations(informerOldSpec); err != nil {
c.logger.WithField("cluster-name", clusterName).Warnf(
"ignoring %q event for cluster %q - manifest does not fulfill delete requirements: %s", eventType, clusterName, err)
c.logger.WithField("cluster-name", clusterName).Warnf(
"please, recreate Postgresql resource %q and set annotations to delete properly", clusterName)
if currentManifest, marshalErr := json.Marshal(informerOldSpec); marshalErr != nil {
c.logger.WithField("cluster-name", clusterName).Warnf("could not marshal current manifest:\n%+v", informerOldSpec)
} else {
c.logger.WithField("cluster-name", clusterName).Warnf("%s\n", string(currentManifest))
}
return
}
return
}
}
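The reworked delete path only enforces the delete annotations when owner references are disabled: with owner references in place, Kubernetes garbage collection removes dependents regardless, so the operator cannot block the deletion. A small sketch of the nil-safe *bool guard used above; the helper name is illustrative:

```go
package main

import "fmt"

// ownerReferencesEnabled treats a nil pointer as "feature off",
// matching the `cfg == nil || !*cfg` check in queueClusterEvent.
func ownerReferencesEnabled(cfg *bool) bool {
	return cfg != nil && *cfg
}

func main() {
	var unset *bool
	enabled := true

	for _, cfg := range []*bool{unset, &enabled} {
		if !ownerReferencesEnabled(cfg) {
			fmt.Println("owner references off: delete annotations are checked before the deletion is processed")
		} else {
			fmt.Println("owner references on: deletion cannot be blocked, so the annotation check is skipped")
		}
	}
}
```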
@ -76,11 +76,10 @@ func (c *Controller) createOperatorCRD(desiredCrd *apiextv1.CustomResourceDefini
context.TODO(), crd.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
return fmt.Errorf("could not update customResourceDefinition %q: %v", crd.Name, err)
}
} else {
c.logger.Infof("customResourceDefinition %q has been registered", crd.Name)
}
c.logger.Infof("customResourceDefinition %q is registered", crd.Name)

return wait.Poll(c.config.CRDReadyWaitInterval, c.config.CRDReadyWaitTimeout, func() (bool, error) {
return wait.PollUntilContextTimeout(context.TODO(), c.config.CRDReadyWaitInterval, c.config.CRDReadyWaitTimeout, false, func(ctx context.Context) (bool, error) {
c, err := c.KubeClient.CustomResourceDefinitions().Get(context.TODO(), desiredCrd.Name, metav1.GetOptions{})
if err != nil {
return false, err
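wait.Poll is deprecated in recent apimachinery releases; the hunk above switches to wait.PollUntilContextTimeout, which takes a context, an interval, a timeout, an `immediate` flag, and a context-aware condition. A minimal, self-contained sketch of the new call; the condition here just counts attempts, and the interval and timeout values are arbitrary:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// Poll every 100ms until the condition returns true, giving up after 2s.
	// The "false" argument before the condition is `immediate`: when false,
	// the first check happens only after the first interval has elapsed.
	err := wait.PollUntilContextTimeout(context.TODO(), 100*time.Millisecond, 2*time.Second, false,
		func(ctx context.Context) (bool, error) {
			attempts++
			return attempts >= 3, nil // pretend the CRD becomes ready on the third check
		})
	if err != nil {
		fmt.Println("condition never became true:", err)
		return
	}
	fmt.Printf("condition met after %d attempts\n", attempts)
}
```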
@ -132,7 +132,7 @@ func TestOldInfrastructureRoleFormat(t *testing.T) {
for _, test := range testTable {
roles, err := utilTestController.getInfrastructureRoles(
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: test.secretName,
UserKey: "user",
PasswordKey: "password",

@ -163,7 +163,7 @@ func TestNewInfrastructureRoleFormat(t *testing.T) {
// one secret with one configmap
{
[]spec.NamespacedName{
spec.NamespacedName{
{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,
},

@ -187,11 +187,11 @@ func TestNewInfrastructureRoleFormat(t *testing.T) {
// multiple standalone secrets
{
[]spec.NamespacedName{
spec.NamespacedName{
{
Namespace: v1.NamespaceDefault,
Name: "infrastructureroles-new-test1",
},
spec.NamespacedName{
{
Namespace: v1.NamespaceDefault,
Name: "infrastructureroles-new-test2",
},
@ -248,7 +248,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
// only new CRD format
{
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,

@ -262,7 +262,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
spec.NamespacedName{},
"",
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,

@ -280,7 +280,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
spec.NamespacedName{},
"secretname: infrastructureroles-new-test, userkey: test-user, passwordkey: test-password, rolekey: test-role",
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,

@ -298,7 +298,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
spec.NamespacedName{},
"secretname: infrastructureroles-new-test, userkey: test-user, passwordkey: test-password, defaultrolevalue: test-role",
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,

@ -319,7 +319,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
},
"",
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesOldSecretName,

@ -334,7 +334,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
// both formats for CRD
{
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,

@ -351,7 +351,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
},
"",
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,

@ -361,7 +361,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
RoleKey: "test-role",
Template: false,
},
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesOldSecretName,

@ -382,7 +382,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
},
"secretname: infrastructureroles-new-test, userkey: test-user, passwordkey: test-password, rolekey: test-role",
[]*config.InfrastructureRole{
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesNewSecretName,

@ -392,7 +392,7 @@ func TestInfrastructureRoleDefinitions(t *testing.T) {
RoleKey: "test-role",
Template: false,
},
&config.InfrastructureRole{
{
SecretName: spec.NamespacedName{
Namespace: v1.NamespaceDefault,
Name: testInfrastructureRolesOldSecretName,
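All of the test-table hunks above are the same mechanical simplification: inside a composite literal such as []*config.InfrastructureRole{…} or []spec.NamespacedName{…}, Go allows the repeated element type to be omitted, including the &Type form for pointer elements, which is what `gofmt -s` suggests. A tiny self-contained example of the equivalence, using a stand-in struct instead of the operator's config types:

```go
package main

import "fmt"

// Role is a stand-in for config.InfrastructureRole.
type Role struct {
	SecretName string
	UserKey    string
}

func main() {
	// Verbose form: the element type is repeated for every entry.
	verbose := []*Role{
		&Role{SecretName: "infrastructureroles-test", UserKey: "user"},
	}
	// Simplified form: &Role may be elided inside the []*Role literal.
	short := []*Role{
		{SecretName: "infrastructureroles-test", UserKey: "user"},
	}
	fmt.Println(*verbose[0] == *short[0]) // true
}
```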
@ -1,5 +1,5 @@
/*
Copyright 2024 Compose, Zalando SE
Copyright 2025 Compose, Zalando SE

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
Some files were not shown because too many files have changed in this diff.