Compare commits

...

105 Commits

Author SHA1 Message Date
Felix Kunde 1af4c50ed0
bump to v1.15.0 (#2965)
* bump to v1.15.0
* more linter hints
* update dependencies of kubectl-pg plugin
2025-10-21 11:56:33 +02:00
Felix Kunde 3bc244fe39
bump dependencies and reflect linter suggestions (#2963) 2025-10-16 10:23:36 +02:00
Remi Rampin 8c2a290a12
DOC: Minikube has many drivers now, incl. Docker (#2949)
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2025-10-14 13:50:11 +02:00
Remi Rampin 3a85466cfd
DOC: Fix formatting of bullet points (#2948)
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2025-10-14 13:49:06 +02:00
Eng Zer Jun eddf521227
Replace `golang.org/x/exp` with stdlib (#2857)
* Replace `golang.org/x/exp` with stdlib

These experimental packages are now available in the Go standard
library since Go 1.21.

	1. golang.org/x/exp/slices -> slices [1]
	2. golang.org/x/exp/maps -> maps [2]

[1]: https://go.dev/doc/go1.21#slices
[2]: https://go.dev/doc/go1.21#maps

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* Run go mod tidy

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

---------

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2025-10-14 11:59:48 +02:00
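For reference, a minimal sketch of what this migration looks like in Go 1.21+ code; the identifiers below are illustrative and not taken from the operator itself:

```go
package main

import (
	"fmt"
	"maps"   // standard library since Go 1.21 (was golang.org/x/exp/maps)
	"slices" // standard library since Go 1.21 (was golang.org/x/exp/slices)
)

func main() {
	// slices.Contains and slices.Sort behave like their x/exp counterparts,
	// so the migration is usually just an import-path change.
	roles := []string{"replica", "master", "standby_leader"}
	slices.Sort(roles)
	fmt.Println(slices.Contains(roles, "master")) // true

	// maps.Equal and maps.Clone also moved over unchanged.
	desired := map[string]string{"application": "spilo", "team": "acid"}
	current := maps.Clone(desired)
	fmt.Println(maps.Equal(desired, current)) // true
}
```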
Felix Kunde 8ba57b28f5
extend RBAC in preparation to switch to configmap-based cluster management (#2961) 2025-10-14 10:59:43 +02:00
Felix Kunde dc29425969
include external traffic policy comparison into service diffing (#2956) 2025-09-23 14:30:06 +02:00
Polina Bungina bcd729b2cc
Add selector to master service when switching to CM (#2955)
Add service selector comparison to compareServices
This is necessary for a proper switch of the `kubernetes_use_configmaps` configuration value, as the master service needs a different label selector in that case.
2025-09-19 14:44:17 +02:00
Alexander Gramovich d98fc2753a
logical-backup:gcs_upload: try to use gcp metadata if LOGICAL_GOOGLE_APPLICATION_CREDENTIALS is not set (#2837)
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2025-09-17 16:01:28 +02:00
dependabot[bot] cce2633192
Bump requests from 2.32.2 to 2.32.4 in /ui (#2922)
Bumps [requests](https://github.com/psf/requests) from 2.32.2 to 2.32.4.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.32.2...v2.32.4)

---
updated-dependencies:
- dependency-name: requests
  dependency-version: 2.32.4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-17 16:01:02 +02:00
Morten Lied Johansen ad7e590916
Skip creation of OwnerReference if user is in a different namespace (#2912)
Instead of doing a string compare on the username, check the actual namespace of the user to determine if an owner reference can be created.
2025-09-17 15:57:36 +02:00
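A hedged sketch of the idea behind this change (the function and parameter names are hypothetical, not the operator's actual code): Kubernetes does not allow cross-namespace owner references, so the decision is made by comparing namespaces rather than parsing a username string:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerReferenceFor returns an OwnerReference for the given owner, or nil when
// the owner lives in a different namespace than the dependent object.
func ownerReferenceFor(ownerNamespace, ownerName string, ownerUID types.UID, dependentNamespace string) *metav1.OwnerReference {
	if ownerNamespace != dependentNamespace {
		return nil // skip the reference instead of string-comparing the username
	}
	controller := true
	return &metav1.OwnerReference{
		APIVersion: "acid.zalan.do/v1",
		Kind:       "postgresql",
		Name:       ownerName,
		UID:        ownerUID,
		Controller: &controller,
	}
}

func main() {
	ref := ownerReferenceFor("default", "acid-minimal-cluster", "1234", "kube-system")
	fmt.Println(ref == nil) // true: cross-namespace, so no owner reference is set
}
```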
Jociele Padilha fa4bc21538
upgrade Go from 1.23.4 to 1.25.0 (#2945)
* upgrade go to 1.25
* add minor version to be Go 1.25.0
* revert the Go version on README to keep the history of the release
2025-08-19 14:40:39 +02:00
Mario Trangoni 51135b07db
docs: Fix issues found by codespell (#2896)
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2025-06-03 17:34:05 +02:00
Ida Novindasari ccb52c094d
[UI] Remove deprecated WAL-E library and enable WAL-G backup support in UI backend (#2915) 2025-05-20 16:31:26 +02:00
Polina Bungina 68c4b49636
Fix wrong condition for bootstrap labels (#2875) 2025-03-10 17:05:27 +01:00
Polina Bungina c7a586d0f8
Configure (upcoming) Patroni bootstrap labels feature (#2872)
Set the value from the critical-operation-pdb's selector if PDBs are enabled
2025-03-10 10:16:01 +01:00
Felix Kunde 746df0d33d
do not remove publications of slot defined in manifest (#2868)
* do not remove publications of slot defined in manifest
* improve condition to sync streams
* init publication tables map when adding manifest slots
* need to update c.Stream when there is no update
2025-02-26 17:31:37 +01:00
Felix Kunde 2a4be1cb39
fix creating secrets for rotation users (#2863)
* fix creating secrets for rotation users
* rework annotation comparison on update to decide on when to call syncSecrets
2025-02-14 09:44:09 +01:00
Polina Bungina c8063eb78a
Protect Pods from disruptions during upgrades (#2844)
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2025-01-30 10:41:58 +01:00
Polina Bungina a56ecaace7
Critical operation PDB (#2830)
Create the second PDB to cover Pods with a special "critical operation" label set.

This label is assigned to all of a PG cluster's Pods by the operator during a PG major version upgrade and by Patroni during a cluster/replica bootstrap. It can also be set manually or by any other automation tool.
2025-01-29 12:41:08 +01:00
Polina Bungina f49b4f1e97
Ensure podAnnotations are removed from pods if reset in the config (#2826) 2025-01-24 16:53:14 +01:00
Polina Bungina b0cfeb30ea
Partially revert #2810 (#2849)
Only schedule switchover for pod migration, consider maintenance window for PGVERSION env change
2025-01-23 16:35:33 +01:00
Polina Bungina e04b91d8af
Only check maintenance window for upgrade after pg version recheck (#2842)
This way we avoid misleading "skipping major version upgrade, not in maintenance window" log line when c.currentMajorVersion is not initialized (==0)
2025-01-17 14:29:52 +01:00
Polina Bungina 8522331cf2
Extend MaintenanceWindows parameter usage (#2810)
Consider maintenance window when migrating master pods and replacing pods (rolling update)
2025-01-15 18:04:36 +01:00
Lukas Reichart 46d5ebef6d
Update logical backup docker image (#2829) 2025-01-07 09:10:22 +01:00
Ida Novindasari 4430aba3f3
update codegen (#2832) 2025-01-03 16:18:17 +01:00
Felix Kunde 6035fdd58e
bump operator to 1.14.0 (#2827) 2024-12-23 12:12:33 +01:00
Mario Trangoni df3f68bcfb
manifests/minimal-master-replica-svcmonitor.yaml: Update postgres-exporter image (#2777)
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-12-23 11:10:44 +01:00
Felix Kunde 265f2a0f1c
add sidecar command examples and update codegen (#2825) 2024-12-23 09:58:48 +01:00
Felix Kunde 9b103e764e
bump to go 1.23.4 (#2824) 2024-12-23 09:54:51 +01:00
Tabby b276cd2f94
Feat: Support Running Sidecar with a Command. (#2449)
* Feat: Support Running Sidecar with a Command.

This PR addresses issue #2448. Some containers may not have entry points; in that case they need to be run using a command. This change extends the sidecar definition with an optional command field. If the field is present, the container is run using that command. This is a two-line change that is fully backward compatible.
2024-12-23 09:08:35 +01:00
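A minimal sketch of the approach, assuming a simplified sidecar type (the names below are illustrative, not the operator's actual spec):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// Sidecar is a simplified stand-in for the operator's sidecar definition,
// extended with an optional Command field as described above.
type Sidecar struct {
	Name    string
	Image   string
	Command []string // optional; empty means "use the image entrypoint"
}

func containerFromSidecar(s Sidecar) v1.Container {
	c := v1.Container{Name: s.Name, Image: s.Image}
	// Only set the command when the field is present, which keeps the change
	// backward compatible for sidecars that rely on their image entrypoint.
	if len(s.Command) > 0 {
		c.Command = s.Command
	}
	return c
}

func main() {
	c := containerFromSidecar(Sidecar{Name: "exporter", Image: "busybox", Command: []string{"sleep", "3600"}})
	fmt.Println(c.Command) // [sleep 3600]
}
```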
Christoffer Anselm 548e387745
Fix deployment extraEnvs indentation in operator chart (#2814)
* Fix operator extraEnvs indentation

Fix bad operator extraEnvs indentation by matching the statement to how other lists are expanded in the deployment template

* Replace nindent by indent to fully mirror the other similar lines in the file

---------

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-12-23 08:59:54 +01:00
Demch1k d97c271b84
Add ability to set QPS and Burst limits for API client (#2667)
* Add ability to set QPS and Burst limits for API client

---------

Co-authored-by: Ivan Sokoryan <i.sokoryan@robo.cash>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-12-23 08:53:27 +01:00
Ida Novindasari 470a1eab89
Add support for pg17 and remove pg12 (#2773)
* Add support for pg17
* use new gcov2lcov-action
* Use ghcr spilo-17
* Update SPILO_CURRENT and SPILO_LAZY
* Update e2e/run.sh

---------

Co-authored-by: Polina Bungina <27892524+hughcapet@users.noreply.github.com>
2024-12-20 11:22:52 +01:00
Felix Kunde 34df486f00
fix flaky comparison unit test of returned errors (#2822) 2024-12-19 17:35:01 +01:00
zyue110026 bb6242e3c9
fix: replicaCount not being respected (#2708)
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-12-19 14:12:15 +01:00
cosimomeli eef49500a5
Add support for EBS CSI Driver (#2677)
* Add support for EBS CSI Driver
2024-12-19 12:32:09 +01:00
dependabot[bot] e7cc4f9120
Bump golang.org/x/crypto from 0.26.0 to 0.31.0 in /kubectl-pg (#2819)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.26.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.26.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-18 16:03:20 +01:00
Felix Kunde 8cc6796537
fix comparing stream annotations and improve unit test (#2820) 2024-12-18 11:22:08 +01:00
dependabot[bot] 5450113eb5
Bump golang.org/x/crypto from 0.26.0 to 0.31.0 (#2816)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.26.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.26.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-17 14:07:15 +01:00
Felix Kunde d44bfabe78
do not use extra labels to list stream CRDs (#2803)
* do not use extra labels to list stream CRDs
* add diff on labels for streams + unit test coverage
2024-12-17 08:54:37 +01:00
Felix Kunde 80ef38f7f0
add resource annotation and ignore recovery type (#2817)
* add resource annotation and ignore recovery type
* Update docs/reference/cluster_manifest.md

---------

Co-authored-by: Ida Novindasari <idanovinda@gmail.com>
2024-12-16 18:17:19 +01:00
Felix Kunde 301462c415
remove streams delete and extend unit tests (#2737) 2024-12-16 18:13:52 +01:00
Felix Kunde 4929dd204c
Update major version upgrade docs (#2807)
* Update major version upgrade logs
2024-12-16 11:22:40 +01:00
Ida Novindasari fc9a26040a
Integrate spilo with Patroni 4 (#2818) 2024-12-16 11:11:22 +01:00
dependabot[bot] c206eb38a8
Bump werkzeug from 3.0.3 to 3.0.6 in /ui (#2793)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.3 to 3.0.6.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.0.3...3.0.6)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 11:46:53 +01:00
fahed dorgaa acdb957d8e
fix switchover candidate retrieval (#2760)
* fix switchover candidate retrieval

Signed-off-by: fahed dorgaa <fahed.dorgaa.ext@corp.ovh.com>

---------

Signed-off-by: fahed dorgaa <fahed.dorgaa.ext@corp.ovh.com>
Co-authored-by: fahed dorgaa <fahed.dorgaa.ext@corp.ovh.com>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-11-01 17:06:20 +01:00
Felix Kunde 8231797efa
add cluster field for PVCs (#2785)
* add cluster field for PVCs
* sync volumes on cluster creation
* fully spell pvc in log messages
2024-10-31 14:08:50 +01:00
Martin Kucin 45e9227f55
fix(postgres-operator/deployment): Set 'nindent' to 8 for 'extraEnvs' (#2783)
Co-authored-by: martin.kucin <martin.kucin@yunextraffic.com>
2024-10-30 11:11:22 +01:00
Felix Kunde 002d0f94a1
quote schema names in case they use special characters and remove strings.Builder (#2782) 2024-10-17 16:52:24 +02:00
Motte f5e122e8ef
Fix resource constraints (#2735)
* Add empty string cases to patterns for pod resources
* Added empty strings test case
* Restored k8sres.go and changed test to zeros
* Updated validation pattern in manifests/operatorconfiguration.crd.yaml and pkg/apis/acid.zalan.do/v1/crds.go
2024-10-16 17:19:07 +02:00
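The corresponding CRD validation change appears further down in this diff; a small check against that pattern shows how the added `|^$` alternative makes empty strings valid again:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// CPU quantity pattern from the operator configuration CRD after the change:
	// the trailing "|^$" alternative is what allows an empty string.
	cpuPattern := regexp.MustCompile(`^(\d+m|\d+(\.\d{1,3})?)$|^$`)

	for _, v := range []string{"250m", "1.5", "", "abc"} {
		fmt.Printf("%q -> %v\n", v, cpuPattern.MatchString(v))
	}
	// "250m" -> true, "1.5" -> true, "" -> true, "abc" -> false
}
```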
Prasad Krishnan d21466dbc4
update clusterrole.yaml (#2762)
* update clusterrole.yaml
* Update charts/postgres-operator/templates/clusterrole.yaml

---------

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-10-16 17:18:01 +02:00
Felix Kunde bb73334682
quote admin user to allow names with special characters (#2774) 2024-10-16 17:14:44 +02:00
Polina Bungina 41f5fe1dc9
More major upgrade prechecks (#2775)
Skip when 
- it is a standby cluster
- there is no master in the cluster
2024-10-15 14:05:39 +02:00
Polina Bungina 3ca86678cc
Add major upgrade prechecks (#2772)
Don't fail major upgrade (don't set annotation) if replica(s) are not
(yet) streaming or replication lag is too high
2024-10-11 17:11:46 +02:00
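A hedged sketch of the combined prechecks from the two commits above (the function name and thresholds are illustrative, not the operator's actual implementation):

```go
package main

import "fmt"

// precheckMajorVersionUpgrade illustrates the prechecks described above: skip
// the upgrade for standby clusters, when no master pod exists, when replicas
// are not streaming yet, or when replication lag is too high.
func precheckMajorVersionUpgrade(isStandby, hasMaster, replicasStreaming bool, lagBytes, maxLagBytes uint64) (bool, string) {
	switch {
	case isStandby:
		return false, "skipping major version upgrade: standby cluster"
	case !hasMaster:
		return false, "skipping major version upgrade: no master in the cluster"
	case !replicasStreaming:
		return false, "skipping major version upgrade: replica(s) not streaming yet"
	case lagBytes > maxLagBytes:
		return false, "skipping major version upgrade: replication lag too high"
	default:
		return true, ""
	}
}

func main() {
	ok, reason := precheckMajorVersionUpgrade(false, true, false, 0, 16*1024*1024)
	fmt.Println(ok, reason) // false skipping major version upgrade: replica(s) not streaming yet
}
```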
Ida Novindasari c25dc57b96
only skip upgrade if failed before after recheck version (#2755) 2024-09-10 10:32:56 +02:00
Ida Novindasari 2e398120d2
Implement major upgrade result annotations (#2727)
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
Co-authored-by: Polina Bungina <27892524+hughcapet@users.noreply.github.com>
2024-08-28 15:26:12 +02:00
Felix Kunde a09b7655c9
update K8s version and reflect necessary changes (#2740) 2024-08-27 18:13:39 +02:00
Felix Kunde 2ae51fb9ce
reflect linter feedback, remove unused arguments and redundant type from arrays (#2739)
* reflect linter feedback, remove unused arguments and redundant literal definitions
* add logical backup to TestCreate unit test
2024-08-27 17:56:07 +02:00
Felix Kunde a08d1679f2
align sync and update logs (#2738) 2024-08-27 09:58:32 +02:00
Felix Kunde cc9074c184
Bump operator to v1.13.0 (#2729)
* bump operator to v1.13.0
* align configmap with CRD config
* remove default from CRD config option additional_secret_mount_path
* enable automatic major version upgrades by default
2024-08-22 12:16:27 +02:00
Rob Nickmans cb06a1ec89
fix: add secret only when not in secret file (#2732)
* fix: add secret only when not in secret file
* fix indentation

---------

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-08-20 17:35:09 +02:00
Polina Bungina 2582b934bf
MaintenanceWindow CRD validation reflects the implementation (#2731) 2024-08-20 14:43:12 +02:00
Felix Kunde 2f7e3ee847
fix stream duplication on operator restart (#2733)
* fix stream duplication on operator restart
* add try except to streams e2e test
2024-08-20 14:38:07 +02:00
Felix Kunde c7ee34ed12
fix sync streams and add diffs for annotations and owner references (#2728)
* extend and improve hasSlotsInSync unit test
* fix sync streams and add diffs for annotations and owner references
* incl. current annotations as desired where we do not fully control them
* added one more unit test and fixed sub test names
* pass maintenance windows to function and update unit test
2024-08-14 12:56:14 +02:00
fahed dorgaa aad03f71ea
fix golangci-lint issues (#2715)
Signed-off-by: fahed dorgaa <fahed.dorgaa@gmail.com>
Co-authored-by: fahed dorgaa <fahed.dorgaa.ext@corp.ovh.com>
Co-authored-by: Matthias Adler <macedigital@users.noreply.github.com>
2024-08-14 12:54:44 +02:00
Felix Kunde 25ccc87317
sync all resources to cluster fields (#2713)
* sync all resources to cluster fields (CronJob, Streams, Patroni resources)
* separated sync and delete logic for Patroni resources
* align delete streams and secrets logic with other resources
* rename gatherApplicationIds to getDistinctApplicationIds
* improve slot check before syncing streams CRD
* add ownerReferences and annotations diff to Patroni objects
* add extra sync code for config service so it does not get too ugly
* some bugfixes when comparing annotations and return err on found
* sync Patroni resources on update event and extended unit tests
* add config service/endpoint owner references check to e2e test
2024-08-13 10:06:46 +02:00
Felix Kunde 31f92a1aa0
extend inherited annotations unit test to include logical backup cron job (#2723)
* extend inherited annotations test to logical backup cron job
* sync on updated when enabled, not only on schedule changes
2024-08-12 13:12:51 +02:00
Felix Kunde a87307e56b
Feat: enable owner references (#2688)
* feat(498): Add ownerReferences to managed entities
* empty owner reference for cross namespace secret and more tests
* update ownerReferences of existing resources
* removing ownerReference requires Update API call
* CR ownerReference on PVC blocks pvc retention policy of statefulset
* make ownerreferences optional and disabled by default
* update unit test to check len ownerReferences
* update codegen
* add owner references e2e test
* update unit test
* add block_owner_deletion field to test owner reference
* fix typos and update docs once more
* reflect code feedback

---------

Co-authored-by: Max Begenau <max@begenau.com>
2024-08-09 17:58:25 +02:00
Felix Kunde d5a88f571a
let operator fix publications without tables (#2722) 2024-08-09 17:20:05 +02:00
Felix Kunde 85b8058029
bump spilo to 16-3.3, drop support for pg11 (#2706)
* bump spilo to 16-3.3, drop support for pg11
* update README
2024-08-09 14:47:23 +02:00
Ida Novindasari e6ae9e3772
Implement per-cluster maintenance window for Postgres automatic upgrade (#2710)
* implement maintenance window for major version upgrade 
* e2e test: fix major version upgrade test and extend with the time window
* unit test: add iteration to test isInMaintenanceWindow
* UI: show the window and enable edit via UI
2024-08-09 14:07:35 +02:00
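A simplified sketch of such a window check, assuming the `Day:HH:MM-HH:MM` / `HH:MM-HH:MM` format used by the cluster manifest (the maintenanceWindows validation pattern appears later in this diff); the function name is hypothetical:

```go
package main

import (
	"fmt"
	"time"
)

// inMaintenanceWindow is a simplified, illustrative version of a check like
// isInMaintenanceWindow: it accepts windows in the "Sat:01:00-06:00"
// (weekday-bound) or "01:00-06:00" (daily) form.
func inMaintenanceWindow(windows []string, now time.Time) bool {
	for _, w := range windows {
		spec := w
		if len(spec) > 3 && spec[3] == ':' { // leading weekday prefix like "Sat:"
			if spec[:3] != now.Format("Mon") {
				continue
			}
			spec = spec[4:]
		}
		var fromH, fromM, toH, toM int
		if _, err := fmt.Sscanf(spec, "%d:%d-%d:%d", &fromH, &fromM, &toH, &toM); err != nil {
			continue
		}
		minutes := now.Hour()*60 + now.Minute()
		if minutes >= fromH*60+fromM && minutes <= toH*60+toM {
			return true
		}
	}
	return false
}

func main() {
	now := time.Date(2024, 8, 10, 2, 30, 0, 0, time.UTC) // a Saturday, 02:30
	fmt.Println(inMaintenanceWindow([]string{"Sat:01:00-06:00"}, now)) // true
}
```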
Cédric de Saint Martin ce15d10aa3
feat: Add extraEnvs to operator helm chart (#2671)
Signed-off-by: Cédric de Saint Martin <cdesaintmartin@wiremind.io>
2024-08-06 12:31:17 +02:00
Ida Novindasari 94d36327ba
stream: slot and FES should not be created if the publication creation fails (#2704)
* slot should not be created if the publication creation fails
* do not create the FES resource when the slot doesn't exist
2024-08-02 15:09:37 +02:00
Ida Novindasari 31f474a95c
Enable slot and publication deletion when stream application is removed (#2684)
* refactor syncing publication section
* update createOrUpdateStream function to allow resource deletion when removed from manifest
* add minimal FES CRD to enable FES resources creation for E2E test
* fix bug of removing manifest slots in syncStream
* e2e test: fixing typo with major upgrade test
* e2e test: should create and delete FES resource
* e2e test: should not delete manual created resources
* e2e test: enable cluster role for FES with patching instead of deploying in manifest
2024-07-25 12:00:23 +02:00
Felix Kunde 73f72414f6
bump go version to 1.22.5 (#2699) 2024-07-23 13:25:29 +02:00
Felix Kunde e71891e2bd
improve logical backup comparison unit test and improve container sync (#2686)
* improve logical backup comparison unit test and improve container sync
* add new comparison function for volume mounts + unit test
2024-07-08 14:06:14 +02:00
Felix Kunde 37d6993439
remove stream resources after drop from Postgres manifest (#2563)
* remove stream resources after drop from Postgres manifest
2024-06-27 14:30:52 +02:00
Matthias Adler 7cdc23fff0
chore: simplify delivery-yaml for building operator (#2673)
Commit switches builder image to `cdp-runtime/go`, removing the need to install `go` manually.

Also, the commit splits the "build-postgres-operator" pipeline into two distinct steps.

1. Run unit tests based on locally checked out code including set up of dependencies and generated code.
2. Build Docker image if tests are successful
2024-06-26 18:39:20 +02:00
Polina Bungina 47efca33c9
Improve inherited annotations (#2657)
* Annotate PVC on Sync/Update, not only change PVC template
* Don't rotate pods when only annotations changed
* Annotate Logical Backup's and Pooler's pods
* Annotate PDB, Endpoints created by the Operator, Secrets, Logical Backup jobs

Inherited annotations are only added/updated, not removed
2024-06-26 13:10:37 +02:00
Matthias Adler 2ef7d58578
chore: update package dependencies when building image (#2665)
* chore: update package dependencies when building image

Install available updates alongside installation of packages to remove known vulnerabilities from images.

Example for issues in plain alpine:3 image (v3.20):

```sh
$ grype alpine:3
 ✔ Vulnerability DB                [updated]
 ✔ Loaded image                                                            alpine:3
 ✔ Parsed image                    sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f
 ✔ Cataloged contents              dac15f325cac528994a5efe78787cd03bdd796979bda52fd
   ├── ✔ Packages                        [14 packages]
   ├── ✔ File digests                    [77 files]
   ├── ✔ File metadata                   [77 locations]
   └── ✔ Executables                     [17 executables]
 ✔ Scanned for vulnerabilities     [8 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 6 medium, 0 low, 0 negligible (2 unknown)
   └── by status:   8 fixed, 0 not-fixed, 0 ignored
NAME           INSTALLED   FIXED-IN    TYPE  VULNERABILITY   SEVERITY
busybox        1.36.1-r28  1.36.1-r29  apk   CVE-2023-42365  Medium
busybox        1.36.1-r28  1.36.1-r29  apk   CVE-2023-42364  Medium
busybox-binsh  1.36.1-r28  1.36.1-r29  apk   CVE-2023-42365  Medium
busybox-binsh  1.36.1-r28  1.36.1-r29  apk   CVE-2023-42364  Medium
libcrypto3     3.3.0-r2    3.3.0-r3    apk   CVE-2024-4741   Unknown
libssl3        3.3.0-r2    3.3.0-r3    apk   CVE-2024-4741   Unknown
ssl_client     1.36.1-r28  1.36.1-r29  apk   CVE-2023-42365  Medium
ssl_client     1.36.1-r28  1.36.1-r29  apk   CVE-2023-42364  Medium
```

Issue would be solved by also upgrading installed packages:

```sh
$ apk -U upgrade --no-cache
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/community/x86_64/APKINDEX.tar.gz
(1/5) Upgrading busybox (1.36.1-r28 -> 1.36.1-r29)
Executing busybox-1.36.1-r29.post-upgrade
(2/5) Upgrading busybox-binsh (1.36.1-r28 -> 1.36.1-r29)
(3/5) Upgrading libcrypto3 (3.3.0-r2 -> 3.3.1-r0)
(4/5) Upgrading libssl3 (3.3.0-r2 -> 3.3.1-r0)
(5/5) Upgrading ssl_client (1.36.1-r28 -> 1.36.1-r29)
Executing busybox-1.36.1-r29.trigger
OK: 8 MiB in 14 packages
```

Furthermore, this commit reduces accidental complexity from the Docker build process.
Most notably, use pre-made official golang images for building postgres-operator.

* Update docker/DebugDockerfile

---------

Co-authored-by: Ida Novindasari <idanovinda@gmail.com>
2024-06-18 17:21:03 +02:00
Felix Kunde 7c7aa96935
bump to v1.12.2 (#2664) 2024-06-14 10:53:17 +02:00
Matthias Adler eee1ef2e21
Align Docker images in ghcr pipeline with images assumed in Dockerfile (#2663)
* PG-UI switch to official Python image 

The commit changes the build argument for GitHub Actions to use the same [official Python image][1] as the one used for building "postgres-operator-ui" since commit d60b424d79.

This should fix the problem with the `publish_ghcr_image` workflow.

[1]: https://hub.docker.com/_/python

* Use latest Alpine version for Postgres-Operator

Similar to commit 601ce0d321, align image version in Github Actions pipeline with assumed default version in Dockerfile, using latest [Alpine 3](https://hub.docker.com/_/alpine).
2024-06-14 10:25:55 +02:00
Felix Kunde 2e1583e9c0
bump to v1.12.1 (#2658)
* bump to v1.12.1
* align Python version in setup.py with base image
2024-06-13 10:40:07 +02:00
Hemakshi Sachdev 032743b8f0
Fix (#2644) - Add name tags to InfrastructureRole struct (#2659) 2024-06-12 11:12:28 +02:00
Matthias Adler 1f47f59267
fix: use nodejs-lts image for building frontend code (#2653)
* fix: use nodejs-lts image for building frontend code

Node v14 is end-of-life and should no longer be used. The commit changes the Makefile to pull in the latest node-lts instead.

Also, use a local temporary folder for storing npm-generated files to work around a permission issue with the old npm version, which emits errors like:

```
npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /.npm
npm ERR! errno -13
npm ERR!
npm ERR! Your cache folder contains root-owned files, due to a bug in
npm ERR! previous versions of npm which has since been addressed.
```

Both changes should also fix issue https://github.com/zalando/postgres-operator/issues/2651

* fix: add frontend build step

Commit d60b424d79 accidentally removed build steps that are important for building frontend.

This commit restores previous behavior, but switches to nodejs-lts image for building frontend code.

Should restore `app.js` presence in ghcr image, see https://github.com/zalando/postgres-operator/issues/2651.
2024-06-05 15:09:44 +02:00
Felix Kunde 393439fdc3
update K8s version in makefile (#2647) 2024-06-05 14:36:49 +02:00
Felix Kunde 6cde8e8c0b
Bump to v1.12.0 (#2639)
* bump to v1.12.0
* code-generator and apiextensions-apiserver still on 0.25.9 to allow code generation on GH
* bump go in github action and mini fix in UI
* update UI Dockerfile

---------

Co-authored-by: Ida Novindasari <idanovinda@gmail.com>
2024-05-31 15:29:29 +02:00
Hemakshi Sachdev 34f9cfbcda
Fix (#2644) - Add json tags to InfrastructureRole struct (#2645)
* Fix (#2644) - Add json tags to InfrastructureRole struct
* Fix (#2644) - Add name tags to InfrastructureRole struct

---------

Co-authored-by: Hemakshi Sachdev <hsachdev@purestorage.com>
2024-05-31 14:17:26 +02:00
Felix Kunde d60b424d79
[UI] use only one logger adapter and update Dockerfile (#2646)
* [UI] use only one logger adapter and update Dockerfile
* remove setLevel on logger
2024-05-31 11:24:31 +02:00
Pratheek Rebala 1210ceca72
Allow scheduling constraints for operator-ui pods (#2326) 2024-05-24 16:27:00 +02:00
Felix Kunde b550f8ae39
fix unit test for new subPathExpr feature (#2638)
* fix unit test for new subPathExpr feature
* add subPathExpr flag to CRD and re-sort
2024-05-24 15:07:17 +02:00
Samuel Mutel 7bcb73a402
feat: Add SubPathExpr option for additionalVolumes (#2463) 2024-05-24 11:55:22 +02:00
Ida Novindasari 1839baaad3
[UI] Remove manual authentication for login user (#2635)
* Remove manual authentication
* update python libraries
* remove psycopg2 and bring back wal-e
* remove unused vars
2024-05-23 10:51:46 +02:00
Felix Kunde 1b08ee1acf
switch to ghcr image in helm chart and examples (#2634)
* switch to ghcr image in helm chart and examples
* change logical backup config for helm chart
* change internal default for logical backup image config to ghcr, too
2024-05-21 17:43:37 +02:00
Vasily Oleynikov 843d3e1caa
Fix logical backup job toleration (#2018)
* Fix logical backup job toleration (now cluster and operator-wide instructions will not be ignored)

---------

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-05-17 14:45:21 +02:00
Motte 13d6594cdf
Secrets deletion config (#2582)
* Secrets deletion config
* Update e2e/tests/test_e2e.py

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>

---------

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2024-05-10 16:31:21 +02:00
Nick Douma 8ee5231648
Apply template on all keys of operatorconfiguration (#2608) 2024-05-10 16:30:29 +02:00
Felix Kunde 5357062857
add logical backup retention as manifest option (#2621)
* add logical backup retention as manifest option
* added unit test for logical backup envvar generation
2024-04-29 10:58:52 +02:00
dependabot[bot] d70cdf1f10
Bump golang.org/x/net from 0.19.0 to 0.23.0 in /kubectl-pg (#2613)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-25 15:55:55 +02:00
dependabot[bot] d4c3e236ed
Bump golang.org/x/net from 0.20.0 to 0.23.0 (#2614)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.20.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.20.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-25 15:55:41 +02:00
Felix Kunde 83878fe447
make bucket prefix for logical backup configurable (#2609)
* make bucket prefix for logical backup configurable
* include container comparison in logical backup diff
* add unit test and update description for compareContainers
* don't rely on users putting / in the config - reflect other comments from review
2024-04-23 14:24:04 +02:00
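A small illustrative sketch of the "don't rely on users putting / in the config" idea (a hypothetical helper, not the operator's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// backupPrefix builds the object prefix for logical backups without depending
// on whether the configured prefix carries leading or trailing slashes.
func backupPrefix(bucketPrefix, clusterName string) string {
	bucketPrefix = strings.Trim(bucketPrefix, "/")
	if bucketPrefix == "" {
		return clusterName
	}
	return bucketPrefix + "/" + clusterName
}

func main() {
	fmt.Println(backupPrefix("spilo/", "acid-minimal-cluster")) // spilo/acid-minimal-cluster
	fmt.Println(backupPrefix("/spilo", "acid-minimal-cluster")) // spilo/acid-minimal-cluster
	fmt.Println(backupPrefix("", "acid-minimal-cluster"))       // acid-minimal-cluster
}
```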
Felix Kunde 6ddafadc09
add pdb_master_label_selector to helm chart and example manifests (#2595)
* add pdb_master_label_selector to helm chart and example manifests
2024-03-28 12:06:35 +01:00
Felix Kunde be28f3a017
update helm chart with #2584 (#2585) 2024-03-18 11:05:40 +01:00
Thore 4cd4bee383
Fix duplicate key issue when using configmap config target (#2584) 2024-03-18 10:55:42 +01:00
179 changed files with 6593 additions and 3368 deletions

View File

@@ -9,7 +9,7 @@ assignees: ''
 Please, answer some short questions which should help us to understand your problem / question better?
-- **Which image of the operator are you using?** e.g. registry.opensource.zalan.do/acid/postgres-operator:v1.11.0
+- **Which image of the operator are you using?** e.g. ghcr.io/zalando/postgres-operator:v1.13.0
 - **Where do you run it - cloud or metal? Kubernetes or OpenShift?** [AWS K8s | GCP ... | Bare Metal K8s]
 - **Are you running Postgres Operator in production?** [yes | no]
 - **Type of issue?** [Bug report, question, feature request, etc.]

View File

@@ -23,7 +23,7 @@ jobs:
 - uses: actions/setup-go@v2
 with:
-go-version: "^1.21.7"
+go-version: "^1.25.3"
 - name: Run unit tests
 run: make deps mocks test
@@ -65,7 +65,7 @@
 context: .
 file: docker/Dockerfile
 push: true
-build-args: BASE_IMAGE=alpine:3.15
+build-args: BASE_IMAGE=alpine:3
 tags: "${{ steps.image.outputs.OPERATOR_IMAGE }}"
 platforms: linux/amd64,linux/arm64
@@ -74,14 +74,14 @@
 with:
 context: ui
 push: true
-build-args: BASE_IMAGE=alpine:3.15
+build-args: BASE_IMAGE=python:3.11-slim
 tags: "${{ steps.image_ui.outputs.UI_IMAGE }}"
 platforms: linux/amd64,linux/arm64
 - name: Build and push multiarch logical-backup image to ghcr
 uses: docker/build-push-action@v3
 with:
-context: docker/logical-backup
+context: logical-backup
 push: true
 build-args: BASE_IMAGE=ubuntu:22.04
 tags: "${{ steps.image_lb.outputs.BACKUP_IMAGE }}"

View File

@@ -14,7 +14,7 @@ jobs:
 - uses: actions/checkout@v1
 - uses: actions/setup-go@v2
 with:
-go-version: "^1.21.7"
+go-version: "^1.25.3"
 - name: Make dependencies
 run: make deps mocks
 - name: Code generation

View File

@@ -14,7 +14,7 @@ jobs:
 - uses: actions/checkout@v2
 - uses: actions/setup-go@v2
 with:
-go-version: "^1.21.7"
+go-version: "^1.25.3"
 - name: Make dependencies
 run: make deps mocks
 - name: Compile
@@ -22,7 +22,7 @@
 - name: Run unit tests
 run: go test -race -covermode atomic -coverprofile=coverage.out ./...
 - name: Convert coverage to lcov
-uses: jandelgado/gcov2lcov-action@v1.0.9
+uses: jandelgado/gcov2lcov-action@v1.1.1
 - name: Coveralls
 uses: coverallsapp/github-action@master
 with:

.gitignore
View File

@@ -102,3 +102,7 @@ e2e/tls
 *.pot
 mocks
+ui/.npm/
+.DS_Store

View File

@@ -1,2 +1,2 @@
 # global owners
-* @sdudoladov @Jan-M @FxKu @jopadi @idanovinda @hughcapet @macedigital
+* @sdudoladov @Jan-M @FxKu @jopadi @idanovinda @hughcapet

View File

@@ -1,6 +1,6 @@
 The MIT License (MIT)
-Copyright (c) 2024 Zalando SE
+Copyright (c) 2025 Zalando SE
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

View File

@@ -4,4 +4,3 @@ Jan Mussler <jan.mussler@zalando.de>
 Jociele Padilha <jociele.padilha@zalando.de>
 Ida Novindasari <ida.novindasari@zalando.de>
 Polina Bungina <polina.bungina@zalando.de>
-Matthias Adler <matthias.adler@zalando.de>

View File

@@ -43,7 +43,7 @@ ifndef GOPATH
 endif
 PATH := $(GOPATH)/bin:$(PATH)
-SHELL := env PATH=$(PATH) $(SHELL)
+SHELL := env PATH="$(PATH)" $(SHELL)
 default: local
@@ -69,7 +69,7 @@ docker: ${DOCKERDIR}/${DOCKERFILE}
 docker build --rm -t "$(IMAGE):$(TAG)$(CDP_TAG)$(DEBUG_FRESH)$(DEBUG_POSTFIX)" -f "${DOCKERDIR}/${DOCKERFILE}" --build-arg VERSION="${VERSION}" .
 indocker-race:
-docker run --rm -v "${GOPATH}":"${GOPATH}" -e GOPATH="${GOPATH}" -e RACE=1 -w ${PWD} golang:1.21.7 bash -c "make linux"
+docker run --rm -v "${GOPATH}":"${GOPATH}" -e GOPATH="${GOPATH}" -e RACE=1 -w ${PWD} golang:1.25.3 bash -c "make linux"
 push:
 docker push "$(IMAGE):$(TAG)$(CDP_TAG)"
@@ -78,7 +78,7 @@ mocks:
 GO111MODULE=on go generate ./...
 tools:
-GO111MODULE=on go get -d k8s.io/client-go@kubernetes-1.28.7
+GO111MODULE=on go get k8s.io/client-go@kubernetes-1.32.9
 GO111MODULE=on go install github.com/golang/mock/mockgen@v1.6.0
 GO111MODULE=on go mod tidy

View File

@@ -17,6 +17,7 @@ pipelines with no access to Kubernetes API directly, promoting infrastructure as
 * Live volume resize without pod restarts (AWS EBS, PVC)
 * Database connection pooling with PGBouncer
 * Support fast in place major version upgrade. Supports global upgrade of all clusters.
+* Pod protection during bootstrap phase and configurable maintenance windows
 * Restore and cloning Postgres clusters on AWS, GCS and Azure
 * Additionally logical backups to S3 or GCS bucket can be configured
 * Standby cluster from S3 or GCS WAL archive
@@ -24,30 +25,34 @@ pipelines with no access to Kubernetes API directly, promoting infrastructure as
 * Basic credential and user management on K8s, eases application deployments
 * Support for custom TLS certificates
 * UI to create and edit Postgres cluster manifests
+* Support for AWS EBS gp2 to gp3 migration, supporting iops and throughput configuration
 * Compatible with OpenShift
 ### PostgreSQL features
-* Supports PostgreSQL 16, starting from 11+
+* Supports PostgreSQL 17, starting from 13+
 * Streaming replication cluster via Patroni
 * Point-In-Time-Recovery with
-  [pg_basebackup](https://www.postgresql.org/docs/16/app-pgbasebackup.html) /
-  [WAL-E](https://github.com/wal-e/wal-e) via [Spilo](https://github.com/zalando/spilo)
+  [pg_basebackup](https://www.postgresql.org/docs/17/app-pgbasebackup.html) /
+  [WAL-G](https://github.com/wal-g/wal-g) or [WAL-E](https://github.com/wal-e/wal-e) via [Spilo](https://github.com/zalando/spilo)
 * Preload libraries: [bg_mon](https://github.com/CyberDem0n/bg_mon),
-  [pg_stat_statements](https://www.postgresql.org/docs/16/pgstatstatements.html),
+  [pg_stat_statements](https://www.postgresql.org/docs/17/pgstatstatements.html),
   [pgextwlist](https://github.com/dimitri/pgextwlist),
   [pg_auth_mon](https://github.com/RafiaSabih/pg_auth_mon)
 * Incl. popular Postgres extensions such as
   [decoderbufs](https://github.com/debezium/postgres-decoderbufs),
   [hypopg](https://github.com/HypoPG/hypopg),
   [pg_cron](https://github.com/citusdata/pg_cron),
+  [pg_repack](https://github.com/reorg/pg_repack),
   [pg_partman](https://github.com/pgpartman/pg_partman),
   [pg_stat_kcache](https://github.com/powa-team/pg_stat_kcache),
+  [pg_audit](https://github.com/pgaudit/pgaudit),
+  [pgfaceting](https://github.com/cybertec-postgresql/pgfaceting),
   [pgq](https://github.com/pgq/pgq),
   [pgvector](https://github.com/pgvector/pgvector),
   [plpgsql_check](https://github.com/okbob/plpgsql_check),
+  [plproxy](https://github.com/plproxy/plproxy),
   [postgis](https://postgis.net/),
+  [roaringbitmap](https://github.com/ChenHuajun/pg_roaringbitmap),
   [set_user](https://github.com/pgaudit/set_user) and
   [timescaledb](https://github.com/timescale/timescaledb)
@@ -58,12 +63,12 @@ production for over five years.
 | Release | Postgres versions | K8s versions | Golang |
 | :-------- | :---------------: | :---------------: | :-----: |
-| v1.11.* | 11 &rarr; 16 | 1.21 &rarr; 1.28 | 1.21.7 |
-| v1.10.* | 10 &rarr; 15 | 1.21 &rarr; 1.28 | 1.19.8 |
-| v1.9.0 | 10 &rarr; 15 | 1.21 &rarr; 1.28 | 1.18.9 |
-| v1.8.* | 9.5 &rarr; 14 | 1.20 &rarr; 1.24 | 1.17.4 |
-| v1.7.1 | 9.5 &rarr; 14 | 1.20 &rarr; 1.24 | 1.16.9 |
+| v1.15.0 | 13 &rarr; 17 | 1.27+ | 1.25.3 |
+| v1.14.0 | 13 &rarr; 17 | 1.27+ | 1.23.4 |
+| v1.13.0 | 12 &rarr; 16 | 1.27+ | 1.22.5 |
+| v1.12.0 | 11 &rarr; 16 | 1.27+ | 1.22.3 |
+| v1.11.0 | 11 &rarr; 16 | 1.27+ | 1.21.7 |
+| v1.10.1 | 10 &rarr; 15 | 1.21+ | 1.19.8 |
 ## Getting started

View File

@@ -1,7 +1,7 @@
 apiVersion: v2
 name: postgres-operator-ui
-version: 1.11.0
-appVersion: 1.11.0
+version: 1.15.0
+appVersion: 1.15.0
 home: https://github.com/zalando/postgres-operator
 description: Postgres Operator UI provides a graphical interface for a convenient database-as-a-service user experience
 keywords:

View File

@ -1,9 +1,101 @@
apiVersion: v1 apiVersion: v1
entries: entries:
postgres-operator-ui: postgres-operator-ui:
- apiVersion: v2
appVersion: 1.15.0
created: "2025-10-16T11:34:57.912432565+02:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: d82b5fb7c3d4fd8b106343b2f9472cba5e6050315ab3c520a79366f2b2f20c7a
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.15.0.tgz
version: 1.15.0
- apiVersion: v2
appVersion: 1.14.0
created: "2025-10-16T11:34:57.906677165+02:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: e87ed898079a852957a67a4caf3fbd27b9098e413f5d961b7a771a6ae8b3e17c
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.14.0.tgz
version: 1.14.0
- apiVersion: v2
appVersion: 1.13.0
created: "2025-10-16T11:34:57.904106882+02:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: e0444e516b50f82002d1a733527813c51759a627cefdd1005cea73659f824ea8
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.13.0.tgz
version: 1.13.0
- apiVersion: v2
appVersion: 1.12.2
created: "2025-10-16T11:34:57.901526106+02:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: cbcef400c23ccece27d97369ad629278265c013e0a45c0b7f33e7568a082fedd
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.12.2.tgz
version: 1.12.2
- apiVersion: v2 - apiVersion: v2
appVersion: 1.11.0 appVersion: 1.11.0
created: "2024-03-14T17:12:46.692800586+01:00" created: "2025-10-16T11:34:57.898843691+02:00"
description: Postgres Operator UI provides a graphical interface for a convenient description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience database-as-a-service user experience
digest: a45f2284045c2a9a79750a36997386444f39b01ac722b17c84b431457577a3a2 digest: a45f2284045c2a9a79750a36997386444f39b01ac722b17c84b431457577a3a2
@ -26,7 +118,7 @@ entries:
version: 1.11.0 version: 1.11.0
- apiVersion: v2 - apiVersion: v2
appVersion: 1.10.1 appVersion: 1.10.1
created: "2024-03-14T17:12:46.691746076+01:00" created: "2025-10-16T11:34:57.896283083+02:00"
description: Postgres Operator UI provides a graphical interface for a convenient description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience database-as-a-service user experience
digest: 2e5e7a82aebee519ec57c6243eb8735124aa4585a3a19c66ffd69638fbeb11ce digest: 2e5e7a82aebee519ec57c6243eb8735124aa4585a3a19c66ffd69638fbeb11ce
@ -47,119 +139,4 @@ entries:
urls: urls:
- postgres-operator-ui-1.10.1.tgz - postgres-operator-ui-1.10.1.tgz
version: 1.10.1 version: 1.10.1
- apiVersion: v2 generated: "2025-10-16T11:34:57.893034861+02:00"
appVersion: 1.10.0
created: "2024-03-14T17:12:46.690807634+01:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: 47413650e3188539ae778a601998efa2c4f80b8aa16e3668a2fc7b72e014b605
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.10.0.tgz
version: 1.10.0
- apiVersion: v2
appVersion: 1.9.0
created: "2024-03-14T17:12:46.696626932+01:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: df434af6c8b697fe0631017ecc25e3c79e125361ae6622347cea41a545153bdc
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.9.0.tgz
version: 1.9.0
- apiVersion: v2
appVersion: 1.8.2
created: "2024-03-14T17:12:46.69565936+01:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: fbfc90fa8fd007a08a7c02e0ec9108bb8282cbb42b8c976d88f2193d6edff30c
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.8.2.tgz
version: 1.8.2
- apiVersion: v2
appVersion: 1.8.1
created: "2024-03-14T17:12:46.694691362+01:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: d26342e385ea51a0fbfbe23477999863e9489664ae803ea5c56da8897db84d24
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.8.1.tgz
version: 1.8.1
- apiVersion: v1
appVersion: 1.8.0
created: "2024-03-14T17:12:46.693750873+01:00"
description: Postgres Operator UI provides a graphical interface for a convenient
database-as-a-service user experience
digest: d4a7b40c23fd167841cc28342afdbd5ecc809181913a5c31061c83139187f148
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- ui
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator-ui
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-ui-1.8.0.tgz
version: 1.8.0
generated: "2024-03-14T17:12:46.689654615+01:00"

View File

@@ -9,7 +9,7 @@ metadata:
 name: {{ template "postgres-operator-ui.fullname" . }}
 namespace: {{ .Release.Namespace }}
 spec:
-replicas: 1
+replicas: {{ .Values.replicaCount }}
 selector:
 matchLabels:
 app.kubernetes.io/name: {{ template "postgres-operator-ui.name" . }}
@@ -84,13 +84,22 @@ spec:
 "limit_iops": 16000,
 "limit_throughput": 1000,
 "postgresql_versions": [
+"17",
 "16",
 "15",
 "14",
-"13",
-"12"
+"13"
 ]
 }
 {{- if .Values.extraEnvs }}
 {{- .Values.extraEnvs | toYaml | nindent 12 }}
 {{- end }}
+affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+{{- if .Values.priorityClassName }}
+priorityClassName: {{ .Values.priorityClassName }}
+{{- end }}

View File

@@ -6,9 +6,9 @@ replicaCount: 1
 # configure ui image
 image:
-registry: registry.opensource.zalan.do
-repository: acid/postgres-operator-ui
-tag: v1.11.0
+registry: ghcr.io
+repository: zalando/postgres-operator-ui
+tag: v1.15.0
 pullPolicy: "IfNotPresent"
 # Optionally specify an array of imagePullSecrets.
@@ -62,8 +62,6 @@ podAnnotations:
 extraEnvs:
 []
 # Example of settings to make snapshot view working in the ui when using AWS
-# - name: WALE_S3_ENDPOINT
-# value: https+path://s3.us-east-1.amazonaws.com:443
 # - name: SPILO_S3_BACKUP_PREFIX
 # value: spilo/
 # - name: AWS_ACCESS_KEY_ID
@@ -83,8 +81,6 @@
 # key: AWS_DEFAULT_REGION
 # - name: SPILO_S3_BACKUP_BUCKET
 # value: <s3 bucket used by the operator>
-# - name: "USE_AWS_INSTANCE_PROFILE"
-# value: "true"
 # configure UI service
 service:
@@ -111,3 +107,18 @@
 # - secretName: ui-tls
 # hosts:
 # - ui.example.org
+# priority class for operator-ui pod
+priorityClassName: ""
+# Affinity for pod assignment
+# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+affinity: {}
+# Node labels for pod assignment
+# Ref: https://kubernetes.io/docs/user-guide/node-selection/
+nodeSelector: {}
+# Tolerations for pod assignment
+# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+tolerations: []

View File

@@ -1,7 +1,7 @@
 apiVersion: v2
 name: postgres-operator
-version: 1.11.0
-appVersion: 1.11.0
+version: 1.15.0
+appVersion: 1.15.0
 home: https://github.com/zalando/postgres-operator
 description: Postgres Operator creates and manages PostgreSQL clusters running in Kubernetes
 keywords:

View File

@@ -68,7 +68,7 @@ spec:
 type: string
 docker_image:
 type: string
-default: "ghcr.io/zalando/spilo-16:3.2-p2"
+default: "ghcr.io/zalando/spilo-17:4.0-p3"
 enable_crd_registration:
 type: boolean
 default: true
@@ -160,17 +160,17 @@
 properties:
 major_version_upgrade_mode:
 type: string
-default: "off"
+default: "manual"
 major_version_upgrade_team_allow_list:
 type: array
 items:
 type: string
 minimal_major_version:
 type: string
-default: "12"
+default: "13"
 target_major_version:
 type: string
-default: "16"
+default: "17"
 kubernetes:
 type: object
 properties:
@@ -211,6 +211,9 @@
 enable_init_containers:
 type: boolean
 default: true
+enable_owner_references:
+type: boolean
+default: false
 enable_persistent_volume_claim_deletion:
 type: boolean
 default: true
@@ -223,6 +226,9 @@
 enable_readiness_probe:
 type: boolean
 default: false
+enable_secrets_deletion:
+type: boolean
+default: true
 enable_sidecars:
 type: boolean
 default: true
@@ -281,6 +287,9 @@
 oauth_token_secret_name:
 type: string
 default: "postgresql-operator"
+pdb_master_label_selector:
+type: boolean
+default: true
 pdb_name_format:
 type: string
 default: "postgres-{cluster}-pdb"
@@ -367,28 +376,28 @@
 properties:
 default_cpu_limit:
 type: string
-pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
 default_cpu_request:
 type: string
-pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
 default_memory_limit:
 type: string
-pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
 default_memory_request:
 type: string
-pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
 max_cpu_request:
 type: string
-pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
 max_memory_request:
 type: string
-pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
 min_cpu_limit:
 type: string
-pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
+pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
 min_memory_limit:
 type: string
-pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
+pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
 timeouts:
 type: object
 properties:
@@ -463,7 +472,6 @@
 type: string
 additional_secret_mount_path:
 type: string
-default: "/meta/credentials"
 aws_region:
 type: string
 default: "eu-central-1"
@@ -502,7 +510,7 @@
 pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
 logical_backup_docker_image:
 type: string
-default: "registry.opensource.zalan.do/acid/logical-backup:v1.11.0"
+default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.13.0"
 logical_backup_google_application_credentials:
 type: string
 logical_backup_job_prefix:
@@ -525,6 +533,8 @@
 type: string
 logical_backup_s3_bucket:
 type: string
+logical_backup_s3_bucket_prefix:
+type: string
 logical_backup_s3_endpoint:
 type: string
 logical_backup_s3_region:

View File

@@ -87,10 +87,14 @@ spec:
 - mountPath
 - volumeSource
 properties:
+isSubPathExpr:
+type: boolean
 name:
 type: string
 mountPath:
 type: string
+subPath:
+type: string
 targetContainers:
 type: array
 nullable: true
@@ -99,8 +103,6 @@
 volumeSource:
 type: object
 x-kubernetes-preserve-unknown-fields: true
-subPath:
-type: string
 allowedSourceRanges:
 type: array
 nullable: true
@@ -215,6 +217,8 @@
 items:
 type: object
 x-kubernetes-preserve-unknown-fields: true
+logicalBackupRetention:
+type: string
 logicalBackupSchedule:
 type: string
 pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
@@ -222,7 +226,7 @@
 type: array
 items:
 type: string
-pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$'
+pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$'
 masterServiceAnnotations:
 type: object
 additionalProperties:
@@ -371,12 +375,11 @@
 version:
 type: string
 enum:
-- "11"
-- "12"
 - "13"
 - "14"
 - "15"
 - "16"
+- "17"
 parameters:
 type: object
 additionalProperties:
@@ -511,6 +514,9 @@
 type: string
 batchSize:
 type: integer
+cpu:
+type: string
+pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
 database:
 type: string
 enableRecovery:
@@ -519,6 +525,9 @@
 type: object
 additionalProperties:
 type: string
+memory:
+type: string
+pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
 tables:
 type: object
 additionalProperties:
@@ -530,6 +539,8 @@
 type: string
 idColumn:
 type: string
+ignoreRecovery:
+type: boolean
 payloadColumn:
 type: string
 recoveryEventType:
@@ -632,6 +643,8 @@
 required:
 - size
 properties:
+isSubPathExpr:
+type: boolean
 iops:
 type: integer
 selector:


@ -2,11 +2,99 @@ apiVersion: v1
entries: entries:
postgres-operator: postgres-operator:
- apiVersion: v2 - apiVersion: v2
appVersion: 1.11.0 appVersion: 1.15.0
created: "2024-03-14T17:11:54.311938906+01:00" created: "2025-10-16T11:35:38.533627038+02:00"
description: Postgres Operator creates and manages PostgreSQL clusters running description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes in Kubernetes
digest: f12f5ae9282dd77d37e3bfd0aa47be58ed0b2f02056889d8f1111bdb2b9fe286 digest: 002dd47647bf51fbba023bd1762d807be478cf37de7a44b80cd01ac1f20bd94a
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.15.0.tgz
version: 1.15.0
- apiVersion: v2
appVersion: 1.14.0
created: "2025-10-16T11:35:38.52489216+02:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: 36e1571f3f455b213f16cdda7b1158648e8e84deb804ba47ed6b9b6d19263ba8
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.14.0.tgz
version: 1.14.0
- apiVersion: v2
appVersion: 1.13.0
created: "2025-10-16T11:35:38.517347652+02:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: a839601689aea0a7e6bc0712a5244d435683cf3314c95794097ff08540e1dfef
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.13.0.tgz
version: 1.13.0
- apiVersion: v2
appVersion: 1.12.2
created: "2025-10-16T11:35:38.510819005+02:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: 65858d14a40d7fd90c32bd9fc60021acc9555c161079f43a365c70171eaf21d8
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.12.2.tgz
version: 1.12.2
- apiVersion: v2
appVersion: 1.11.0
created: "2025-10-16T11:35:38.503781253+02:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: 3914b5e117bda0834f05c9207f007e2ac372864cf6e86dcc2e1362bbe46c14d9
home: https://github.com/zalando/postgres-operator home: https://github.com/zalando/postgres-operator
keywords: keywords:
- postgres - postgres
@ -25,7 +113,7 @@ entries:
version: 1.11.0 version: 1.11.0
- apiVersion: v2 - apiVersion: v2
appVersion: 1.10.1 appVersion: 1.10.1
created: "2024-03-14T17:11:54.3101439+01:00" created: "2025-10-16T11:35:38.494366224+02:00"
description: Postgres Operator creates and manages PostgreSQL clusters running description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes in Kubernetes
digest: cc3baa41753da92466223d0b334df27e79c882296577b404a8e9071411fcf19c digest: cc3baa41753da92466223d0b334df27e79c882296577b404a8e9071411fcf19c
@ -45,114 +133,4 @@ entries:
urls: urls:
- postgres-operator-1.10.1.tgz - postgres-operator-1.10.1.tgz
version: 1.10.1 version: 1.10.1
- apiVersion: v2 generated: "2025-10-16T11:35:38.487472753+02:00"
appVersion: 1.10.0
created: "2024-03-14T17:11:54.308561116+01:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: 60fc5c8059dfed175d14e1034b40997d9c59d33ec8ea158c0597f7228ab04b51
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.10.0.tgz
version: 1.10.0
- apiVersion: v2
appVersion: 1.9.0
created: "2024-03-14T17:11:54.3194627+01:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: 64df90c898ca591eb3a330328173ffaadfbf9ddd474d8c42ed143edc9e3f4276
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.9.0.tgz
version: 1.9.0
- apiVersion: v2
appVersion: 1.8.2
created: "2024-03-14T17:11:54.317846817+01:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: f77ffad2e98b72a621e5527015cf607935d3ed688f10ba4b626435acb9631b5b
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.8.2.tgz
version: 1.8.2
- apiVersion: v2
appVersion: 1.8.1
created: "2024-03-14T17:11:54.315242584+01:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: ee0c3bb6ba72fa4289ba3b1c6060e5b312dd023faba2a61b4cb7d9e5e2cc57a5
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.8.1.tgz
version: 1.8.1
- apiVersion: v1
appVersion: 1.8.0
created: "2024-03-14T17:11:54.313632778+01:00"
description: Postgres Operator creates and manages PostgreSQL clusters running
in Kubernetes
digest: 3ae232cf009e09aa2ad11c171484cd2f1b72e63c59735e58fbe2b6eb842f4c86
home: https://github.com/zalando/postgres-operator
keywords:
- postgres
- operator
- cloud-native
- patroni
- spilo
maintainers:
- email: opensource@zalando.de
name: Zalando
name: postgres-operator
sources:
- https://github.com/zalando/postgres-operator
urls:
- postgres-operator-1.8.0.tgz
version: 1.8.0
generated: "2024-03-14T17:11:54.305930529+01:00"

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@ -70,8 +70,8 @@ Flatten nested config options when ConfigMap is used as ConfigTarget
{{- $list := list }} {{- $list := list }}
{{- range $subKey, $subValue := $value }} {{- range $subKey, $subValue := $value }}
{{- $list = append $list (printf "%s:%s" $subKey $subValue) }} {{- $list = append $list (printf "%s:%s" $subKey $subValue) }}
{{ $key }}: {{ join "," $list | quote }}
{{- end }} {{- end }}
{{ $key }}: {{ join "," $list | quote }}
{{- else }} {{- else }}
{{ $key }}: {{ $value | quote }} {{ $key }}: {{ $value | quote }}
{{- end }} {{- end }}


@ -120,6 +120,7 @@ rules:
- create - create
- delete - delete
- get - get
- patch
- update - update
# to check nodes for node readiness label # to check nodes for node readiness label
- apiGroups: - apiGroups:
@ -139,8 +140,8 @@ rules:
- delete - delete
- get - get
- list - list
{{- if toString .Values.configKubernetes.storage_resize_mode | eq "pvc" }}
- patch - patch
{{- if or (toString .Values.configKubernetes.storage_resize_mode | eq "pvc") (toString .Values.configKubernetes.storage_resize_mode | eq "mixed") }}
- update - update
{{- end }} {{- end }}
# to read existing PVs. Creation should be done via dynamic provisioning # to read existing PVs. Creation should be done via dynamic provisioning
@ -196,6 +197,7 @@ rules:
- get - get
- list - list
- patch - patch
- update
# to CRUD cron jobs for logical backups # to CRUD cron jobs for logical backups
- apiGroups: - apiGroups:
- batch - batch


@ -52,6 +52,9 @@ spec:
{{- if .Values.controllerID.create }} {{- if .Values.controllerID.create }}
- name: CONTROLLER_ID - name: CONTROLLER_ID
value: {{ template "postgres-operator.controllerID" . }} value: {{ template "postgres-operator.controllerID" . }}
{{- end }}
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 8 }}
{{- end }} {{- end }}
resources: resources:
{{ toYaml .Values.resources | indent 10 }} {{ toYaml .Values.resources | indent 10 }}


@ -14,7 +14,7 @@ configuration:
users: users:
{{ tpl (toYaml .Values.configUsers) . | indent 4 }} {{ tpl (toYaml .Values.configUsers) . | indent 4 }}
major_version_upgrade: major_version_upgrade:
{{ toYaml .Values.configMajorVersionUpgrade | indent 4 }} {{ tpl (toYaml .Values.configMajorVersionUpgrade) . | indent 4 }}
kubernetes: kubernetes:
{{- if .Values.podPriorityClassName.name }} {{- if .Values.podPriorityClassName.name }}
pod_priority_class_name: {{ .Values.podPriorityClassName.name }} pod_priority_class_name: {{ .Values.podPriorityClassName.name }}
@ -23,23 +23,23 @@ configuration:
oauth_token_secret_name: {{ template "postgres-operator.fullname" . }} oauth_token_secret_name: {{ template "postgres-operator.fullname" . }}
{{ tpl (toYaml .Values.configKubernetes) . | indent 4 }} {{ tpl (toYaml .Values.configKubernetes) . | indent 4 }}
postgres_pod_resources: postgres_pod_resources:
{{ toYaml .Values.configPostgresPodResources | indent 4 }} {{ tpl (toYaml .Values.configPostgresPodResources) . | indent 4 }}
timeouts: timeouts:
{{ toYaml .Values.configTimeouts | indent 4 }} {{ tpl (toYaml .Values.configTimeouts) . | indent 4 }}
load_balancer: load_balancer:
{{ toYaml .Values.configLoadBalancer | indent 4 }} {{ tpl (toYaml .Values.configLoadBalancer) . | indent 4 }}
aws_or_gcp: aws_or_gcp:
{{ toYaml .Values.configAwsOrGcp | indent 4 }} {{ tpl (toYaml .Values.configAwsOrGcp) . | indent 4 }}
logical_backup: logical_backup:
{{ toYaml .Values.configLogicalBackup | indent 4 }} {{ tpl (toYaml .Values.configLogicalBackup) . | indent 4 }}
debug: debug:
{{ toYaml .Values.configDebug | indent 4 }} {{ tpl (toYaml .Values.configDebug) . | indent 4 }}
teams_api: teams_api:
{{ tpl (toYaml .Values.configTeamsApi) . | indent 4 }} {{ tpl (toYaml .Values.configTeamsApi) . | indent 4 }}
logging_rest_api: logging_rest_api:
{{ toYaml .Values.configLoggingRestApi | indent 4 }} {{ tpl (toYaml .Values.configLoggingRestApi) . | indent 4 }}
connection_pooler: connection_pooler:
{{ toYaml .Values.configConnectionPooler | indent 4 }} {{ tpl (toYaml .Values.configConnectionPooler) . | indent 4 }}
patroni: patroni:
{{ toYaml .Values.configPatroni | indent 4 }} {{ tpl (toYaml .Values.configPatroni) . | indent 4 }}
{{- end }} {{- end }}


@ -1,7 +1,7 @@
image: image:
registry: registry.opensource.zalan.do registry: ghcr.io
repository: acid/postgres-operator repository: zalando/postgres-operator
tag: v1.11.0 tag: v1.15.0
pullPolicy: "IfNotPresent" pullPolicy: "IfNotPresent"
# Optionally specify an array of imagePullSecrets. # Optionally specify an array of imagePullSecrets.
@ -38,7 +38,7 @@ configGeneral:
# etcd connection string for Patroni. Empty uses K8s-native DCS. # etcd connection string for Patroni. Empty uses K8s-native DCS.
etcd_host: "" etcd_host: ""
# Spilo docker image # Spilo docker image
docker_image: ghcr.io/zalando/spilo-16:3.2-p2 docker_image: ghcr.io/zalando/spilo-17:4.0-p3
# key name for annotation to ignore globally configured instance limits # key name for annotation to ignore globally configured instance limits
# ignore_instance_limits_annotation_key: "" # ignore_instance_limits_annotation_key: ""
@ -83,15 +83,15 @@ configUsers:
configMajorVersionUpgrade: configMajorVersionUpgrade:
# "off": no upgrade, "manual": manifest triggers action, "full": minimal version violation triggers too # "off": no upgrade, "manual": manifest triggers action, "full": minimal version violation triggers too
major_version_upgrade_mode: "off" major_version_upgrade_mode: "manual"
# upgrades will only be carried out for clusters of listed teams when mode is "off" # upgrades will only be carried out for clusters of listed teams when mode is "off"
# major_version_upgrade_team_allow_list: # major_version_upgrade_team_allow_list:
# - acid # - acid
# minimal Postgres major version that will not automatically be upgraded # minimal Postgres major version that will not automatically be upgraded
minimal_major_version: "12" minimal_major_version: "13"
# target Postgres major version when upgrading clusters automatically # target Postgres major version when upgrading clusters automatically
target_major_version: "16" target_major_version: "17"
configKubernetes: configKubernetes:
# list of additional capabilities for postgres container # list of additional capabilities for postgres container
@ -129,6 +129,8 @@ configKubernetes:
enable_finalizers: false enable_finalizers: false
# enables initContainers to run actions before Spilo is started # enables initContainers to run actions before Spilo is started
enable_init_containers: true enable_init_containers: true
# toggles if child resources should have an owner reference to the postgresql CR
enable_owner_references: false
# toggles if operator should delete PVCs on cluster deletion # toggles if operator should delete PVCs on cluster deletion
enable_persistent_volume_claim_deletion: true enable_persistent_volume_claim_deletion: true
# toggles pod anti affinity on the Postgres pods # toggles pod anti affinity on the Postgres pods
@ -137,6 +139,8 @@ configKubernetes:
enable_pod_disruption_budget: true enable_pod_disruption_budget: true
# toogles readiness probe for database pods # toogles readiness probe for database pods
enable_readiness_probe: false enable_readiness_probe: false
# toggles if operator should delete secrets on cluster deletion
enable_secrets_deletion: true
# enables sidecar containers to run alongside Spilo in the same pod # enables sidecar containers to run alongside Spilo in the same pod
enable_sidecars: true enable_sidecars: true
@ -169,7 +173,9 @@ configKubernetes:
# namespaced name of the secret containing the OAuth2 token to pass to the teams API # namespaced name of the secret containing the OAuth2 token to pass to the teams API
# oauth_token_secret_name: postgresql-operator # oauth_token_secret_name: postgresql-operator
# defines the template for PDB (Pod Disruption Budget) names # toggle if `spilo-role=master` selector should be added to the PDB (Pod Disruption Budget)
pdb_master_label_selector: true
# defines the template for PDB names
pdb_name_format: "postgres-{cluster}-pdb" pdb_name_format: "postgres-{cluster}-pdb"
# specify the PVC retention policy when scaling down and/or deleting # specify the PVC retention policy when scaling down and/or deleting
persistent_volume_claim_retention_policy: persistent_volume_claim_retention_policy:
@ -358,7 +364,7 @@ configLogicalBackup:
# logical_backup_memory_request: "" # logical_backup_memory_request: ""
# image for pods of the logical backup job (example runs pg_dumpall) # image for pods of the logical backup job (example runs pg_dumpall)
logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.11.0" logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.15.0"
# path of google cloud service account json file # path of google cloud service account json file
# logical_backup_google_application_credentials: "" # logical_backup_google_application_credentials: ""
@ -370,6 +376,8 @@ configLogicalBackup:
logical_backup_s3_access_key_id: "" logical_backup_s3_access_key_id: ""
# S3 bucket to store backup results # S3 bucket to store backup results
logical_backup_s3_bucket: "my-bucket-url" logical_backup_s3_bucket: "my-bucket-url"
# S3 bucket prefix to use
logical_backup_s3_bucket_prefix: "spilo"
# S3 region of bucket # S3 region of bucket
logical_backup_s3_region: "" logical_backup_s3_region: ""
# S3 endpoint url when not using AWS # S3 endpoint url when not using AWS
@ -498,6 +506,24 @@ readinessProbe:
initialDelaySeconds: 5 initialDelaySeconds: 5
periodSeconds: 10 periodSeconds: 10
# configure extra environment variables
# Extra environment variables are written in Kubernetes format and added "as is" to the pod's env variables
# https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
# https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables
extraEnvs:
[]
# Example of setting the maximum amount of memory / cpu that can be used by the Go process (to match resources.limits)
# - name: MY_VAR
# value: my-value
# - name: GOMAXPROCS
# valueFrom:
# resourceFieldRef:
# resource: limits.cpu
# - name: GOMEMLIMIT
# valueFrom:
# resourceFieldRef:
# resource: limits.memory
# Affinity for pod assignment # Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {} affinity: {}


@ -35,6 +35,8 @@ func init() {
flag.BoolVar(&outOfCluster, "outofcluster", false, "Whether the operator runs in- or outside of the Kubernetes cluster.") flag.BoolVar(&outOfCluster, "outofcluster", false, "Whether the operator runs in- or outside of the Kubernetes cluster.")
flag.BoolVar(&config.NoDatabaseAccess, "nodatabaseaccess", false, "Disable all access to the database from the operator side.") flag.BoolVar(&config.NoDatabaseAccess, "nodatabaseaccess", false, "Disable all access to the database from the operator side.")
flag.BoolVar(&config.NoTeamsAPI, "noteamsapi", false, "Disable all access to the teams API") flag.BoolVar(&config.NoTeamsAPI, "noteamsapi", false, "Disable all access to the teams API")
flag.IntVar(&config.KubeQPS, "kubeqps", 10, "Kubernetes api requests per second.")
flag.IntVar(&config.KubeBurst, "kubeburst", 20, "Kubernetes api requests burst limit.")
flag.Parse() flag.Parse()
config.EnableJsonLogging = os.Getenv("ENABLE_JSON_LOGGING") == "true" config.EnableJsonLogging = os.Getenv("ENABLE_JSON_LOGGING") == "true"
@ -83,6 +85,9 @@ func main() {
log.Fatalf("couldn't get REST config: %v", err) log.Fatalf("couldn't get REST config: %v", err)
} }
config.RestConfig.QPS = float32(config.KubeQPS)
config.RestConfig.Burst = config.KubeBurst
c := controller.NewController(&config, "") c := controller.NewController(&config, "")
c.Run(stop, wg) c.Run(stop, wg)


@ -5,27 +5,18 @@ pipeline:
vm_config: vm_config:
type: linux type: linux
size: large size: large
image: cdp-runtime/go
cache: cache:
paths: paths:
- /go/pkg/mod - /go/pkg/mod # pkg cache for Go modules
- ~/.cache/go-build # Go build cache
commands: commands:
- desc: 'Update' - desc: Run unit tests
cmd: | cmd: |
apt-get update make deps mocks test
- desc: 'Install required build software'
- desc: Build Docker image
cmd: | cmd: |
apt-get install -y make git apt-transport-https ca-certificates curl build-essential python3 python3-pip
- desc: 'Install go'
cmd: |
cd /tmp
wget -q https://storage.googleapis.com/golang/go1.21.7.linux-amd64.tar.gz -O go.tar.gz
tar -xf go.tar.gz
mv go /usr/local
ln -s /usr/local/go/bin/go /usr/bin/go
go version
- desc: 'Build docker image'
cmd: |
export PATH=$PATH:$HOME/go/bin
IS_PR_BUILD=${CDP_PULL_REQUEST_NUMBER+"true"} IS_PR_BUILD=${CDP_PULL_REQUEST_NUMBER+"true"}
if [[ ${CDP_TARGET_BRANCH} == "master" && ${IS_PR_BUILD} != "true" ]] if [[ ${CDP_TARGET_BRANCH} == "master" && ${IS_PR_BUILD} != "true" ]]
then then
@ -34,23 +25,7 @@ pipeline:
IMAGE=registry-write.opensource.zalan.do/acid/postgres-operator-test IMAGE=registry-write.opensource.zalan.do/acid/postgres-operator-test
fi fi
export IMAGE export IMAGE
make deps mocks docker make docker push
- desc: 'Run unit tests'
cmd: |
export PATH=$PATH:$HOME/go/bin
go test ./...
- desc: 'Push docker image'
cmd: |
export PATH=$PATH:$HOME/go/bin
IS_PR_BUILD=${CDP_PULL_REQUEST_NUMBER+"true"}
if [[ ${CDP_TARGET_BRANCH} == "master" && ${IS_PR_BUILD} != "true" ]]
then
IMAGE=registry-write.opensource.zalan.do/acid/postgres-operator
else
IMAGE=registry-write.opensource.zalan.do/acid/postgres-operator-test
fi
export IMAGE
make push
- id: build-operator-ui - id: build-operator-ui
type: script type: script
@ -90,7 +65,7 @@ pipeline:
commands: commands:
- desc: Build image - desc: Build image
cmd: | cmd: |
cd docker/logical-backup cd logical-backup
export TAG=$(git describe --tags --always --dirty) export TAG=$(git describe --tags --always --dirty)
IMAGE="registry-write.opensource.zalan.do/acid/logical-backup" IMAGE="registry-write.opensource.zalan.do/acid/logical-backup"
docker build --rm -t "$IMAGE:$TAG$CDP_TAG" . docker build --rm -t "$IMAGE:$TAG$CDP_TAG" .


@ -1,18 +1,14 @@
FROM registry.opensource.zalan.do/library/alpine-3.15:latest FROM golang:1.25-alpine
LABEL maintainer="Team ACID @ Zalando <team-acid@zalando.de>" LABEL maintainer="Team ACID @ Zalando <team-acid@zalando.de>"
# We need root certificates to deal with teams api over https # We need root certificates to deal with teams api over https
RUN apk --no-cache add ca-certificates go git musl-dev RUN apk -U add --no-cache ca-certificates delve
COPY build/* / COPY build/* /
RUN addgroup -g 1000 pgo RUN addgroup -g 1000 pgo
RUN adduser -D -u 1000 -G pgo -g 'Postgres Operator' pgo RUN adduser -D -u 1000 -G pgo -g 'Postgres Operator' pgo
RUN go get -d github.com/derekparker/delve/cmd/dlv
RUN cp /root/go/bin/dlv /dlv
RUN chown -R pgo:pgo /dlv
USER pgo:pgo USER pgo:pgo
RUN ls -l / RUN ls -l /


@ -1,23 +1,20 @@
ARG BASE_IMAGE=registry.opensource.zalan.do/library/alpine-3:latest ARG BASE_IMAGE=registry.opensource.zalan.do/library/alpine-3:latest
FROM golang:1.25-alpine AS builder
ARG VERSION=latest ARG VERSION=latest
FROM ubuntu:20.04 as builder
ARG VERSION
COPY . /go/src/github.com/zalando/postgres-operator COPY . /go/src/github.com/zalando/postgres-operator
WORKDIR /go/src/github.com/zalando/postgres-operator WORKDIR /go/src/github.com/zalando/postgres-operator
ENV OPERATOR_LDFLAGS="-X=main.version=${VERSION}" RUN GO111MODULE=on go mod vendor \
RUN bash docker/build_operator.sh && CGO_ENABLED=0 go build -o build/postgres-operator -v -ldflags "-X=main.version=${VERSION}" cmd/main.go
FROM ${BASE_IMAGE} FROM ${BASE_IMAGE}
LABEL maintainer="Team ACID @ Zalando <team-acid@zalando.de>" LABEL maintainer="Team ACID @ Zalando <team-acid@zalando.de>"
LABEL org.opencontainers.image.source="https://github.com/zalando/postgres-operator" LABEL org.opencontainers.image.source="https://github.com/zalando/postgres-operator"
# We need root certificates to deal with teams api over https # We need root certificates to deal with teams api over https
RUN apk --no-cache add curl RUN apk -U upgrade --no-cache \
RUN apk --no-cache add ca-certificates && apk add --no-cache curl ca-certificates
COPY --from=builder /go/src/github.com/zalando/postgres-operator/build/* / COPY --from=builder /go/src/github.com/zalando/postgres-operator/build/* /


@ -13,7 +13,7 @@ apt-get install -y wget
( (
cd /tmp cd /tmp
wget -q "https://storage.googleapis.com/golang/go1.21.7.linux-${arch}.tar.gz" -O go.tar.gz wget -q "https://storage.googleapis.com/golang/go1.25.3.linux-${arch}.tar.gz" -O go.tar.gz
tar -xf go.tar.gz tar -xf go.tar.gz
mv go /usr/local mv go /usr/local
ln -s /usr/local/go/bin/go /usr/bin/go ln -s /usr/local/go/bin/go /usr/bin/go


@ -63,14 +63,17 @@ the `PGVERSION` environment variable is set for the database pods. Since
`v1.6.0` the related option `enable_pgversion_env_var` is enabled by default. `v1.6.0` the related option `enable_pgversion_env_var` is enabled by default.
In-place major version upgrades can be configured to be executed by the In-place major version upgrades can be configured to be executed by the
operator with the `major_version_upgrade_mode` option. By default it is set operator with the `major_version_upgrade_mode` option. By default, it is
to `off` which means the cluster version will not change when increased in enabled (mode: `manual`). In any case, altering the version in the manifest
the manifest. Still, a rolling update would be triggered updating the will trigger a rolling update of pods to update the `PGVERSION` env variable.
`PGVERSION` variable. But Spilo's [`configure_spilo`](https://github.com/zalando/spilo/blob/master/postgres-appliance/scripts/configure_spilo.py) Spilo's [`configure_spilo`](https://github.com/zalando/spilo/blob/master/postgres-appliance/scripts/configure_spilo.py)
script will notice the version mismatch and start the old version again. script will notice the version mismatch but start the current version again.
In this scenario the major version could then be run by a user from within the Next, the operator would call an upgrade script inside Spilo. When automatic
master pod. Exec into the container and run: upgrades are disabled (mode: `off`) the upgrade could still be run by a user
from within the primary pod. This gives you full control over the point in
time when the upgrade can be started (check also maintenance windows below).
Exec into the container and run:
```bash ```bash
python3 /scripts/inplace_upgrade.py N python3 /scripts/inplace_upgrade.py N
``` ```
@ -79,8 +82,32 @@ The upgrade is usually fast, well under one minute for most DBs. Note, that
changes become irrevertible once `pg_upgrade` is called. To understand the changes become irrevertible once `pg_upgrade` is called. To understand the
upgrade procedure, refer to the [corresponding PR in Spilo](https://github.com/zalando/spilo/pull/488). upgrade procedure, refer to the [corresponding PR in Spilo](https://github.com/zalando/spilo/pull/488).
When `major_version_upgrade_mode` is set to `manual` the operator will run When `major_version_upgrade_mode` is set to `full` the operator will compare
the upgrade script for you after the manifest is updated and pods are rotated. the version in the manifest with the configured `minimal_major_version`. If it
is lower the operator would start an automatic upgrade as described above. The
configured `target_major_version` will be used as the new version. This option
can be useful if you have to get rid of outdated major versions in your fleet.
Please note, that the operator does not patch the version in the manifest.
Thus, the `full` mode can create drift between desired and actual state.
### Upgrade during maintenance windows
When `maintenanceWindows` are defined in the Postgres manifest the operator
will trigger a major version upgrade only during these periods. Make sure they
are at least twice as long as your configured `resync_period` to guarantee
that operator actions can be triggered.
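
A minimal sketch of such windows in a cluster manifest (cluster name and time
ranges are placeholders; the accepted formats are described in the cluster
manifest reference):

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  maintenanceWindows:
  - "01:00-06:00"        # daily window, times in UTC
  - "Sat:00:00-04:00"    # window on a specific weekday
```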
### Upgrade annotations
When an upgrade is executed, the operator sets an annotation in the PostgreSQL
resource, either `last-major-upgrade-success` if the upgrade succeeds, or
`last-major-upgrade-failure` if it fails. The value of the annotation is a
timestamp indicating when the upgrade occurred.
If a PostgreSQL resource contains a failure annotation, the operator will not
attempt to retry the upgrade during a sync event. To remove the failure
annotation, you can revert the PostgreSQL version back to the current version.
This action will trigger the removal of the failure annotation.
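
For illustration, a failed attempt would leave an annotation on the `postgresql`
resource roughly like the following sketch (the timestamp value is a placeholder):

```yaml
metadata:
  annotations:
    # set by the operator after a failed in-place upgrade; remove it by
    # reverting the manifest version as described above
    last-major-upgrade-failure: "2025-01-01T00:00:00Z"
```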
## Non-default cluster domain ## Non-default cluster domain
@ -168,12 +195,14 @@ from numerous escape characters in the latter log entry, view it in CLI with
used internally in K8s. used internally in K8s.
The StatefulSet is replaced if the following properties change: The StatefulSet is replaced if the following properties change:
- annotations - annotations
- volumeClaimTemplates - volumeClaimTemplates
- template volumes - template volumes
The StatefulSet is replaced and a rolling update is triggered if the following The StatefulSet is replaced and a rolling update is triggered if the following
properties differ between the old and new state: properties differ between the old and new state:
- container name, ports, image, resources, env, envFrom, securityContext and volumeMounts - container name, ports, image, resources, env, envFrom, securityContext and volumeMounts
- template labels, annotations, service account, securityContext, affinity, priority class and termination grace period - template labels, annotations, service account, securityContext, affinity, priority class and termination grace period
@ -223,9 +252,9 @@ configuration:
Now, every cluster manifest must contain the configured annotation keys to Now, every cluster manifest must contain the configured annotation keys to
trigger the delete process when running `kubectl delete pg`. Note, that the trigger the delete process when running `kubectl delete pg`. Note, that the
`Postgresql` resource would still get deleted as K8s' API server does not `Postgresql` resource would still get deleted because the operator does not
block it. Only the operator logs will tell, that the delete criteria wasn't instruct K8s' API server to block it. Only the operator logs will tell, that
met. the delete criteria was not met.
**cluster manifest** **cluster manifest**
@ -243,11 +272,64 @@ spec:
In case, the resource has been deleted accidentally or the annotations were In case, the resource has been deleted accidentally or the annotations were
simply forgotten, it's safe to recreate the cluster with `kubectl create`. simply forgotten, it's safe to recreate the cluster with `kubectl create`.
Existing Postgres clusters are not replaced by the operator. But, as the Existing Postgres clusters are not replaced by the operator. But, when the
original cluster still exists the status will show `CreateFailed` at first. original cluster still exists the status will be `CreateFailed` at first. On
On the next sync event it should change to `Running`. However, as it is in the next sync event it should change to `Running`. However, because it is in
fact a new resource for K8s, the UID will differ which can trigger a rolling fact a new resource for K8s, the UID, and therefore the backup path to S3,
update of the pods because the UID is used as part of backup path to S3. will differ and trigger a rolling update of the pods.
## Owner References and Finalizers
The Postgres Operator can set [owner references](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) to most of a cluster's child resources to improve
monitoring with GitOps tools and enable cascading deletes. There are two
exceptions:
* Persistent Volume Claims, because they are handled by the [PV Reclaim Policy](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/) of the StatefulSet
* Cross-namespace secrets, because owner references are not allowed across namespaces by design
The operator would clean these resources up with its regular delete loop
unless they got synced correctly. If for some reason the initial cluster sync
fails, e.g. after a cluster creation or operator restart, a deletion of the
cluster manifest might leave orphaned resources behind which the user has to
clean up manually.
Another option is to enable finalizers which first ensures the deletion of all
child resources before the cluster manifest gets removed. There is a trade-off
though: The deletion is only performed after the next two operator SYNC cycles
with the first one setting a `deletionTimestamp` and the latter reacting to it.
The final removal of the custom resource will add a DELETE event to the worker
queue but the child resources are already gone at this point. If you do not
desire this behavior consider enabling owner references instead.
**postgres-operator ConfigMap**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-operator
data:
enable_finalizers: "false"
enable_owner_references: "true"
```
**OperatorConfiguration**
```yaml
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
name: postgresql-operator-configuration
configuration:
kubernetes:
enable_finalizers: false
enable_owner_references: true
```
:warning: Please note, both options are disabled by default. When enabling owner
references the operator cannot block cascading deletes, even when the [delete protection annotations](administrator.md#delete-protection-via-annotations)
are in place. You would need a K8s admission controller that blocks the actual
`kubectl delete` API call e.g. based on existing annotations.
## Role-based access control for the operator ## Role-based access control for the operator
@ -304,7 +386,7 @@ exceptions:
The interval of days can be set with `password_rotation_interval` (default The interval of days can be set with `password_rotation_interval` (default
`90` = 90 days, minimum 1). On each rotation the user name and password values `90` = 90 days, minimum 1). On each rotation the user name and password values
are replaced in the K8s secret. They belong to a newly created user named after are replaced in the K8s secret. They belong to a newly created user named after
the original role plus rotation date in YYMMDD format. All priviliges are the original role plus rotation date in YYMMDD format. All privileges are
inherited meaning that migration scripts should still grant and revoke rights inherited meaning that migration scripts should still grant and revoke rights
against the original role. The timestamp of the next rotation (in RFC 3339 against the original role. The timestamp of the next rotation (in RFC 3339
format, UTC timezone) is written to the secret as well. Note, if the rotation format, UTC timezone) is written to the secret as well. Note, if the rotation
@ -484,7 +566,7 @@ manifest affinity.
``` ```
If `node_readiness_label_merge` is set to `"OR"` (default) the readiness label If `node_readiness_label_merge` is set to `"OR"` (default) the readiness label
affinty will be appended with its own expressions block: affinity will be appended with its own expressions block:
```yaml ```yaml
affinity: affinity:
@ -540,22 +622,34 @@ By default the topology key for the pod anti affinity is set to
`kubernetes.io/hostname`, you can set another topology key e.g. `kubernetes.io/hostname`, you can set another topology key e.g.
`failure-domain.beta.kubernetes.io/zone`. See [built-in node labels](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#interlude-built-in-node-labels) for available topology keys. `failure-domain.beta.kubernetes.io/zone`. See [built-in node labels](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#interlude-built-in-node-labels) for available topology keys.
## Pod Disruption Budget ## Pod Disruption Budgets
By default the operator uses a PodDisruptionBudget (PDB) to protect the cluster By default the operator creates two PodDisruptionBudgets (PDB) to protect the cluster
from voluntary disruptions and hence unwanted DB downtime. The `MinAvailable` from voluntary disruptions and hence unwanted DB downtime: so-called primary PDB and
parameter of the PDB is set to `1` which prevents killing masters in single-node and PDB for critical operations.
clusters and/or the last remaining running instance in a multi-node cluster.
### Primary PDB
The `MinAvailable` parameter of this PDB is set to `1` and, if `pdb_master_label_selector`
is enabled, label selector includes `spilo-role=master` condition, which prevents killing
masters in single-node clusters and/or the last remaining running instance in a multi-node
cluster.
### PDB for critical operations
The `MinAvailable` parameter of this PDB is equal to the `numberOfInstances` set in the
cluster manifest, while label selector includes `critical-operation=true` condition. This
allows you to protect all pods of a cluster, given they are labeled accordingly.
For example, the operator labels all Spilo pods with `critical-operation=true` during the major
version upgrade run. You may want to protect cluster pods during other critical operations
by assigning the label to pods yourself or using other means of automation.
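
As a sketch, protecting pods during your own critical operation boils down to
giving them the label this PDB selects on (the pod name is a placeholder):

```yaml
# excerpt of a Spilo pod covered by the PDB for critical operations
metadata:
  name: acid-minimal-cluster-0   # placeholder pod name
  labels:
    critical-operation: "true"
```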
The PDB is only relaxed in two scenarios: The PDB is only relaxed in two scenarios:
* If a cluster is scaled down to `0` instances (e.g. for draining nodes) * If a cluster is scaled down to `0` instances (e.g. for draining nodes)
* If the PDB is disabled in the configuration (`enable_pod_disruption_budget`) * If the PDB is disabled in the configuration (`enable_pod_disruption_budget`)
The PDB is still in place having `MinAvailable` set to `0`. If enabled it will The PDBs are still in place having `MinAvailable` set to `0`. Disabling PDBs
be automatically set to `1` on scale up. Disabling PDBs helps avoid blocking helps avoid blocking Kubernetes upgrades in managed K8s environments at the
Kubernetes upgrades in managed K8s environments at the cost of prolonged DB cost of prolonged DB downtime. See PR [#384](https://github.com/zalando/postgres-operator/pull/384)
downtime. See PR [#384](https://github.com/zalando/postgres-operator/pull/384)
for the use case. for the use case.
## Add cluster-specific labels ## Add cluster-specific labels
@ -806,6 +900,7 @@ services:
There are multiple options to specify service annotations that will be merged There are multiple options to specify service annotations that will be merged
with each other and override in the following order (where latter take with each other and override in the following order (where latter take
precedence): precedence):
1. Default annotations if LoadBalancer is enabled 1. Default annotations if LoadBalancer is enabled
2. Globally configured `custom_service_annotations` 2. Globally configured `custom_service_annotations`
3. `serviceAnnotations` specified in the cluster manifest 3. `serviceAnnotations` specified in the cluster manifest
@ -1048,7 +1143,7 @@ metadata:
iam.gke.io/gcp-service-account: <GCP_SERVICE_ACCOUNT_NAME>@<GCP_PROJECT_ID>.iam.gserviceaccount.com iam.gke.io/gcp-service-account: <GCP_SERVICE_ACCOUNT_NAME>@<GCP_PROJECT_ID>.iam.gserviceaccount.com
``` ```
2. Specify the new custom service account in your [operator paramaters](./reference/operator_parameters.md) 2. Specify the new custom service account in your [operator parameters](./reference/operator_parameters.md)
If using manual deployment or kustomize, this is done by setting If using manual deployment or kustomize, this is done by setting
`pod_service_account_name` in your configuration file specified in the `pod_service_account_name` in your configuration file specified in the
@ -1217,7 +1312,7 @@ aws_or_gcp:
If cluster members have to be (re)initialized restoring physical backups If cluster members have to be (re)initialized restoring physical backups
happens automatically either from the backup location or by running happens automatically either from the backup location or by running
[pg_basebackup](https://www.postgresql.org/docs/16/app-pgbasebackup.html) [pg_basebackup](https://www.postgresql.org/docs/17/app-pgbasebackup.html)
on one of the other running instances (preferably replicas if they do not lag on one of the other running instances (preferably replicas if they do not lag
behind). You can test restoring backups by [cloning](user.md#how-to-clone-an-existing-postgresql-cluster) behind). You can test restoring backups by [cloning](user.md#how-to-clone-an-existing-postgresql-cluster)
clusters. clusters.
@ -1283,7 +1378,7 @@ but only snapshots of your data. In its current state, see logical backups as a
way to quickly create SQL dumps that you can easily restore in an empty test way to quickly create SQL dumps that you can easily restore in an empty test
cluster. cluster.
2. The [example image](https://github.com/zalando/postgres-operator/blob/master/docker/logical-backup/Dockerfile) implements the backup 2. The [example image](https://github.com/zalando/postgres-operator/blob/master/logical-backup/Dockerfile) implements the backup
via `pg_dumpall` and upload of compressed and encrypted results to an S3 bucket. via `pg_dumpall` and upload of compressed and encrypted results to an S3 bucket.
`pg_dumpall` requires a `superuser` access to a DB and runs on the replica when `pg_dumpall` requires a `superuser` access to a DB and runs on the replica when
possible. possible.
@ -1325,6 +1420,10 @@ configuration:
volumeMounts: volumeMounts:
- mountPath: /custom-pgdata-mountpoint - mountPath: /custom-pgdata-mountpoint
name: pgdata name: pgdata
env:
- name: "ENV_VAR_NAME"
value: "any-k8s-env-things"
command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
- ... - ...
``` ```
@ -1399,7 +1498,7 @@ make docker
# build in image in minikube docker env # build in image in minikube docker env
eval $(minikube docker-env) eval $(minikube docker-env)
docker build -t registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.1 . docker build -t ghcr.io/zalando/postgres-operator-ui:v1.13.0 .
# apply UI manifests next to a running Postgres Operator # apply UI manifests next to a running Postgres Operator
kubectl apply -f manifests/ kubectl apply -f manifests/


@ -16,7 +16,7 @@ under the ~/go/src sub directories.
Given the schema above, the Postgres Operator source code located at Given the schema above, the Postgres Operator source code located at
`github.com/zalando/postgres-operator` should be put at `github.com/zalando/postgres-operator` should be put at
-`~/go/src/github.com/zalando/postgres-operator`. `~/go/src/github.com/zalando/postgres-operator`.
```bash ```bash
export GOPATH=~/go export GOPATH=~/go
@ -105,6 +105,7 @@ and K8s-like APIs for its custom resource definitions, namely the
Postgres CRD and the operator CRD. The usage of the code generation follows Postgres CRD and the operator CRD. The usage of the code generation follows
conventions from the K8s community. Relevant scripts live in the `hack` conventions from the K8s community. Relevant scripts live in the `hack`
directory: directory:
* `update-codegen.sh` triggers code generation for the APIs defined in `pkg/apis/acid.zalan.do/`, * `update-codegen.sh` triggers code generation for the APIs defined in `pkg/apis/acid.zalan.do/`,
* `verify-codegen.sh` checks if the generated code is up-to-date (to be used within CI). * `verify-codegen.sh` checks if the generated code is up-to-date (to be used within CI).
@ -112,6 +113,7 @@ The `/pkg/generated/` contains the resultant code. To make these scripts work,
you may need to `export GOPATH=$(go env GOPATH)` you may need to `export GOPATH=$(go env GOPATH)`
References for code generation are: References for code generation are:
* [Relevant pull request](https://github.com/zalando/postgres-operator/pull/369) * [Relevant pull request](https://github.com/zalando/postgres-operator/pull/369)
See comments there for minor issues that can sometimes break the generation process. See comments there for minor issues that can sometimes break the generation process.
* [Code generator source code](https://github.com/kubernetes/code-generator) * [Code generator source code](https://github.com/kubernetes/code-generator)
@ -186,7 +188,7 @@ go get -u github.com/derekparker/delve/cmd/dlv
``` ```
RUN apk --no-cache add go git musl-dev RUN apk --no-cache add go git musl-dev
RUN go get -d github.com/derekparker/delve/cmd/dlv RUN go get github.com/derekparker/delve/cmd/dlv
``` ```
* Update the `Makefile` to build the project with debugging symbols. For that * Update the `Makefile` to build the project with debugging symbols. For that
@ -315,6 +317,7 @@ precedence.
Update the following Go files that obtain the configuration parameter from the Update the following Go files that obtain the configuration parameter from the
manifest files: manifest files:
* [operator_configuration_type.go](https://github.com/zalando/postgres-operator/blob/master/pkg/apis/acid.zalan.do/v1/operator_configuration_type.go) * [operator_configuration_type.go](https://github.com/zalando/postgres-operator/blob/master/pkg/apis/acid.zalan.do/v1/operator_configuration_type.go)
* [operator_config.go](https://github.com/zalando/postgres-operator/blob/master/pkg/controller/operator_config.go) * [operator_config.go](https://github.com/zalando/postgres-operator/blob/master/pkg/controller/operator_config.go)
* [config.go](https://github.com/zalando/postgres-operator/blob/master/pkg/util/config/config.go) * [config.go](https://github.com/zalando/postgres-operator/blob/master/pkg/util/config/config.go)
@ -323,6 +326,7 @@ Postgres manifest parameters are defined in the [api package](https://github.com
The operator behavior has to be implemented at least in [k8sres.go](https://github.com/zalando/postgres-operator/blob/master/pkg/cluster/k8sres.go). The operator behavior has to be implemented at least in [k8sres.go](https://github.com/zalando/postgres-operator/blob/master/pkg/cluster/k8sres.go).
Validation of CRD parameters is controlled in [crds.go](https://github.com/zalando/postgres-operator/blob/master/pkg/apis/acid.zalan.do/v1/crds.go). Validation of CRD parameters is controlled in [crds.go](https://github.com/zalando/postgres-operator/blob/master/pkg/apis/acid.zalan.do/v1/crds.go).
Please, reflect your changes in tests, for example in: Please, reflect your changes in tests, for example in:
* [config_test.go](https://github.com/zalando/postgres-operator/blob/master/pkg/util/config/config_test.go) * [config_test.go](https://github.com/zalando/postgres-operator/blob/master/pkg/util/config/config_test.go)
* [k8sres_test.go](https://github.com/zalando/postgres-operator/blob/master/pkg/cluster/k8sres_test.go) * [k8sres_test.go](https://github.com/zalando/postgres-operator/blob/master/pkg/cluster/k8sres_test.go)
* [util_test.go](https://github.com/zalando/postgres-operator/blob/master/pkg/apis/acid.zalan.do/v1/util_test.go) * [util_test.go](https://github.com/zalando/postgres-operator/blob/master/pkg/apis/acid.zalan.do/v1/util_test.go)
@ -330,6 +334,7 @@ Please, reflect your changes in tests, for example in:
### Updating manifest files ### Updating manifest files
For the CRD-based configuration, please update the following files: For the CRD-based configuration, please update the following files:
* the default [OperatorConfiguration](https://github.com/zalando/postgres-operator/blob/master/manifests/postgresql-operator-default-configuration.yaml) * the default [OperatorConfiguration](https://github.com/zalando/postgres-operator/blob/master/manifests/postgresql-operator-default-configuration.yaml)
* the CRD's [validation](https://github.com/zalando/postgres-operator/blob/master/manifests/operatorconfiguration.crd.yaml) * the CRD's [validation](https://github.com/zalando/postgres-operator/blob/master/manifests/operatorconfiguration.crd.yaml)
* the CRD's validation in the [Helm chart](https://github.com/zalando/postgres-operator/blob/master/charts/postgres-operator/crds/operatorconfigurations.yaml) * the CRD's validation in the [Helm chart](https://github.com/zalando/postgres-operator/blob/master/charts/postgres-operator/crds/operatorconfigurations.yaml)
@ -342,6 +347,7 @@ Last but no least, update the [ConfigMap](https://github.com/zalando/postgres-op
Finally, add a section for each new configuration option and/or cluster manifest Finally, add a section for each new configuration option and/or cluster manifest
parameter in the reference documents: parameter in the reference documents:
* [config reference](reference/operator_parameters.md) * [config reference](reference/operator_parameters.md)
* [manifest reference](reference/cluster_manifest.md) * [manifest reference](reference/cluster_manifest.md)


@ -10,7 +10,7 @@ hence set it up first. For local tests we recommend to use one of the following
solutions: solutions:
* [minikube](https://github.com/kubernetes/minikube/releases), which creates a * [minikube](https://github.com/kubernetes/minikube/releases), which creates a
single-node K8s cluster inside a VM (requires KVM or VirtualBox), K8s cluster inside a container or VM (requires Docker, KVM, Hyper-V, HyperKit, VirtualBox, or similar),
* [kind](https://kind.sigs.k8s.io/) and [k3d](https://k3d.io), which allows creating multi-nodes K8s * [kind](https://kind.sigs.k8s.io/) and [k3d](https://k3d.io), which allows creating multi-nodes K8s
clusters running on Docker (requires Docker) clusters running on Docker (requires Docker)
@ -20,7 +20,7 @@ This quickstart assumes that you have started minikube or created a local kind
cluster. Note that you can also use built-in K8s support in the Docker Desktop cluster. Note that you can also use built-in K8s support in the Docker Desktop
for Mac to follow the steps of this tutorial. You would have to replace for Mac to follow the steps of this tutorial. You would have to replace
`minikube start` and `minikube delete` with your launch actions for the Docker `minikube start` and `minikube delete` with your launch actions for the Docker
built-in K8s support. Desktop built-in K8s support.
## Configuration Options ## Configuration Options
@ -230,7 +230,7 @@ kubectl delete postgresql acid-minimal-cluster
``` ```
This should remove the associated StatefulSet, database Pods, Services and This should remove the associated StatefulSet, database Pods, Services and
Endpoints. The PersistentVolumes are released and the PodDisruptionBudget is Endpoints. The PersistentVolumes are released and the PodDisruptionBudgets are
deleted. Secrets however are not deleted and backups will remain in place. deleted. Secrets however are not deleted and backups will remain in place.
When deleting a cluster while it is still starting up or got stuck during that When deleting a cluster while it is still starting up or got stuck during that


@ -114,6 +114,12 @@ These parameters are grouped directly under the `spec` key in the manifest.
this parameter. Optional, when empty the load balancer service becomes this parameter. Optional, when empty the load balancer service becomes
inaccessible from outside of the Kubernetes cluster. inaccessible from outside of the Kubernetes cluster.
* **maintenanceWindows**
a list which defines specific time frames when certain maintenance operations,
such as automatic major upgrades or master pod migration, are allowed. Accepted formats
are "01:00-06:00" for daily maintenance windows or "Sat:00:00-04:00" for specific
days, with all times in UTC.
* **users** * **users**
a map of usernames to user flags for the users that should be created in the a map of usernames to user flags for the users that should be created in the
cluster by the operator. User flags are a list, allowed elements are cluster by the operator. User flags are a list, allowed elements are
@ -223,10 +229,17 @@ These parameters are grouped directly under the `spec` key in the manifest.
Determines if the logical backup of this cluster should be taken and uploaded Determines if the logical backup of this cluster should be taken and uploaded
to S3. Default: false. Optional. to S3. Default: false. Optional.
* **logicalBackupRetention**
You can set a retention time for the logical backup cron job to remove old backup
files after a new backup has been uploaded. Example values are "3 days", "2 weeks", or
"1 month". It takes precedence over the global `logical_backup_s3_retention_time`
configuration. Currently only supported for AWS. Optional.
* **logicalBackupSchedule** * **logicalBackupSchedule**
Schedule for the logical backup K8s cron job. Please take Schedule for the logical backup K8s cron job. Please take
[the reference schedule format](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#schedule) [the reference schedule format](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#schedule)
into account. Optional. Default is: "30 00 \* \* \*" into account. It takes precedence over the global `logical_backup_schedule`
configuration. Optional.
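
A short sketch combining both options in a cluster manifest (values are examples only):

```yaml
spec:
  logicalBackupSchedule: "30 00 * * *"   # example cron schedule
  logicalBackupRetention: "2 weeks"      # example retention, currently AWS only
```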
* **additionalVolumes** * **additionalVolumes**
List of additional volumes to mount in each container of the statefulset pod. List of additional volumes to mount in each container of the statefulset pod.
@ -234,7 +247,8 @@ These parameters are grouped directly under the `spec` key in the manifest.
[kubernetes volumeSource](https://godoc.org/k8s.io/api/core/v1#VolumeSource). [kubernetes volumeSource](https://godoc.org/k8s.io/api/core/v1#VolumeSource).
It allows you to mount existing PersistentVolumeClaims, ConfigMaps and Secrets inside the StatefulSet. It allows you to mount existing PersistentVolumeClaims, ConfigMaps and Secrets inside the StatefulSet.
Also an `emptyDir` volume can be shared between initContainer and statefulSet. Also an `emptyDir` volume can be shared between initContainer and statefulSet.
Additionaly, you can provide a `SubPath` for volume mount (a file in a configMap source volume, for example). Additionally, you can provide a `SubPath` for volume mount (a file in a configMap source volume, for example).
Set `isSubPathExpr` to true if you want to include [API environment variables](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment).
You can also specify in which container the additional Volumes will be mounted with the `targetContainers` array option. You can also specify in which container the additional Volumes will be mounted with the `targetContainers` array option.
If `targetContainers` is empty, additional volumes will be mounted only in the `postgres` container. If `targetContainers` is empty, additional volumes will be mounted only in the `postgres` container.
If you set the `all` special item, it will be mounted in all containers (postgres + sidecars). If you set the `all` special item, it will be mounted in all containers (postgres + sidecars).
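
A minimal sketch of an `additionalVolumes` entry sharing an `emptyDir` with all
containers (volume name and mount path are placeholders):

```yaml
spec:
  additionalVolumes:
  - name: scratch                    # placeholder volume name
    mountPath: /home/postgres/scratch
    targetContainers:
    - all                            # mount into postgres and all sidecars
    volumeSource:
      emptyDir: {}
```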
@ -243,7 +257,7 @@ These parameters are grouped directly under the `spec` key in the manifest.
## Prepared Databases ## Prepared Databases
The operator can create databases with default owner, reader and writer roles The operator can create databases with default owner, reader and writer roles
without the need to specifiy them under `users` or `databases` sections. Those without the need to specify them under `users` or `databases` sections. Those
parameters are grouped under the `preparedDatabases` top-level key. For more parameters are grouped under the `preparedDatabases` top-level key. For more
information, see [user docs](../user.md#prepared-databases-with-roles-and-default-privileges). information, see [user docs](../user.md#prepared-databases-with-roles-and-default-privileges).
@ -477,6 +491,9 @@ properties of the persistent storage that stores Postgres data.
* **subPath** * **subPath**
Subpath to use when mounting volume into Spilo container. Optional. Subpath to use when mounting volume into Spilo container. Optional.
* **isSubPathExpr**
Set it to true if the specified subPath is an expression. Optional.
* **iops** * **iops**
When running the operator on AWS the latest generation of EBS volumes (`gp3`) When running the operator on AWS the latest generation of EBS volumes (`gp3`)
allows for configuring the number of IOPS. Maximum is 16000. Optional. allows for configuring the number of IOPS. Maximum is 16000. Optional.
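
For orientation, a `volume` section combining these optional fields might look
like the following sketch (size, IOPS and subpath are placeholder values):

```yaml
spec:
  volume:
    size: 10Gi          # required
    iops: 3000          # only effective for gp3 volumes on AWS
    subPath: pgdata     # placeholder subpath
    isSubPathExpr: false
```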
@ -621,7 +638,7 @@ the global configuration before adding the `tls` section'.
## Change data capture streams ## Change data capture streams
This section enables change data capture (CDC) streams via Postgres' This section enables change data capture (CDC) streams via Postgres'
[logical decoding](https://www.postgresql.org/docs/16/logicaldecoding.html) [logical decoding](https://www.postgresql.org/docs/17/logicaldecoding.html)
feature and `pgoutput` plugin. While the Postgres operator takes responsibility feature and `pgoutput` plugin. While the Postgres operator takes responsibility
for providing the setup to publish change events, it relies on external tools for providing the setup to publish change events, it relies on external tools
to consume them. At Zalando, we are using a workflow based on to consume them. At Zalando, we are using a workflow based on
@ -635,11 +652,11 @@ can have the following properties:
* **applicationId**
The application name to which the database and CDC stream belong. For each
set of streams with a distinct `applicationId` a separate stream resource as
well as a separate logical replication slot will be created. This means there
can be different streams in the same database and streams with the same
`applicationId` are bundled in one stream resource. The stream resource will
be named after the Postgres cluster plus a "-<applicationId>" suffix. Required.
* **database**
Name of the database from where events will be published via Postgres'
@ -650,21 +667,37 @@ can have the following properties:
* **tables**
Defines a map of table names and their properties (`eventType`, `idColumn`
and `payloadColumn`). Required.
The CDC operator follows the [outbox pattern](https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/).
The application is responsible for putting events into a (JSON/B or VARCHAR)
payload column of the outbox table in the structure of the specified target
event type. The operator will create a [PUBLICATION](https://www.postgresql.org/docs/17/logical-replication-publication.html)
in Postgres for all tables specified for one `database` and `applicationId`.
The CDC operator will consume from it shortly after transactions are
committed to the outbox table. The `idColumn` will be used in telemetry for
the CDC operator. The names for `idColumn` and `payloadColumn` can be
configured. Defaults are `id` and `payload`. The target `eventType` has to
be defined. One can also specify a `recoveryEventType` that will be used
for a dead letter queue. By enabling `ignoreRecovery`, you can choose to
ignore failing events.
* **filter**
Streamed events can be filtered by a jsonpath expression for each table.
Optional.
* **enableRecovery**
Flag to enable dead letter queue recovery for all stream tables.
Alternatively, recovery can also be enabled for single outbox tables by only
specifying a `recoveryEventType` and no `enableRecovery` flag. When set to
false or missing, events will be retried until consuming succeeds. You can
use a `filter` expression to get rid of poison pills. Optional.
* **batchSize**
Defines the size of batches in which events are consumed. Optional.
Defaults to 1.
* **cpu**
CPU requests to be set as an annotation on the stream resource. Optional.
* **memory**
Memory requests to be set as an annotation on the stream resource. Optional.
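Putting the properties above together, a hedged `streams` sketch (application id, database, table and event names are placeholders; the structure mirrors the manifest used in the e2e tests further below):
```yaml
spec:
  streams:
    - applicationId: my-app
      database: foo
      batchSize: 100
      enableRecovery: true
      cpu: 250m                        # set as annotation on the stream resource
      memory: 500Mi
      tables:
        outbox_table:                  # outbox table name is the key
          eventType: my-event
          idColumn: id
          payloadColumn: payload
          recoveryEventType: my-event-dlq   # target for the dead letter queue
```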


@ -94,9 +94,6 @@ Those are top-level keys, containing both leaf keys and groups.
* **enable_pgversion_env_var**
With newer versions of Spilo, it is preferable to use `PGVERSION` pod environment variable instead of the setting `postgresql.bin_dir` in the `SPILO_CONFIGURATION` env variable. When this option is true, the operator sets `PGVERSION` and omits `postgresql.bin_dir` from `SPILO_CONFIGURATION`. When false, the `postgresql.bin_dir` is set. This setting takes precedence over `PGVERSION`; see PR 222 in Spilo. The default is `true`.
* **enable_team_id_clustername_prefix**
To lower the risk of name clashes between clusters of different teams you
can turn on this flag and the operator will sync only clusters where the
@ -110,8 +107,13 @@ Those are top-level keys, containing both leaf keys and groups.
* **kubernetes_use_configmaps**
Select if setup uses endpoints (default), or configmaps to manage leader when
DCS is kubernetes (not etcd or similar). In OpenShift it is not possible to
use the endpoints option, and configmaps are required. Starting with K8s 1.33,
endpoints are marked as deprecated. It is recommended to switch to config maps
instead, but before doing so make sure you scale the Postgres cluster down to
just one primary pod (e.g. using the `max_instances` option). Otherwise, you
risk running into a split-brain scenario.
By default, `kubernetes_use_configmaps: false`, meaning endpoints will be used.
Starting from v1.16.0 the default will be changed to `true`. A configuration
sketch follows after this list.
* **docker_image**
Spilo Docker image for Postgres instances. For production, don't rely on the
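As referenced above, a hedged sketch of switching to configmap-based leader election via the operator's ConfigMap (the ConfigMap name is an assumption; in the CRD-based configuration the same key sits among the top-level general options):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator           # assumed name of the operator ConfigMap
data:
  # scale clusters down to a single primary pod before flipping this switch
  kubernetes_use_configmaps: "true"
```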
@ -212,7 +214,7 @@ under the `users` key.
For all `LOGIN` roles that are not database owners the operator can rotate
credentials in the corresponding K8s secrets by replacing the username and
password. This means, new users will be added on each rotation inheriting
all privileges from the original roles. The rotation date (in YYMMDD format)
is appended to the names of the new user. The timestamp of the next rotation
is written to the secret. The default is `false`.
@ -242,7 +244,7 @@ CRD-configuration, they are grouped under the `major_version_upgrade` key.
`"manual"` = manifest triggers action, `"manual"` = manifest triggers action,
`"full"` = manifest and minimal version violation trigger upgrade. `"full"` = manifest and minimal version violation trigger upgrade.
Note, that with all three modes increasing the version in the manifest will Note, that with all three modes increasing the version in the manifest will
trigger a rolling update of the pods. The default is `"off"`. trigger a rolling update of the pods. The default is `"manual"`.
* **major_version_upgrade_team_allow_list** * **major_version_upgrade_team_allow_list**
Upgrades will only be carried out for clusters of listed teams when mode is Upgrades will only be carried out for clusters of listed teams when mode is
@ -250,12 +252,12 @@ CRD-configuration, they are grouped under the `major_version_upgrade` key.
* **minimal_major_version**
The minimal Postgres major version that will not automatically be upgraded
when `major_version_upgrade_mode` is set to `"full"`. The default is `"13"`.
* **target_major_version**
The target Postgres major version when upgrading clusters automatically
which violate the configured allowed `minimal_major_version` when
`major_version_upgrade_mode` is set to `"full"`. The default is `"17"`.
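A hedged ConfigMap-style sketch combining the upgrade settings above (illustrative values; the CRD-based configuration groups them under the `major_version_upgrade` key):
```yaml
data:
  major_version_upgrade_mode: "full"   # "off", "manual" (default) or "full"
  minimal_major_version: "13"          # clusters below this are upgraded in "full" mode
  target_major_version: "17"           # version such clusters are upgraded to
```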
## Kubernetes resources
@ -263,6 +265,31 @@ Parameters to configure cluster-related Kubernetes objects created by the
operator, as well as some timeouts associated with them. In a CRD-based
configuration they are grouped under the `kubernetes` key.
* **enable_finalizers**
By default, a deletion of the Postgresql resource will trigger an event
that leads to a cleanup of all child resources. However, if the database
cluster is in a broken state (e.g. failed initialization) and the operator
cannot fully sync it, there can be leftovers. By enabling finalizers the
operator will ensure all managed resources are deleted prior to the
Postgresql resource. See also [admin docs](../administrator.md#owner-references-and-finalizers)
for more information. The default is `false`.
* **enable_owner_references**
The operator can set owner references on its child resources (except PVCs,
Patroni config service/endpoint, cross-namespace secrets) to improve cluster
monitoring and enable cascading deletion. The default is `false`. Warning,
enabling this option disables configured delete protection checks (see below).
* **delete_annotation_date_key**
key name for annotation that compares manifest value with current date in the
YYYY-MM-DD format. Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`.
The default is empty which also disables this delete protection check.
* **delete_annotation_name_key**
key name for annotation that compares manifest value with Postgres cluster name.
Allowed pattern: `'([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'`. The default is
empty which also disables this delete protection check. A combined sketch of
these deletion-related options follows after this list.
* **pod_service_account_name**
service account used by Patroni running on individual Pods to communicate
with the operator. Required even if native Kubernetes support in Patroni is
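As referenced above, a hedged ConfigMap sketch of the deletion-related options (the annotation key values mirror those used in the e2e tests further below):
```yaml
data:
  enable_finalizers: "true"                          # block deletion until child resources are gone
  enable_owner_references: "true"                    # cascading deletion; disables the checks below
  delete_annotation_date_key: "delete-date"          # manifest must carry today's date ...
  delete_annotation_name_key: "delete-clustername"   # ... and its own cluster name to allow deletion
```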
@ -293,16 +320,6 @@ configuration they are grouped under the `kubernetes` key.
of a database created by the operator. If the annotation key is also provided
by the database definition, the database definition value is used.
* **downscaler_annotations**
An array of annotations that should be passed from Postgres CRD on to the
statefulset and, if exists, to the connection pooler deployment as well.
@ -322,29 +339,15 @@ configuration they are grouped under the `kubernetes` key.
pod namespace).
* **pdb_name_format**
defines the template for primary PDB (Pod Disruption Budget) name created by the
operator. The default is `postgres-{cluster}-pdb`, where `{cluster}` is
replaced by the cluster name. Only the `{cluster}` placeholder is allowed in
the template.
* **pdb_master_label_selector**
By default the primary PDB will match the master role hence preventing nodes from
being drained if the node_readiness_label is not used. If this option is set to
`false` the `spilo-role=master` selector will not be added to the PDB.
* **persistent_volume_claim_retention_policy**
The operator tries to protect volumes as much as possible. If somebody
@ -360,8 +363,12 @@ configuration they are grouped under the `kubernetes` key.
`"retain"` - or `when_scaled` - default is also `"retain"`. The other possible `"retain"` - or `when_scaled` - default is also `"retain"`. The other possible
option is `delete`. option is `delete`.
* **enable_secrets_deletion**
By default, the operator deletes secrets when removing the Postgres cluster
manifest. To keep secrets, set this option to `false`. The default is `true`.
* **enable_persistent_volume_claim_deletion**
By default, the operator deletes persistent volume claims when removing the
Postgres cluster manifest, no matter if `persistent_volume_claim_retention_policy`
on the statefulset is set to `retain`. To keep PVCs set this option to `false`.
The default is `true`.
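A hedged ConfigMap sketch for keeping secrets and PVCs around after cluster deletion (the same flags appear in the e2e tests further below):
```yaml
data:
  enable_secrets_deletion: "false"                   # keep credential secrets
  enable_persistent_volume_claim_deletion: "false"   # keep PVCs
```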
@ -550,7 +557,7 @@ configuration they are grouped under the `kubernetes` key.
pods with `InitialDelaySeconds: 6`, `PeriodSeconds: 10`, `TimeoutSeconds: 5`,
`SuccessThreshold: 1` and `FailureThreshold: 3`. When enabling readiness
probes it is recommended to switch the `pod_management_policy` to `parallel`
to avoid unnecessary waiting times in case of multiple instances failing.
The default is `false`.
* **storage_resize_mode**
@ -699,7 +706,7 @@ In the CRD-based configuration they are grouped under the `load_balancer` key.
replaced by the cluster name, `{namespace}` is replaced with the namespace
and `{hostedzone}` is replaced with the hosted zone (the value of the
`db_hosted_zone` parameter). The `{team}` placeholder can still be used,
although it is not recommended because the team of a cluster can change.
If the cluster name starts with the `teamId` it will also be part of the
DNS anyway. No other placeholders are allowed!
@ -718,7 +725,7 @@ In the CRD-based configuration they are grouped under the `load_balancer` key.
is replaced by the cluster name, `{namespace}` is replaced with the
namespace and `{hostedzone}` is replaced with the hosted zone (the value of
the `db_hosted_zone` parameter). The `{team}` placeholder can still be used,
although it is not recommended because the team of a cluster can change.
If the cluster name starts with the `teamId` it will also be part of the
DNS anyway. No other placeholders are allowed!
@ -813,11 +820,11 @@ grouped under the `logical_backup` key.
default values from `postgres_pod_resources` will be used.
* **logical_backup_docker_image**
An image for pods of the logical backup job. The [example image](https://github.com/zalando/postgres-operator/blob/master/logical-backup/Dockerfile)
runs `pg_dumpall` on a replica if possible and uploads compressed results to
an S3 bucket under the key `/<configured-s3-bucket-prefix>/<pg_cluster_name>/<cluster_k8s_uuid>/logical_backups`.
The default image is the same image built with the Zalando-internal CI
pipeline. Default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.13.0"
* **logical_backup_google_application_credentials**
Specifies the path of the google cloud service account json file. Default is empty.
@ -845,6 +852,9 @@ grouped under the `logical_backup` key.
S3 bucket to store backup results. The bucket has to be present and
accessible by Postgres pods. Default: empty.
* **logical_backup_s3_bucket_prefix**
S3 bucket prefix to use in configured bucket. Default: "spilo"
* **logical_backup_s3_endpoint**
When using non-AWS S3 storage, endpoint can be set as an ENV variable. The default is empty.
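A hedged sketch combining the logical backup settings above in ConfigMap form (bucket name and endpoint are placeholders):
```yaml
data:
  logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.13.0"
  logical_backup_s3_bucket: "my-backup-bucket"
  logical_backup_s3_bucket_prefix: "spilo"               # default prefix
  logical_backup_s3_endpoint: "https://s3.example.org"   # only needed for non-AWS S3
```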


@ -30,7 +30,7 @@ spec:
databases:
foo: zalando
postgresql:
version: "17"
```
Once you cloned the Postgres Operator [repository](https://github.com/zalando/postgres-operator)
@ -109,7 +109,7 @@ metadata:
spec:
[...]
postgresql:
version: "17"
parameters:
password_encryption: scram-sha-256
```
@ -517,7 +517,7 @@ Postgres Operator will create the following NOLOGIN roles:
The `<dbname>_owner` role is the database owner and should be used when creating
new database objects. All members of the `admin` role, e.g. teams API roles, can
become the owner with the `SET ROLE` command. [Default privileges](https://www.postgresql.org/docs/17/sql-alterdefaultprivileges.html)
are configured for the owner role so that the `<dbname>_reader` role
automatically gets read-access (SELECT) to new tables and sequences and the
`<dbname>_writer` receives write-access (INSERT, UPDATE, DELETE on tables,
@ -594,7 +594,7 @@ spec:
### Schema `search_path` for default roles
The schema [`search_path`](https://www.postgresql.org/docs/17/ddl-schemas.html#DDL-SCHEMAS-PATH)
for each role will include the role name and the schemas this role should have
access to. So `foo_bar_writer` does not have to schema-qualify tables from
schemas `foo_bar_writer, bar`, while `foo_writer` can look up `foo_writer` and
@ -695,7 +695,7 @@ handle it.
### HugePages support
The operator supports [HugePages](https://www.postgresql.org/docs/17/kernel-resources.html#LINUX-HUGEPAGES).
To enable HugePages, set the matching resource requests and/or limits in the manifest:
```yaml
@ -758,7 +758,7 @@ If you need to define a `nodeAffinity` for all your Postgres clusters use the
## In-place major version upgrade
Starting with Spilo 13, the operator supports in-place major version upgrade to a
higher major version (e.g. from PG 14 to PG 16). To trigger the upgrade,
simply increase the version in the manifest. It is your responsibility to test
your applications against the new version before the upgrade; downgrading is
not supported. The easiest way to do so is to try the upgrade on the cloned
@ -838,7 +838,7 @@ spec:
### Clone directly
Another way to get a fresh copy of your source DB cluster is via
[pg_basebackup](https://www.postgresql.org/docs/17/app-pgbasebackup.html). To
use this feature simply leave out the timestamp field from the clone section.
The operator will connect to the service of the source cluster by name. If the
cluster is called test, then the connection string will look like host=test
@ -900,7 +900,7 @@ the PostgreSQL version between source and target cluster has to be the same.
To start a cluster as standby, add the following `standby` section in the YAML
file. You can stream changes from archived WAL files (AWS S3 or Google Cloud
Storage) or from a remote primary. Only one option can be specified in the
manifest:
```yaml
@ -911,7 +911,7 @@ spec:
For GCS, you have to define STANDBY_GOOGLE_APPLICATION_CREDENTIALS as a
[custom pod environment variable](administrator.md#custom-pod-environment-variables).
It is not set from the config to allow for overriding.
```yaml
spec:
@ -1005,6 +1005,7 @@ spec:
env:
- name: "ENV_VAR_NAME"
value: "any-k8s-env-things"
command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
```
In addition to any environment variables you specify, the following environment
@ -1281,7 +1282,7 @@ minutes if the certificates have changed and reloads postgres accordingly.
### TLS certificates for connection pooler
By default, the pgBouncer image generates its own TLS certificate like Spilo.
When the `tls` section is specified in the manifest it will be used for the
connection pooler pod(s) as well. The security context options are hard coded
to `runAsUser: 100` and `runAsGroup: 101`. The `fsGroup` will be the same
as for Spilo.


@ -15,7 +15,7 @@ RUN apt-get update \
curl \
vim \
&& pip3 install --no-cache-dir -r requirements.txt \
&& curl -LO https://dl.k8s.io/release/v1.32.9/bin/linux/amd64/kubectl \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl \
&& apt-get clean \


@ -46,7 +46,7 @@ tools:
# install pinned version of 'kind'
# go install must run outside of a dir with a (module-based) Go project !
# otherwise go install updates project's dependencies and/or behaves differently
cd "/tmp" && GO111MODULE=on go install sigs.k8s.io/kind@v0.24.0
e2etest: tools copy clean
./run.sh main


@ -8,7 +8,7 @@ IFS=$'\n\t'
readonly cluster_name="postgres-operator-e2e-tests"
readonly kubeconfig_path="/tmp/kind-config-${cluster_name}"
readonly spilo_image="registry.opensource.zalan.do/acid/spilo-17-e2e:0.3"
readonly e2e_test_runner_image="registry.opensource.zalan.do/acid/postgres-operator-e2e-tests-runner:0.4"
export GOPATH=${GOPATH-~/go}


@ -20,6 +20,7 @@ class K8sApi:
self.config = config.load_kube_config()
self.k8s_client = client.ApiClient()
self.rbac_api = client.RbacAuthorizationV1Api()
self.core_v1 = client.CoreV1Api()
self.apps_v1 = client.AppsV1Api()
@ -217,7 +218,6 @@ class K8s:
pod_phase = 'Failing over'
new_pod_node = ''
pods_with_update_flag = self.count_pods_with_rolling_update_flag(labels, namespace)
while (pod_phase != 'Running') or (new_pod_node not in failover_targets):
pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
if pods:
@ -524,7 +524,6 @@ class K8sBase:
pod_phase = 'Failing over'
new_pod_node = ''
pods_with_update_flag = self.count_pods_with_rolling_update_flag(labels, namespace)
while (pod_phase != 'Running') or (new_pod_node not in failover_targets):
pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
if pods:


@ -12,9 +12,9 @@ from kubernetes import client
from tests.k8s_api import K8s
from kubernetes.client.rest import ApiException
SPILO_CURRENT = "registry.opensource.zalan.do/acid/spilo-17-e2e:0.3"
SPILO_LAZY = "registry.opensource.zalan.do/acid/spilo-17-e2e:0.4"
SPILO_FULL_IMAGE = "ghcr.io/zalando/spilo-17:4.0-p3"
def to_selector(labels):
return ",".join(["=".join(lbl) for lbl in labels.items()])
@ -95,7 +95,7 @@ class EndToEndTestCase(unittest.TestCase):
print("Failed to delete the 'standard' storage class: {0}".format(e)) print("Failed to delete the 'standard' storage class: {0}".format(e))
# operator deploys pod service account there on start up # operator deploys pod service account there on start up
# needed for test_multi_namespace_support() # needed for test_multi_namespace_support and test_owner_references
cls.test_namespace = "test" cls.test_namespace = "test"
try: try:
v1_namespace = client.V1Namespace(metadata=client.V1ObjectMeta(name=cls.test_namespace)) v1_namespace = client.V1Namespace(metadata=client.V1ObjectMeta(name=cls.test_namespace))
@ -115,6 +115,7 @@ class EndToEndTestCase(unittest.TestCase):
configmap = yaml.safe_load(f)
configmap["data"]["workers"] = "1"
configmap["data"]["docker_image"] = SPILO_CURRENT
configmap["data"]["major_version_upgrade_mode"] = "full"
with open("manifests/configmap.yaml", 'w') as f:
yaml.dump(configmap, f, Dumper=yaml.Dumper)
@ -129,7 +130,8 @@ class EndToEndTestCase(unittest.TestCase):
"infrastructure-roles.yaml", "infrastructure-roles.yaml",
"infrastructure-roles-new.yaml", "infrastructure-roles-new.yaml",
"custom-team-membership.yaml", "custom-team-membership.yaml",
"e2e-storage-class.yaml"]: "e2e-storage-class.yaml",
"fes.crd.yaml"]:
result = k8s.create_with_kubectl("manifests/" + filename) result = k8s.create_with_kubectl("manifests/" + filename)
print("stdout: {}, stderr: {}".format(result.stdout, result.stderr)) print("stdout: {}, stderr: {}".format(result.stdout, result.stderr))
@ -199,6 +201,7 @@ class EndToEndTestCase(unittest.TestCase):
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", owner_query)), 3,
"Not all additional users found in database", 10, 5)
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
def test_additional_pod_capabilities(self):
'''
@ -909,6 +912,19 @@ class EndToEndTestCase(unittest.TestCase):
'''
k8s = self.k8s
try:
patch_config_ignored_annotations = {
"data": {
"ignored_annotations": "k8s-status",
}
}
k8s.update_config(patch_config_ignored_annotations)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
sts = k8s.api.apps_v1.read_namespaced_stateful_set('acid-minimal-cluster', 'default')
svc = k8s.api.core_v1.read_namespaced_service('acid-minimal-cluster', 'default')
annotation_patch = {
"metadata": {
"annotations": {
@ -917,20 +933,12 @@ class EndToEndTestCase(unittest.TestCase):
}
}
old_sts_creation_timestamp = sts.metadata.creation_timestamp
k8s.api.apps_v1.patch_namespaced_stateful_set(sts.metadata.name, sts.metadata.namespace, annotation_patch)
old_svc_creation_timestamp = svc.metadata.creation_timestamp
k8s.api.core_v1.patch_namespaced_service(svc.metadata.name, svc.metadata.namespace, annotation_patch)
k8s.delete_operator_pod()
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
sts = k8s.api.apps_v1.read_namespaced_stateful_set('acid-minimal-cluster', 'default')
@ -1174,31 +1182,143 @@ class EndToEndTestCase(unittest.TestCase):
self.eventuallyEqual(lambda: len(k8s.get_patroni_running_members("acid-minimal-cluster-0")), 2, "Postgres status did not enter running")
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
def test_major_version_upgrade(self):
"""
Test major version upgrade: with full upgrade, maintenance window, and annotation
"""
def check_version():
p = k8s.patroni_rest("acid-upgrade-test-0", "") or {}
version = p.get("server_version", 0) // 10000
return version
def get_annotations():
pg_manifest = k8s.api.custom_objects_api.get_namespaced_custom_object(
"acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test")
annotations = pg_manifest["metadata"]["annotations"]
return annotations
k8s = self.k8s
cluster_label = 'application=spilo,cluster-name=acid-upgrade-test'
with open("manifests/minimal-postgres-lowest-version-manifest.yaml", 'r+') as f:
upgrade_manifest = yaml.safe_load(f)
upgrade_manifest["spec"]["dockerImage"] = SPILO_FULL_IMAGE
with open("manifests/minimal-postgres-lowest-version-manifest.yaml", 'w') as f:
yaml.dump(upgrade_manifest, f, Dumper=yaml.Dumper)
k8s.create_with_kubectl("manifests/minimal-postgres-lowest-version-manifest.yaml")
self.eventuallyEqual(lambda: k8s.count_running_pods(labels=cluster_label), 2, "No 2 pods running")
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
self.eventuallyEqual(check_version, 13, "Version is not correct")
master_nodes, _ = k8s.get_cluster_nodes(cluster_labels=cluster_label)
# should upgrade immediately
pg_patch_version_14 = {
"spec": {
"postgresql": {
"version": "14"
}
}
}
k8s.api.custom_objects_api.patch_namespaced_custom_object(
"acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_14)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
k8s.wait_for_pod_failover(master_nodes, 'spilo-role=replica,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
self.eventuallyEqual(check_version, 14, "Version should be upgraded from 13 to 14")
# check if annotation for last upgrade's success is set
annotations = get_annotations()
self.assertIsNotNone(annotations.get("last-major-upgrade-success"), "Annotation for last upgrade's success is not set")
# should not upgrade because current time is not in maintenanceWindow
current_time = datetime.now()
maintenance_window_future = f"{(current_time+timedelta(minutes=60)).strftime('%H:%M')}-{(current_time+timedelta(minutes=120)).strftime('%H:%M')}"
pg_patch_version_15_outside_mw = {
"spec": {
"postgresql": {
"version": "15"
},
"maintenanceWindows": [
maintenance_window_future
]
}
}
k8s.api.custom_objects_api.patch_namespaced_custom_object(
"acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_15_outside_mw)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
# no pod replacement outside of the maintenance window
k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
self.eventuallyEqual(check_version, 14, "Version should not be upgraded")
second_annotations = get_annotations()
self.assertIsNone(second_annotations.get("last-major-upgrade-failure"), "Annotation for last upgrade's failure should not be set")
# change maintenanceWindows to current
maintenance_window_current = f"{(current_time-timedelta(minutes=30)).strftime('%H:%M')}-{(current_time+timedelta(minutes=30)).strftime('%H:%M')}"
pg_patch_version_15_in_mw = {
"spec": {
"postgresql": {
"version": "15"
},
"maintenanceWindows": [
maintenance_window_current
]
}
}
k8s.api.custom_objects_api.patch_namespaced_custom_object(
"acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_15_in_mw)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
k8s.wait_for_pod_failover(master_nodes, 'spilo-role=master,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
self.eventuallyEqual(check_version, 15, "Version should be upgraded from 14 to 15")
# check if annotation for last upgrade's success is updated after second upgrade
third_annotations = get_annotations()
self.assertIsNotNone(third_annotations.get("last-major-upgrade-success"), "Annotation for last upgrade's success is not set")
self.assertNotEqual(annotations.get("last-major-upgrade-success"), third_annotations.get("last-major-upgrade-success"), "Annotation for last upgrade's success is not updated")
# test upgrade with failed upgrade annotation
pg_patch_version_17 = {
"metadata": {
"annotations": {
"last-major-upgrade-failure": "2024-01-02T15:04:05Z"
},
},
"spec": {
"postgresql": {
"version": "17"
},
},
}
k8s.api.custom_objects_api.patch_namespaced_custom_object(
"acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_17)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
k8s.wait_for_pod_failover(master_nodes, 'spilo-role=replica,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
self.eventuallyEqual(check_version, 15, "Version should not be upgraded because annotation for last upgrade's failure is set")
# change the version back to 15, which should remove the failure annotation
k8s.api.custom_objects_api.patch_namespaced_custom_object(
"acid.zalan.do", "v1", "default", "postgresqls", "acid-upgrade-test", pg_patch_version_15_in_mw)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
k8s.wait_for_pod_start('spilo-role=master,' + cluster_label)
k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)
self.eventuallyEqual(check_version, 15, "Version should not be upgraded from 15")
fourth_annotations = get_annotations()
self.assertIsNone(fourth_annotations.get("last-major-upgrade-failure"), "Annotation for last upgrade's failure is not removed")
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
def test_persistent_volume_claim_retention_policy(self):
@ -1347,17 +1467,11 @@ class EndToEndTestCase(unittest.TestCase):
k8s.wait_for_pod_start("spilo-role=master", self.test_namespace) k8s.wait_for_pod_start("spilo-role=master", self.test_namespace)
k8s.wait_for_pod_start("spilo-role=replica", self.test_namespace) k8s.wait_for_pod_start("spilo-role=replica", self.test_namespace)
self.assert_master_is_unique(self.test_namespace, "acid-test-cluster") self.assert_master_is_unique(self.test_namespace, "acid-test-cluster")
# acid-test-cluster will be deleted in test_owner_references test
except timeout_decorator.TimeoutError: except timeout_decorator.TimeoutError:
print('Operator log: {}'.format(k8s.get_operator_log())) print('Operator log: {}'.format(k8s.get_operator_log()))
raise raise
finally:
# delete the new cluster so that the k8s_api.get_operator_state works correctly in subsequent tests
# ideally we should delete the 'test' namespace here but
# the pods inside the namespace stuck in the Terminating state making the test time out
k8s.api.custom_objects_api.delete_namespaced_custom_object(
"acid.zalan.do", "v1", self.test_namespace, "postgresqls", "acid-test-cluster")
time.sleep(5)
@timeout_decorator.timeout(TEST_TIMEOUT_SEC) @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
@unittest.skip("Skipping this test until fixed") @unittest.skip("Skipping this test until fixed")
@ -1568,15 +1682,83 @@ class EndToEndTestCase(unittest.TestCase):
self.eventuallyEqual(lambda: k8s.count_running_pods("connection-pooler="+pooler_name), self.eventuallyEqual(lambda: k8s.count_running_pods("connection-pooler="+pooler_name),
0, "Pooler pods not scaled down") 0, "Pooler pods not scaled down")
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
def test_owner_references(self):
'''
Enable owner references, test if resources get updated and test cascade deletion of test cluster.
'''
k8s = self.k8s
cluster_name = 'acid-test-cluster'
cluster_label = 'application=spilo,cluster-name={}'.format(cluster_name)
default_test_cluster = 'acid-minimal-cluster'
try:
# enable owner references in config
enable_owner_refs = {
"data": {
"enable_owner_references": "true"
}
}
k8s.update_config(enable_owner_refs)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
time.sleep(5) # wait for the operator to sync the cluster and update resources
# check if child resources were updated with owner references
self.assertTrue(self.check_cluster_child_resources_owner_references(cluster_name, self.test_namespace), "Owner references not set on all child resources of {}".format(cluster_name))
self.assertTrue(self.check_cluster_child_resources_owner_references(default_test_cluster), "Owner references not set on all child resources of {}".format(default_test_cluster))
# delete the new cluster to test owner references
# and also to make k8s_api.get_operator_state work better in subsequent tests
# ideally we should delete the 'test' namespace here but the pods
# inside the namespace stuck in the Terminating state making the test time out
k8s.api.custom_objects_api.delete_namespaced_custom_object(
"acid.zalan.do", "v1", self.test_namespace, "postgresqls", cluster_name)
# child resources with owner references should be deleted via owner references
self.eventuallyEqual(lambda: k8s.count_pods_with_label(cluster_label), 0, "Pods not deleted")
self.eventuallyEqual(lambda: k8s.count_statefulsets_with_label(cluster_label), 0, "Statefulset not deleted")
self.eventuallyEqual(lambda: k8s.count_services_with_label(cluster_label), 0, "Services not deleted")
self.eventuallyEqual(lambda: k8s.count_endpoints_with_label(cluster_label), 0, "Endpoints not deleted")
self.eventuallyEqual(lambda: k8s.count_pdbs_with_label(cluster_label), 0, "Pod disruption budget not deleted")
self.eventuallyEqual(lambda: k8s.count_secrets_with_label(cluster_label), 0, "Secrets were not deleted")
time.sleep(5) # wait for the operator to also delete the PVCs
# pvcs do not have an owner reference but will be deleted by the operator almost immediately
self.eventuallyEqual(lambda: k8s.count_pvcs_with_label(cluster_label), 0, "PVCs not deleted")
# disable owner references in config
disable_owner_refs = {
"data": {
"enable_owner_references": "false"
}
}
k8s.update_config(disable_owner_refs)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
time.sleep(5) # wait for the operator to remove owner references
# check if child resources were updated without Postgresql owner references
self.assertTrue(self.check_cluster_child_resources_owner_references(default_test_cluster, "default", True), "Owner references still present on some child resources of {}".format(default_test_cluster))
except timeout_decorator.TimeoutError:
print('Operator log: {}'.format(k8s.get_operator_log()))
raise
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
def test_password_rotation(self):
'''
Test password rotation and removal of users due to retention policy
'''
k8s = self.k8s
cluster_label = 'application=spilo,cluster-name=acid-minimal-cluster'
leader = k8s.get_cluster_leader_pod()
today = date.today()
# remember number of secrets to make sure it stays the same
secret_count = k8s.count_secrets_with_label(cluster_label)
# enable password rotation for owner of foo database
pg_patch_rotation_single_users = {
"spec": {
@ -1632,6 +1814,7 @@ class EndToEndTestCase(unittest.TestCase):
enable_password_rotation = {
"data": {
"enable_password_rotation": "true",
"inherited_annotations": "environment",
"password_rotation_interval": "30",
"password_rotation_user_retention": "30", # should be set to 60
},
@ -1678,13 +1861,29 @@ class EndToEndTestCase(unittest.TestCase):
self.eventuallyEqual(lambda: len(self.query_database_with_user(leader.metadata.name, "postgres", "SELECT 1", "foo_user")), 1,
"Could not connect to the database with rotation user {}".format(rotation_user), 10, 5)
# add annotation which triggers syncSecrets call
pg_annotation_patch = {
"metadata": {
"annotations": {
"environment": "test",
}
}
}
k8s.api.custom_objects_api.patch_namespaced_custom_object(
"acid.zalan.do", "v1", "default", "postgresqls", "acid-minimal-cluster", pg_annotation_patch)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
time.sleep(10)
self.eventuallyEqual(lambda: k8s.count_secrets_with_label(cluster_label), secret_count, "Unexpected number of secrets")
# check if rotation has been ignored for user from test_cross_namespace_secrets test
db_user_secret = k8s.get_secret(username="test.db_user", namespace="test")
secret_username = str(base64.b64decode(db_user_secret.data["username"]), 'utf-8')
self.assertEqual("test.db_user", secret_username,
"Unexpected username in secret of test.db_user: expected {}, got {}".format("test.db_user", secret_username))
# check if annotation for secret has been updated
self.assertTrue("environment" in db_user_secret.metadata.annotations, "Added annotation was not propagated to secret")
# disable password rotation for all other users (foo_user)
# and pick smaller intervals to see if the third fake rotation user is dropped
enable_password_rotation = {
@ -1766,7 +1965,6 @@ class EndToEndTestCase(unittest.TestCase):
replica = k8s.get_cluster_replica_pod()
self.assertTrue(replica.metadata.creation_timestamp > old_creation_timestamp, "Old master pod was not recreated")
except timeout_decorator.TimeoutError:
print('Operator log: {}'.format(k8s.get_operator_log()))
raise
@ -1923,7 +2121,7 @@ class EndToEndTestCase(unittest.TestCase):
patch_sset_propagate_annotations = {
"data": {
"downscaler_annotations": "deployment-time,downscaler/*",
"inherited_annotations": "environment,owned-by",
}
}
k8s.update_config(patch_sset_propagate_annotations)
@ -1984,6 +2182,157 @@ class EndToEndTestCase(unittest.TestCase):
"acid.zalan.do", "v1", "default", "postgresqls", "acid-standby-cluster") "acid.zalan.do", "v1", "default", "postgresqls", "acid-standby-cluster")
time.sleep(5) time.sleep(5)
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
def test_stream_resources(self):
'''
Create and delete fabric event streaming resources.
'''
k8s = self.k8s
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"},
"Operator does not get in sync")
leader = k8s.get_cluster_leader_pod()
# patch ClusterRole with CRUD privileges on FES resources
cluster_role = k8s.api.rbac_api.read_cluster_role("postgres-operator")
fes_cluster_role_rule = client.V1PolicyRule(
api_groups=["zalando.org"],
resources=["fabriceventstreams"],
verbs=["create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"]
)
cluster_role.rules.append(fes_cluster_role_rule)
try:
k8s.api.rbac_api.patch_cluster_role("postgres-operator", cluster_role)
# create a table in one of the databases of acid-minimal-cluster
create_stream_table = """
CREATE TABLE test_table (id int, payload jsonb);
"""
self.query_database(leader.metadata.name, "foo", create_stream_table)
# update the manifest with the streams section
patch_streaming_config = {
"spec": {
"patroni": {
"slots": {
"manual_slot": {
"type": "physical"
}
}
},
"streams": [
{
"applicationId": "test-app",
"batchSize": 100,
"cpu": "100m",
"memory": "200Mi",
"database": "foo",
"enableRecovery": True,
"tables": {
"test_table": {
"eventType": "test-event",
"idColumn": "id",
"payloadColumn": "payload",
"recoveryEventType": "test-event-dlq"
}
}
},
{
"applicationId": "test-app2",
"batchSize": 100,
"database": "foo",
"enableRecovery": True,
"tables": {
"test_non_exist_table": {
"eventType": "test-event",
"idColumn": "id",
"payloadColumn": "payload",
"ignoreRecovery": True
}
}
}
]
}
}
k8s.api.custom_objects_api.patch_namespaced_custom_object(
'acid.zalan.do', 'v1', 'default', 'postgresqls', 'acid-minimal-cluster', patch_streaming_config)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
# check if publication, slot, and fes resource are created
get_publication_query = """
SELECT * FROM pg_publication WHERE pubname = 'fes_foo_test_app';
"""
get_slot_query = """
SELECT * FROM pg_replication_slots WHERE slot_name = 'fes_foo_test_app';
"""
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query)), 1,
"Publication is not created", 10, 5)
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query)), 1,
"Replication slot is not created", 10, 5)
self.eventuallyEqual(lambda: len(k8s.api.custom_objects_api.list_namespaced_custom_object(
"zalando.org", "v1", "default", "fabriceventstreams", label_selector="cluster-name=acid-minimal-cluster")["items"]), 1,
"Could not find Fabric Event Stream resource", 10, 5)
# check if the non-existing table in the stream section does not create a publication and slot
get_publication_query_not_exist_table = """
SELECT * FROM pg_publication WHERE pubname = 'fes_foo_test_app2';
"""
get_slot_query_not_exist_table = """
SELECT * FROM pg_replication_slots WHERE slot_name = 'fes_foo_test_app2';
"""
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query_not_exist_table)), 0,
"Publication is created for non-existing tables", 10, 5)
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query_not_exist_table)), 0,
"Replication slot is created for non-existing tables", 10, 5)
# grant create and ownership of test_table to foo_user, reset search path to default
grant_permission_foo_user = """
GRANT CREATE ON DATABASE foo TO foo_user;
ALTER TABLE test_table OWNER TO foo_user;
ALTER ROLE foo_user RESET search_path;
"""
self.query_database(leader.metadata.name, "foo", grant_permission_foo_user)
# non-postgres user creates a publication
create_nonstream_publication = """
CREATE PUBLICATION mypublication FOR TABLE test_table;
"""
self.query_database_with_user(leader.metadata.name, "foo", create_nonstream_publication, "foo_user")
# remove the streams section from the manifest
patch_streaming_config_removal = {
"spec": {
"streams": []
}
}
k8s.api.custom_objects_api.patch_namespaced_custom_object(
'acid.zalan.do', 'v1', 'default', 'postgresqls', 'acid-minimal-cluster', patch_streaming_config_removal)
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"}, "Operator does not get in sync")
# check if publication, slot, and fes resource are removed
self.eventuallyEqual(lambda: len(k8s.api.custom_objects_api.list_namespaced_custom_object(
"zalando.org", "v1", "default", "fabriceventstreams", label_selector="cluster-name=acid-minimal-cluster")["items"]), 0,
'Could not delete Fabric Event Stream resource', 10, 5)
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_publication_query)), 0,
"Publication is not deleted", 10, 5)
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_slot_query)), 0,
"Replication slot is not deleted", 10, 5)
# check the manual_slot and mypublication should not get deleted
get_manual_slot_query = """
SELECT * FROM pg_replication_slots WHERE slot_name = 'manual_slot';
"""
get_nonstream_publication_query = """
SELECT * FROM pg_publication WHERE pubname = 'mypublication';
"""
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", get_manual_slot_query)), 1,
"Slot defined in patroni config is deleted", 10, 5)
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "foo", get_nonstream_publication_query)), 1,
"Publication defined not in stream section is deleted", 10, 5)
except timeout_decorator.TimeoutError:
print('Operator log: {}'.format(k8s.get_operator_log()))
raise
    @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
    def test_taint_based_eviction(self):
        '''
@@ -2049,6 +2398,7 @@ class EndToEndTestCase(unittest.TestCase):
            "data": {
                "delete_annotation_date_key": "delete-date",
                "delete_annotation_name_key": "delete-clustername",
+                "enable_secrets_deletion": "false",
                "enable_persistent_volume_claim_deletion": "false"
            }
        }
@@ -2109,7 +2459,7 @@ class EndToEndTestCase(unittest.TestCase):
            self.eventuallyEqual(lambda: k8s.count_statefulsets_with_label(cluster_label), 0, "Statefulset not deleted")
            self.eventuallyEqual(lambda: k8s.count_deployments_with_label(cluster_label), 0, "Deployments not deleted")
            self.eventuallyEqual(lambda: k8s.count_pdbs_with_label(cluster_label), 0, "Pod disruption budget not deleted")
-            self.eventuallyEqual(lambda: k8s.count_secrets_with_label(cluster_label), 0, "Secrets not deleted")
+            self.eventuallyEqual(lambda: k8s.count_secrets_with_label(cluster_label), 8, "Secrets were deleted although disabled in config")
            self.eventuallyEqual(lambda: k8s.count_pvcs_with_label(cluster_label), 3, "PVCs were deleted although disabled in config")
        except timeout_decorator.TimeoutError:
@@ -2196,6 +2546,46 @@ class EndToEndTestCase(unittest.TestCase):
        return True

    def check_cluster_child_resources_owner_references(self, cluster_name, cluster_namespace='default', inverse=False):
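        '''
        Assert that the cluster's child resources (statefulset, services, endpoints,
        pod disruption budgets, and credential secrets) carry a controller owner
        reference to the postgresql resource, or that they lack one when inverse=True.
        '''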
        k8s = self.k8s

        # check if child resources were updated with owner references
        sset = k8s.api.apps_v1.read_namespaced_stateful_set(cluster_name, cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(sset.metadata.owner_references, inverse), "statefulset owner reference check failed")

        svc = k8s.api.core_v1.read_namespaced_service(cluster_name, cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(svc.metadata.owner_references, inverse), "primary service owner reference check failed")
        replica_svc = k8s.api.core_v1.read_namespaced_service(cluster_name + "-repl", cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(replica_svc.metadata.owner_references, inverse), "replica service owner reference check failed")
        config_svc = k8s.api.core_v1.read_namespaced_service(cluster_name + "-config", cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(config_svc.metadata.owner_references, inverse), "config service owner reference check failed")

        ep = k8s.api.core_v1.read_namespaced_endpoints(cluster_name, cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(ep.metadata.owner_references, inverse), "primary endpoint owner reference check failed")
        replica_ep = k8s.api.core_v1.read_namespaced_endpoints(cluster_name + "-repl", cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(replica_ep.metadata.owner_references, inverse), "replica endpoint owner reference check failed")
        config_ep = k8s.api.core_v1.read_namespaced_endpoints(cluster_name + "-config", cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(config_ep.metadata.owner_references, inverse), "config endpoint owner reference check failed")

        pdb = k8s.api.policy_v1.read_namespaced_pod_disruption_budget("postgres-{}-pdb".format(cluster_name), cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(pdb.metadata.owner_references, inverse), "primary pod disruption budget owner reference check failed")
        pdb = k8s.api.policy_v1.read_namespaced_pod_disruption_budget("postgres-{}-critical-op-pdb".format(cluster_name), cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(pdb.metadata.owner_references, inverse), "pod disruption budget for critical operations owner reference check failed")

        pg_secret = k8s.api.core_v1.read_namespaced_secret("postgres.{}.credentials.postgresql.acid.zalan.do".format(cluster_name), cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(pg_secret.metadata.owner_references, inverse), "postgres secret owner reference check failed")
        standby_secret = k8s.api.core_v1.read_namespaced_secret("standby.{}.credentials.postgresql.acid.zalan.do".format(cluster_name), cluster_namespace)
        self.assertTrue(self.has_postgresql_owner_reference(standby_secret.metadata.owner_references, inverse), "standby secret owner reference check failed")

        return True

    def has_postgresql_owner_reference(self, owner_references, inverse):
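        # inverse=True asserts the opposite: no owner reference, or one that does not point to a postgresql resource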
        if inverse:
            return owner_references is None or owner_references[0].kind != 'postgresql'

        return owner_references is not None and owner_references[0].kind == 'postgresql' and owner_references[0].controller

    def list_databases(self, pod_name):
        '''
        Get list of databases we might want to iterate over

go.mod
@@ -1,70 +1,75 @@
module github.com/zalando/postgres-operator
-go 1.21
+go 1.25.3
require (
-	github.com/aws/aws-sdk-go v1.42.18
+	github.com/Masterminds/semver v1.5.0
+	github.com/aws/aws-sdk-go v1.55.8
	github.com/golang/mock v1.6.0
	github.com/lib/pq v1.10.9
	github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d
	github.com/pkg/errors v0.9.1
	github.com/r3labs/diff v1.1.0
-	github.com/sirupsen/logrus v1.9.0
+	github.com/sirupsen/logrus v1.9.3
-	github.com/stretchr/testify v1.8.2
+	github.com/stretchr/testify v1.11.1
-	golang.org/x/crypto v0.18.0
+	golang.org/x/crypto v0.43.0
-	golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3
	gopkg.in/yaml.v2 v2.4.0
-	k8s.io/api v0.28.7
+	k8s.io/api v0.32.9
	k8s.io/apiextensions-apiserver v0.25.9
-	k8s.io/apimachinery v0.28.7
+	k8s.io/apimachinery v0.32.9
-	k8s.io/client-go v0.28.7
+	k8s.io/client-go v0.32.9
	k8s.io/code-generator v0.25.9
)
require (
-	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
-	github.com/emicklei/go-restful/v3 v3.9.0 // indirect
+	github.com/emicklei/go-restful/v3 v3.11.0 // indirect
-	github.com/evanphx/json-patch v4.12.0+incompatible // indirect
+	github.com/fxamacker/cbor/v2 v2.7.0 // indirect
-	github.com/go-logr/logr v1.2.4 // indirect
+	github.com/go-logr/logr v1.4.2 // indirect
-	github.com/go-openapi/jsonpointer v0.19.6 // indirect
+	github.com/go-openapi/jsonpointer v0.21.0 // indirect
	github.com/go-openapi/jsonreference v0.20.2 // indirect
-	github.com/go-openapi/swag v0.22.3 // indirect
+	github.com/go-openapi/swag v0.23.0 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
-	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
+	github.com/golang/protobuf v1.5.4 // indirect
-	github.com/golang/protobuf v1.5.3 // indirect
+	github.com/google/gnostic-models v0.6.9 // indirect
-	github.com/google/gnostic-models v0.6.8 // indirect
+	github.com/google/go-cmp v0.7.0 // indirect
-	github.com/google/go-cmp v0.5.9 // indirect
	github.com/google/gofuzz v1.2.0 // indirect
-	github.com/google/uuid v1.3.0 // indirect
+	github.com/google/uuid v1.6.0 // indirect
-	github.com/imdario/mergo v0.3.6 // indirect
+	github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
	github.com/jmespath/go-jmespath v0.4.0 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/kr/text v0.2.0 // indirect
	github.com/mailru/easyjson v0.7.7 // indirect
-	github.com/moby/spdystream v0.2.0 // indirect
+	github.com/moby/spdystream v0.5.0 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
-	github.com/pmezard/go-difflib v1.0.0 // indirect
+	github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
+	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/spf13/pflag v1.0.5 // indirect
-	golang.org/x/mod v0.14.0 // indirect
+	github.com/x448/float16 v0.8.4 // indirect
-	golang.org/x/net v0.20.0 // indirect
+	golang.org/x/mod v0.28.0 // indirect
-	golang.org/x/oauth2 v0.8.0 // indirect
+	golang.org/x/net v0.45.0 // indirect
-	golang.org/x/sys v0.16.0 // indirect
+	golang.org/x/oauth2 v0.27.0 // indirect
-	golang.org/x/term v0.16.0 // indirect
+	golang.org/x/sync v0.17.0 // indirect
-	golang.org/x/text v0.14.0 // indirect
+	golang.org/x/sys v0.37.0 // indirect
-	golang.org/x/time v0.3.0 // indirect
+	golang.org/x/term v0.36.0 // indirect
-	golang.org/x/tools v0.17.0 // indirect
+	golang.org/x/text v0.30.0 // indirect
-	google.golang.org/appengine v1.6.7 // indirect
+	golang.org/x/time v0.9.0 // indirect
-	google.golang.org/protobuf v1.33.0 // indirect
+	golang.org/x/tools v0.37.0 // indirect
+	golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated // indirect
+	google.golang.org/protobuf v1.36.5 // indirect
+	gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
	gopkg.in/inf.v0 v0.9.1 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
-	k8s.io/gengo v0.0.0-20211129171323-c02415ce4185 // indirect
+	k8s.io/gengo v0.0.0-20220902162205-c0856e24416d // indirect
+	k8s.io/gengo/v2 v2.0.0-20240826214909-a7b603a56eb7 // indirect
-	k8s.io/klog/v2 v2.100.1 // indirect
+	k8s.io/klog/v2 v2.130.1 // indirect
-	k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
+	k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
-	k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect
+	k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
-	sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
+	sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
+	sigs.k8s.io/randfill v1.0.0 // indirect
-	sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
+	sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
-	sigs.k8s.io/yaml v1.3.0 // indirect
+	sigs.k8s.io/yaml v1.4.0 // indirect
)

go.sum
@@ -1,54 +1,53 @@
github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww=
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio= github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/aws/aws-sdk-go v1.42.18 h1:2f/cDNwQ3e+yHxtPn1si0to3GalbNHwkRm461IjwRiM= github.com/aws/aws-sdk-go v1.55.8 h1:JRmEUbU52aJQZ2AjX4q4Wu7t4uZjOu71uyNmaWlUkJQ=
github.com/aws/aws-sdk-go v1.42.18/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q= github.com/aws/aws-sdk-go v1.55.8/go.mod h1:ZkViS9AqA6otK+JBBNH2++sx1sgxrPKcSzPPvQkUtXk=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84= github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3 h1:yMBqmnQ0gyZvEb/+KzuWZOXgllrXT4SADYbvDaXHv/g=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI= github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls= github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc= github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs= github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec= github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28= github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
@@ -71,8 +70,8 @@ github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8= github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c= github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -82,20 +81,23 @@ github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d h1:LznySqW8MqVeFh+p
github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d/go.mod h1:u3hJ0kqCQu/cPpsu3RbCOPZ0d7V3IjPjv1adNRleM9I= github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d/go.mod h1:u3hJ0kqCQu/cPpsu3RbCOPZ0d7V3IjPjv1adNRleM9I=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/onsi/ginkgo/v2 v2.9.4 h1:xR7vG4IXt5RWx6FfIjyAtsoMAtnc3C/rFXBBd2AjZwE= github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/onsi/ginkgo/v2 v2.9.4/go.mod h1:gCQYp2Q+kSoIj7ykSVb9nskRSsR6PUj4AiLywzIhbKM= github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE= github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg= github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4=
github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/r3labs/diff v1.1.0 h1:V53xhrbTHrWFWq3gI4b94AjgEJOerO1+1l0xyHOBi8M= github.com/r3labs/diff v1.1.0 h1:V53xhrbTHrWFWq3gI4b94AjgEJOerO1+1l0xyHOBi8M=
github.com/r3labs/diff v1.1.0/go.mod h1:7WjXasNzi0vJetRcB/RqNl5dlIsmXcTTLmF5IoH6Xig= github.com/r3labs/diff v1.1.0/go.mod h1:7WjXasNzi0vJetRcB/RqNl5dlIsmXcTTLmF5IoH6Xig=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ= github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -107,83 +109,82 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8= github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.18.0 h1:PGVlW0xEltQnzFZ55hkuX5+KLyrMYhHld1YHO4AKcdc= golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg= golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 h1:hNQpMuAJe5CtcUqCXaWga3FHu+kQvCqcsoVaQgSV60o=
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3/go.mod h1:idGWGoKP1toJGkd5/ig9ZLuPcZBC3ewk7SzmH0uou08=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0= golang.org/x/mod v0.28.0 h1:gQBtGhjxykdjY9YhZpSlZIsbnaE2+PgjfLWUQTnoZ1U=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.45.0 h1:RLBg5JKixCy82FtLJpeNlVM0nrSqpCRYzVU1n8kj0tM=
golang.org/x/net v0.20.0 h1:aCL9BSgETF1k+blQaYUBx9hJ9LOGP3gAVemcZlf1Kpo= golang.org/x/net v0.45.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY= golang.org/x/oauth2 v0.27.0 h1:da9Vo7/tDv5RH/7nZDz1eMGS/q1Vv1N/7FCrBhI9I3M=
golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8= golang.org/x/oauth2 v0.27.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ= golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU= golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.16.0 h1:m+B6fahuftsE9qjo0VWp2FW0mB3MTJvR0BaMQrq0pmE= golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY= golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= golang.org/x/time v0.9.0 h1:EsRrnYcQiGH+5FfbgvV4AP7qEZstoyrHB0DzarOQ4ZY=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4= golang.org/x/time v0.9.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.17.0 h1:FvmRgNOcs3kOa+T20R1uhfP9F6HgG2mfxDv1vrx1Htc= golang.org/x/tools v0.37.0 h1:DVSRzp7FwePZW356yEAChSdNcQo6Nsp+fex1SUW09lE=
golang.org/x/tools v0.17.0/go.mod h1:xsh6VxdV005rRVaS6SSAf9oiAqljS7UZUacMZ8Bnsps= golang.org/x/tools v0.37.0/go.mod h1:MBN5QPQtLMHVdvsbtarmTNukZDdgwdwlO5qGacAzF0w=
golang.org/x/tools/go/expect v0.1.0-deprecated h1:jY2C5HGYR5lqex3gEniOQL0r7Dq5+VGVgY1nudX5lXY=
golang.org/x/tools/go/expect v0.1.0-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY=
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM=
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -193,29 +194,34 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.28.7 h1:YKIhBxjXKaxuxWJnwohV0aGjRA5l4IU0Eywf/q19AVI= k8s.io/api v0.32.9 h1:q/59kk8lnecgG0grJqzrmXC1Jcl2hPWp9ltz0FQuoLI=
k8s.io/api v0.28.7/go.mod h1:y4RbcjCCMff1930SG/TcP3AUKNfaJUgIeUp58e/2vyY= k8s.io/api v0.32.9/go.mod h1:jIfT3rwW4EU1IXZm9qjzSk/2j91k4CJL5vUULrxqp3Y=
k8s.io/apiextensions-apiserver v0.25.9 h1:Pycd6lm2auABp9wKQHCFSEPG+NPdFSTJXPST6NJFzB8= k8s.io/apiextensions-apiserver v0.25.9 h1:Pycd6lm2auABp9wKQHCFSEPG+NPdFSTJXPST6NJFzB8=
k8s.io/apiextensions-apiserver v0.25.9/go.mod h1:ijGxmSG1GLOEaWhTuaEr0M7KUeia3mWCZa6FFQqpt1M= k8s.io/apiextensions-apiserver v0.25.9/go.mod h1:ijGxmSG1GLOEaWhTuaEr0M7KUeia3mWCZa6FFQqpt1M=
k8s.io/apimachinery v0.28.7 h1:2Z38/XRAOcpb+PonxmBEmjG7hBfmmr41xnr0XvpTnB4= k8s.io/apimachinery v0.32.9 h1:fXk8ktfsxrdThaEOAQFgkhCK7iyoyvS8nbYJ83o/SSs=
k8s.io/apimachinery v0.28.7/go.mod h1:QFNX/kCl/EMT2WTSz8k4WLCv2XnkOLMaL8GAVRMdpsA= k8s.io/apimachinery v0.32.9/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE=
k8s.io/client-go v0.28.7 h1:3L6402+tjmOl8twX3fjUQ/wsYAkw6UlVNDVP+rF6YGA= k8s.io/client-go v0.32.9 h1:ZMyIQ1TEpTDAQni3L2gH1NZzyOA/gHfNcAazzCxMJ0c=
k8s.io/client-go v0.28.7/go.mod h1:xIoEaDewZ+EwWOo1/F1t0IOKMPe1rwBZhLu9Es6y0tE= k8s.io/client-go v0.32.9/go.mod h1:2OT8aFSYvUjKGadaeT+AVbhkXQSpMAkiSb88Kz2WggI=
k8s.io/code-generator v0.25.9 h1:lgyAV9AIRYNxZxgLRXqsCAtqJLHvakot41CjEqD5W0w= k8s.io/code-generator v0.25.9 h1:lgyAV9AIRYNxZxgLRXqsCAtqJLHvakot41CjEqD5W0w=
k8s.io/code-generator v0.25.9/go.mod h1:DHfpdhSUrwqF0f4oLqCtF8gYbqlndNetjBEz45nWzJI= k8s.io/code-generator v0.25.9/go.mod h1:DHfpdhSUrwqF0f4oLqCtF8gYbqlndNetjBEz45nWzJI=
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185 h1:TT1WdmqqXareKxZ/oNXEUSwKlLiHzPMyB0t8BaFeBYI= k8s.io/gengo v0.0.0-20220902162205-c0856e24416d h1:U9tB195lKdzwqicbJvyJeOXV7Klv+wNAWENRnXEGi08=
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E= k8s.io/gengo v0.0.0-20220902162205-c0856e24416d/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/gengo/v2 v2.0.0-20240826214909-a7b603a56eb7 h1:cErOOTkQ3JW19o4lo91fFurouhP8NcoBvb7CkvhZZpk=
k8s.io/gengo/v2 v2.0.0-20240826214909-a7b603a56eb7/go.mod h1:EJykeLsmFC60UQbYJezXkEsG2FLrt0GPNkU5iK5GWxU=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y= k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ= k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM= k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 h1:qY1Ad8PODbnymg2pRbkyMT/ylpTrCM8P2RJ0yroCyIk= k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE= sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E= sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc= sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo= sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8= sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=


@@ -23,6 +23,7 @@ THE SOFTWARE.
package cmd
import (
+	"context"
	"log"
	"os"
	user "os/user"
@@ -121,7 +122,7 @@ func connect(clusterName string, master bool, replica string, psql bool, user st
		log.Fatal(err)
	}
-	err = exec.Stream(remotecommand.StreamOptions{
+	err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
		Stdin: os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,


@@ -65,7 +65,7 @@ func version(namespace string) {
	operatorDeployment := getPostgresOperator(client)
	if operatorDeployment.Name == "" {
-		log.Fatal("make sure zalando's postgres operator is running")
+		log.Fatalf("make sure zalando's postgres operator is running in namespace %s", namespace)
	}
	operatorImage := operatorDeployment.Spec.Template.Spec.Containers[0].Image
	imageDetails := strings.Split(operatorImage, ":")


@@ -1,69 +1,71 @@
module github.com/zalando/postgres-operator/kubectl-pg
-go 1.21
+go 1.25
require (
-	github.com/spf13/cobra v1.7.0
+	github.com/spf13/cobra v1.10.1
-	github.com/spf13/viper v1.9.0
+	github.com/spf13/viper v1.21.0
-	github.com/zalando/postgres-operator v1.10.1
+	github.com/zalando/postgres-operator v1.14.0
-	k8s.io/api v0.28.7
+	k8s.io/api v0.32.9
-	k8s.io/apiextensions-apiserver v0.28.7
+	k8s.io/apiextensions-apiserver v0.25.9
-	k8s.io/apimachinery v0.28.7
+	k8s.io/apimachinery v0.32.9
-	k8s.io/client-go v0.28.7
+	k8s.io/client-go v0.32.9
)
require (
-	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
-	github.com/emicklei/go-restful/v3 v3.9.0 // indirect
+	github.com/emicklei/go-restful/v3 v3.11.0 // indirect
-	github.com/fsnotify/fsnotify v1.6.0 // indirect
+	github.com/fsnotify/fsnotify v1.9.0 // indirect
-	github.com/go-logr/logr v1.2.4 // indirect
+	github.com/fxamacker/cbor/v2 v2.7.0 // indirect
-	github.com/go-openapi/jsonpointer v0.19.6 // indirect
+	github.com/go-logr/logr v1.4.2 // indirect
+	github.com/go-openapi/jsonpointer v0.21.0 // indirect
	github.com/go-openapi/jsonreference v0.20.2 // indirect
-	github.com/go-openapi/swag v0.22.3 // indirect
+	github.com/go-openapi/swag v0.23.0 // indirect
+	github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
-	github.com/golang/protobuf v1.5.3 // indirect
+	github.com/golang/protobuf v1.5.4 // indirect
	github.com/google/gnostic-models v0.6.8 // indirect
-	github.com/google/go-cmp v0.5.9 // indirect
+	github.com/google/go-cmp v0.6.0 // indirect
	github.com/google/gofuzz v1.2.0 // indirect
-	github.com/google/uuid v1.3.0 // indirect
+	github.com/google/uuid v1.6.0 // indirect
-	github.com/hashicorp/hcl v1.0.0 // indirect
+	github.com/gorilla/websocket v1.5.0 // indirect
-	github.com/imdario/mergo v0.3.6 // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/kr/text v0.2.0 // indirect
-	github.com/magiconair/properties v1.8.5 // indirect
	github.com/mailru/easyjson v0.7.7 // indirect
-	github.com/mitchellh/mapstructure v1.4.2 // indirect
+	github.com/moby/spdystream v0.5.0 // indirect
-	github.com/moby/spdystream v0.2.0 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
-	github.com/pelletier/go-toml v1.9.4 // indirect
+	github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
-	github.com/sirupsen/logrus v1.9.0 // indirect
+	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
-	github.com/spf13/afero v1.6.0 // indirect
+	github.com/pkg/errors v0.9.1 // indirect
-	github.com/spf13/cast v1.4.1 // indirect
+	github.com/sagikazarmark/locafero v0.11.0 // indirect
-	github.com/spf13/jwalterweatherman v1.1.0 // indirect
+	github.com/sirupsen/logrus v1.9.3 // indirect
-	github.com/spf13/pflag v1.0.5 // indirect
+	github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
-	github.com/subosito/gotenv v1.2.0 // indirect
+	github.com/spf13/afero v1.15.0 // indirect
-	golang.org/x/crypto v0.17.0 // indirect
+	github.com/spf13/cast v1.10.0 // indirect
-	golang.org/x/net v0.19.0 // indirect
+	github.com/spf13/pflag v1.0.10 // indirect
-	golang.org/x/oauth2 v0.8.0 // indirect
+	github.com/subosito/gotenv v1.6.0 // indirect
-	golang.org/x/sys v0.15.0 // indirect
+	github.com/x448/float16 v0.8.4 // indirect
-	golang.org/x/term v0.15.0 // indirect
+	go.yaml.in/yaml/v3 v3.0.4 // indirect
-	golang.org/x/text v0.14.0 // indirect
+	golang.org/x/crypto v0.31.0 // indirect
-	golang.org/x/time v0.3.0 // indirect
+	golang.org/x/net v0.30.0 // indirect
-	google.golang.org/appengine v1.6.7 // indirect
+	golang.org/x/oauth2 v0.23.0 // indirect
-	google.golang.org/protobuf v1.33.0 // indirect
+	golang.org/x/sys v0.29.0 // indirect
+	golang.org/x/term v0.27.0 // indirect
+	golang.org/x/text v0.28.0 // indirect
+	golang.org/x/time v0.7.0 // indirect
+	google.golang.org/protobuf v1.35.1 // indirect
+	gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
	gopkg.in/inf.v0 v0.9.1 // indirect
-	gopkg.in/ini.v1 v1.63.2 // indirect
-	gopkg.in/yaml.v2 v2.4.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
-	k8s.io/klog/v2 v2.100.1 // indirect
+	k8s.io/klog/v2 v2.130.1 // indirect
-	k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
+	k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
-	k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect
+	k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
-	sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
+	sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
-	sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
+	sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect
-	sigs.k8s.io/yaml v1.3.0 // indirect
+	sigs.k8s.io/yaml v1.4.0 // indirect
)

@@ -1,230 +1,59 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.6.0/go.mod h1:afJwI0vaXwAG54kI7A//lP/lSPDkQORQuMkv56TxEPU=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3 h1:yMBqmnQ0gyZvEb/+KzuWZOXgllrXT4SADYbvDaXHv/g=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.10.1/go.mod h1:XjsvQN+RJGWI2TWy1/kqaE16HrR2J/FWgkYjdZQsX9M=
github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.1/go.mod h1:4gW7WsVCke5TE7EPeYliwHlRUyBtfCwuFwuMg2DmyNY=
github.com/hashicorp/memberlist v0.2.2/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/hashicorp/serf v0.9.5/go.mod h1:UWDWwZeL5cuWDJdl0C6wrvrUwEqtQ4ZKBKKENpqIUyk=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28=
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -232,538 +61,143 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/magiconair/properties v1.8.5 h1:b6kJs+EmPFMYGkow9GiUyCyOvIwYetYJ3fSaWak/Gls=
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.2 h1:6h7AQ0yhTcIsmFmnAwQls75jp2Gzs4iB8W7pjMO+rqo=
github.com/mitchellh/mapstructure v1.4.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d h1:LznySqW8MqVeFh+pW6rOkFdld9QQ7jRydBKKM6jyPVI=
github.com/motomux/pretty v0.0.0-20161209205251-b2aad2c9a95d/go.mod h1:u3hJ0kqCQu/cPpsu3RbCOPZ0d7V3IjPjv1adNRleM9I=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/onsi/ginkgo/v2 v2.9.4 h1:xR7vG4IXt5RWx6FfIjyAtsoMAtnc3C/rFXBBd2AjZwE=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/onsi/ginkgo/v2 v2.9.4/go.mod h1:gCQYp2Q+kSoIj7ykSVb9nskRSsR6PUj4AiLywzIhbKM=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE=
github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg=
github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4=
github.com/pelletier/go-toml v1.9.4 h1:tjENF6MfZAg8e4ZmZTeWaWiT2vXtsoO6+iuOjFhECwM=
github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/pelletier/go-toml v1.9.4/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
github.com/sagikazarmark/crypt v0.1.0/go.mod h1:B/mN0msZuINBtQ1zZLEQcegFJJf9vnYIR88KRMEuODE=
github.com/sagikazarmark/locafero v0.11.0/go.mod h1:nVIGvgyzw595SUSUE6tvCp3YYTeHs15MvlmU87WwIik=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8/go.mod h1:3n1Cwaq1E1/1lhQhtRK2ts/ZwZEhjcQeJQ1RuC6Q/8U=
github.com/spf13/afero v1.6.0 h1:xoax2sJ2DT8S8xA2paPFjDCScCNeWsg75VG0DLRreiY=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cast v1.4.1 h1:s0hze+J0196ZfEMTs80N7UlFt0BDuQ7Q+JDnHiMWKdA=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.9.0 h1:yR6EXjTp0y0cLN8OZg1CRZmOBdI88UcGkhgyJhu6nZk=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/spf13/viper v1.9.0/go.mod h1:+i6ajR7OX2XaiBkrcZJFK21htRk7eDeLg7+O6bhUPP4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/zalando/postgres-operator v1.14.0 h1:C8+n26C8v6fPB1SNW+Y8X6oQoEHufzGJXJzYPlix+zw=
github.com/zalando/postgres-operator v1.10.1 h1:2QAZam6e3dhK8D64Hc9m4eul29f1yggGMAH3ff20etw=
github.com/zalando/postgres-operator v1.14.0/go.mod h1:ZTHY3sVfHgLLRpTgyR/44JcumbACeJBjztr3o1yHBdc=
github.com/zalando/postgres-operator v1.10.1/go.mod h1:UYVdslgiYgsKSuU24Mne2qO67nuWTJwWiT1WQDurROs=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8=
golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.16.1 h1:TLyB3WofjdOEepBHAU20JdNC1Zbg87elYofWYAY5oZA=
golang.org/x/tools v0.16.1/go.mod h1:kYVVN6I1mBNoB1OX+noeBjbRk4IUEPa7JJ+TJMEooJ0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.63.2 h1:tGK/CyBg7SMzb60vP1M03vNZ3VDu3wGQJwn7Sxi9r3c=
gopkg.in/ini.v1 v1.63.2/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.28.7 h1:YKIhBxjXKaxuxWJnwohV0aGjRA5l4IU0Eywf/q19AVI=
k8s.io/api v0.28.7/go.mod h1:y4RbcjCCMff1930SG/TcP3AUKNfaJUgIeUp58e/2vyY=
k8s.io/api v0.32.9 h1:q/59kk8lnecgG0grJqzrmXC1Jcl2hPWp9ltz0FQuoLI=
k8s.io/api v0.32.9/go.mod h1:jIfT3rwW4EU1IXZm9qjzSk/2j91k4CJL5vUULrxqp3Y=
k8s.io/apiextensions-apiserver v0.25.9 h1:Pycd6lm2auABp9wKQHCFSEPG+NPdFSTJXPST6NJFzB8=
k8s.io/apiextensions-apiserver v0.25.9/go.mod h1:ijGxmSG1GLOEaWhTuaEr0M7KUeia3mWCZa6FFQqpt1M=
k8s.io/apiextensions-apiserver v0.28.7 h1:NQlzP/vmvIO9Qt7wQTdMe9sGWGkozQZMPk9suehAvR8=
k8s.io/apiextensions-apiserver v0.28.7/go.mod h1:ST+ZOppyy+Z0mIxezSOK8qwIXctNwdFLNpGkQp8bw4M=
k8s.io/apimachinery v0.28.7 h1:2Z38/XRAOcpb+PonxmBEmjG7hBfmmr41xnr0XvpTnB4=
k8s.io/apimachinery v0.28.7/go.mod h1:QFNX/kCl/EMT2WTSz8k4WLCv2XnkOLMaL8GAVRMdpsA=
k8s.io/apimachinery v0.32.9 h1:fXk8ktfsxrdThaEOAQFgkhCK7iyoyvS8nbYJ83o/SSs=
k8s.io/apimachinery v0.32.9/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE=
k8s.io/client-go v0.28.7 h1:3L6402+tjmOl8twX3fjUQ/wsYAkw6UlVNDVP+rF6YGA=
k8s.io/client-go v0.28.7/go.mod h1:xIoEaDewZ+EwWOo1/F1t0IOKMPe1rwBZhLu9Es6y0tE=
k8s.io/client-go v0.32.9 h1:ZMyIQ1TEpTDAQni3L2gH1NZzyOA/gHfNcAazzCxMJ0c=
k8s.io/client-go v0.32.9/go.mod h1:2OT8aFSYvUjKGadaeT+AVbhkXQSpMAkiSb88Kz2WggI=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM=
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f h1:GA7//TjRY9yWGy1poLzYYJJ4JRdzg3+O6e8I+e+8T5Y=
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f/go.mod h1:R/HEjbvWI0qdfb8viZUeVZm0X6IZnxAydC7YU42CMw4=
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 h1:qY1Ad8PODbnymg2pRbkyMT/ylpTrCM8P2RJ0yroCyIk=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/structured-merge-diff/v4 v4.4.2 h1:MdmvkGuXi/8io6ixD5wud3vOLwc1rj0aNqRlpuvjmwA=
sigs.k8s.io/structured-merge-diff/v4 v4.4.2/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=

View File

@@ -25,11 +25,11 @@ RUN apt-get update \
&& curl --silent https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& apt-get update \
&& apt-get install --no-install-recommends -y \
+ postgresql-client-17 \
postgresql-client-16 \
postgresql-client-15 \
postgresql-client-14 \
postgresql-client-13 \
- postgresql-client-12 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

View File

@@ -45,7 +45,7 @@ function compress {
}
function az_upload {
- PATH_TO_BACKUP=$LOGICAL_BACKUP_S3_BUCKET"/spilo/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"$(date +%s).sql.gz
+ PATH_TO_BACKUP=$LOGICAL_BACKUP_S3_BUCKET"/"$LOGICAL_BACKUP_S3_BUCKET_PREFIX"/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"$(date +%s).sql.gz
az storage blob upload --file "$1" --account-name "$LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_NAME" --account-key "$LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_KEY" -c "$LOGICAL_BACKUP_AZURE_STORAGE_CONTAINER" -n "$PATH_TO_BACKUP"
}
@@ -72,7 +72,7 @@ function aws_delete_outdated {
cutoff_date=$(date -d "$LOGICAL_BACKUP_S3_RETENTION_TIME ago" +%F)
# mimic bucket setup from Spilo
- prefix="spilo/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"
+ prefix=$LOGICAL_BACKUP_S3_BUCKET_PREFIX"/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"
args=(
"--no-paginate"
@@ -107,7 +107,7 @@ function aws_upload {
# mimic bucket setup from Spilo
# to keep logical backups at the same path as WAL
# NB: $LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX already contains the leading "/" when set by the Postgres Operator
- PATH_TO_BACKUP=s3://$LOGICAL_BACKUP_S3_BUCKET"/spilo/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"$(date +%s).sql.gz
+ PATH_TO_BACKUP=s3://$LOGICAL_BACKUP_S3_BUCKET"/"$LOGICAL_BACKUP_S3_BUCKET_PREFIX"/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"$(date +%s).sql.gz
args=()
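Across az_upload, aws_delete_outdated and aws_upload the functional change is the same: the previously hardcoded "spilo/" path segment is now taken from $LOGICAL_BACKUP_S3_BUCKET_PREFIX (set to "spilo" in the example ConfigMap further down). A minimal sketch of the resulting object key, with illustrative values for the variables the operator normally injects into the cron job:

LOGICAL_BACKUP_S3_BUCKET="my-bucket-url"
LOGICAL_BACKUP_S3_BUCKET_PREFIX="spilo"           # new, configurable prefix
SCOPE="acid-minimal-cluster"                      # cluster name (illustrative)
LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX="/123e4567" # already contains the leading "/"
PATH_TO_BACKUP=s3://$LOGICAL_BACKUP_S3_BUCKET"/"$LOGICAL_BACKUP_S3_BUCKET_PREFIX"/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"$(date +%s).sql.gz
echo "$PATH_TO_BACKUP"
# prints something like s3://my-bucket-url/spilo/acid-minimal-cluster/123e4567/logical_backups/1700000000.sql.gz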
@@ -120,9 +120,23 @@ function aws_upload {
}
function gcs_upload {
- PATH_TO_BACKUP=gs://$LOGICAL_BACKUP_S3_BUCKET"/spilo/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"$(date +%s).sql.gz
+ PATH_TO_BACKUP=gs://$LOGICAL_BACKUP_S3_BUCKET"/"$LOGICAL_BACKUP_S3_BUCKET_PREFIX"/"$SCOPE$LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX"/logical_backups/"$(date +%s).sql.gz
- gsutil -o Credentials:gs_service_key_file=$LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS cp - "$PATH_TO_BACKUP"
+ #Set local LOGICAL_GOOGLE_APPLICATION_CREDENTIALS to nothing or
+ #value of LOGICAL_GOOGLE_APPLICATION_CREDENTIALS env var. Needed
+ #because `set -o nounset` is globally set
+ local LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS=${LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS:-}
+ GSUTIL_OPTIONS=("-o" "Credentials:gs_service_key_file=$LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS")
+ #If GOOGLE_APPLICATION_CREDENTIALS is not set try to get
+ #creds from metadata
+ if [[ -z $LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS ]]
+ then
+ GSUTIL_OPTIONS[1]="GoogleCompute:service_account=default"
+ fi
+ gsutil ${GSUTIL_OPTIONS[@]} cp - "$PATH_TO_BACKUP"
}
function upload {
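In words: gcs_upload still prefers an explicitly mounted service-account key file and only falls back to the GCE/GKE metadata identity when LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS is unset. The two gsutil invocations this produces, with an illustrative key path and object name:

# key file mounted and exported via LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS
gsutil -o "Credentials:gs_service_key_file=/credentials/gcs-key.json" cp - "gs://my-bucket-url/spilo/acid-minimal-cluster/logical_backups/1700000000.sql.gz"
# variable unset: authenticate with the node's default service account from the metadata server
gsutil -o "GoogleCompute:service_account=default" cp - "gs://my-bucket-url/spilo/acid-minimal-cluster/logical_backups/1700000000.sql.gz"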

View File

@@ -10,7 +10,7 @@ metadata:
# "delete-date": "2020-08-31" # can only be deleted on that day if "delete-date "key is configured
# "delete-clustername": "acid-test-cluster" # can only be deleted when name matches if "delete-clustername" key is configured
spec:
- dockerImage: ghcr.io/zalando/spilo-16:3.2-p2
+ dockerImage: ghcr.io/zalando/spilo-17:4.0-p3
teamId: "acid"
numberOfInstances: 2
users: # Application/Robot users
@@ -48,7 +48,7 @@ spec:
defaultRoles: true
defaultUsers: false
postgresql:
- version: "16"
+ version: "17"
parameters: # Expert section
shared_buffers: "32MB"
max_connections: "10"
@@ -68,6 +68,8 @@ spec:
# matchLabels:
# environment: dev
# service: postgres
+ # subPath: $(NODE_NAME)/$(POD_NAME)
+ # isSubPathExpr: true
additionalVolumes:
- name: empty
mountPath: /opt/empty
@@ -83,6 +85,16 @@ spec:
# PersistentVolumeClaim:
# claimName: pvc-postgresql-data-partitions
# readyOnly: false
+ # - name: data
+ # mountPath: /home/postgres/pgdata/partitions
+ # subPath: $(NODE_NAME)/$(POD_NAME)
+ # isSubPathExpr: true
+ # targetContainers:
+ # - postgres
+ # volumeSource:
+ # PersistentVolumeClaim:
+ # claimName: pvc-postgresql-data-partitions
+ # readyOnly: false
# - name: conf
# mountPath: /etc/telegraf
# subPath: telegraf.conf
@@ -151,6 +163,7 @@ spec:
# run periodic backups with k8s cron jobs
# enableLogicalBackup: true
+ # logicalBackupRetention: "3 months"
# logicalBackupSchedule: "30 00 * * *"
# maintenanceWindows:

View File

@@ -18,11 +18,11 @@ data:
connection_pooler_default_memory_limit: 100Mi
connection_pooler_default_memory_request: 100Mi
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-32"
- # connection_pooler_max_db_connections: 60
+ connection_pooler_max_db_connections: "60"
- # connection_pooler_mode: "transaction"
+ connection_pooler_mode: "transaction"
- # connection_pooler_number_of_instances: 2
+ connection_pooler_number_of_instances: "2"
- # connection_pooler_schema: "pooler"
+ connection_pooler_schema: "pooler"
- # connection_pooler_user: "pooler"
+ connection_pooler_user: "pooler"
crd_categories: "all"
# custom_service_annotations: "keyx:valuez,keya:valuea"
# custom_pod_annotations: "keya:valuea,keyb:valueb"
@@ -34,38 +34,41 @@ data:
default_memory_request: 100Mi
# delete_annotation_date_key: delete-date
# delete_annotation_name_key: delete-clustername
- docker_image: ghcr.io/zalando/spilo-16:3.2-p2
+ docker_image: ghcr.io/zalando/spilo-17:4.0-p3
# downscaler_annotations: "deployment-time,downscaler/*"
- # enable_admin_role_for_users: "true"
+ enable_admin_role_for_users: "true"
- # enable_crd_registration: "true"
+ enable_crd_registration: "true"
- # enable_cross_namespace_secret: "false"
+ enable_crd_validation: "true"
+ enable_cross_namespace_secret: "false"
enable_finalizers: "false"
- # enable_database_access: "true"
+ enable_database_access: "true"
enable_ebs_gp3_migration: "false"
- # enable_ebs_gp3_migration_max_size: "1000"
+ enable_ebs_gp3_migration_max_size: "1000"
- # enable_init_containers: "true"
+ enable_init_containers: "true"
- # enable_lazy_spilo_upgrade: "false"
+ enable_lazy_spilo_upgrade: "false"
enable_master_load_balancer: "false"
enable_master_pooler_load_balancer: "false"
enable_password_rotation: "false"
enable_patroni_failsafe_mode: "false"
+ enable_owner_references: "false"
enable_persistent_volume_claim_deletion: "true"
enable_pgversion_env_var: "true"
- # enable_pod_antiaffinity: "false"
+ enable_pod_antiaffinity: "false"
- # enable_pod_disruption_budget: "true"
+ enable_pod_disruption_budget: "true"
- # enable_postgres_team_crd: "false"
+ enable_postgres_team_crd: "false"
- # enable_postgres_team_crd_superusers: "false"
+ enable_postgres_team_crd_superusers: "false"
enable_readiness_probe: "false"
enable_replica_load_balancer: "false"
enable_replica_pooler_load_balancer: "false"
- # enable_shm_volume: "true"
+ enable_secrets_deletion: "true"
- # enable_sidecars: "true"
+ enable_shm_volume: "true"
+ enable_sidecars: "true"
enable_spilo_wal_path_compat: "true"
enable_team_id_clustername_prefix: "false"
enable_team_member_deprecation: "false"
- # enable_team_superuser: "false"
+ enable_team_superuser: "false"
enable_teams_api: "false"
- # etcd_host: ""
+ etcd_host: ""
external_traffic_policy: "Cluster"
# gcp_credentials: ""
# ignored_annotations: ""
@@ -75,54 +78,55 @@ data:
# inherited_annotations: owned-by
# inherited_labels: application,environment
# kube_iam_role: ""
- # kubernetes_use_configmaps: "false"
+ kubernetes_use_configmaps: "false"
# log_s3_bucket: ""
# logical_backup_azure_storage_account_name: ""
# logical_backup_azure_storage_container: ""
# logical_backup_azure_storage_account_key: ""
# logical_backup_cpu_limit: ""
# logical_backup_cpu_request: ""
- logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.11.0"
+ logical_backup_cronjob_environment_secret: ""
+ logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.15.0"
# logical_backup_google_application_credentials: ""
logical_backup_job_prefix: "logical-backup-"
# logical_backup_memory_limit: ""
# logical_backup_memory_request: ""
logical_backup_provider: "s3"
- # logical_backup_s3_access_key_id: ""
+ logical_backup_s3_access_key_id: ""
logical_backup_s3_bucket: "my-bucket-url"
- # logical_backup_s3_region: ""
+ logical_backup_s3_bucket_prefix: "spilo"
- # logical_backup_s3_endpoint: ""
+ logical_backup_s3_region: ""
- # logical_backup_s3_secret_access_key: ""
+ logical_backup_s3_endpoint: ""
+ logical_backup_s3_secret_access_key: ""
logical_backup_s3_sse: "AES256"
- # logical_backup_s3_retention_time: ""
+ logical_backup_s3_retention_time: ""
logical_backup_schedule: "30 00 * * *"
- # logical_backup_cronjob_environment_secret: ""
major_version_upgrade_mode: "manual"
# major_version_upgrade_team_allow_list: ""
master_dns_name_format: "{cluster}.{namespace}.{hostedzone}"
- # master_legacy_dns_name_format: "{cluster}.{team}.{hostedzone}"
+ master_legacy_dns_name_format: "{cluster}.{team}.{hostedzone}"
- # master_pod_move_timeout: 20m
+ master_pod_move_timeout: 20m
- # max_instances: "-1"
- # min_instances: "-1"
# max_cpu_request: "1"
+ max_instances: "-1"
# max_memory_request: 4Gi
- # min_cpu_limit: 250m
+ min_cpu_limit: 250m
- # min_memory_limit: 250Mi
+ min_instances: "-1"
- # minimal_major_version: "12"
+ min_memory_limit: 250Mi
+ minimal_major_version: "13"
# node_readiness_label: "status:ready"
# node_readiness_label_merge: "OR"
- # oauth_token_secret_name: postgresql-operator
+ oauth_token_secret_name: postgresql-operator
- # pam_configuration: |
+ pam_configuration: "https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees"
- # https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees
+ pam_role_name: zalandos
- # pam_role_name: zalandos
patroni_api_check_interval: "1s"
patroni_api_check_timeout: "5s"
- # password_rotation_interval: "90"
+ password_rotation_interval: "90"
- # password_rotation_user_retention: "180"
+ password_rotation_user_retention: "180"
+ pdb_master_label_selector: "true"
pdb_name_format: "postgres-{cluster}-pdb"
persistent_volume_claim_retention_policy: "when_deleted:retain,when_scaled:retain"
- # pod_antiaffinity_preferred_during_scheduling: "false"
+ pod_antiaffinity_preferred_during_scheduling: "false"
- # pod_antiaffinity_topology_key: "kubernetes.io/hostname"
+ pod_antiaffinity_topology_key: "kubernetes.io/hostname"
pod_deletion_wait_timeout: 10m
# pod_environment_configmap: "default/my-custom-config"
# pod_environment_secret: "my-custom-secret"
@@ -130,17 +134,17 @@ data:
pod_management_policy: "ordered_ready"
# pod_priority_class_name: "postgres-pod-priority"
pod_role_label: spilo-role
- # pod_service_account_definition: ""
+ pod_service_account_definition: ""
pod_service_account_name: "postgres-pod"
- # pod_service_account_role_binding_definition: ""
+ pod_service_account_role_binding_definition: ""
pod_terminate_grace_period: 5m
- # postgres_superuser_teams: "postgres_superusers"
+ postgres_superuser_teams: "postgres_superusers"
- # protected_role_names: "admin,cron_admin"
+ protected_role_names: "admin,cron_admin"
ready_wait_interval: 3s
ready_wait_timeout: 30s
repair_period: 5m
replica_dns_name_format: "{cluster}-repl.{namespace}.{hostedzone}"
- # replica_legacy_dns_name_format: "{cluster}-repl.{team}.{hostedzone}"
+ replica_legacy_dns_name_format: "{cluster}-repl.{team}.{hostedzone}"
replication_username: standby
resource_check_interval: 3s
resource_check_timeout: 10m
@@ -150,7 +154,7 @@ data:
secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
share_pgsocket_with_sidecars: "false"
# sidecar_docker_images: ""
- # set_memory_request_to_limit: "false"
+ set_memory_request_to_limit: "false"
spilo_allow_privilege_escalation: "true"
# spilo_runasuser: 101
# spilo_runasgroup: 103
@@ -158,10 +162,10 @@ data:
spilo_privileged: "false"
storage_resize_mode: "pvc"
super_username: postgres
- # target_major_version: "16"
+ target_major_version: "17"
- # team_admin_role: "admin"
+ team_admin_role: "admin"
- # team_api_role_configuration: "log_statement:all"
+ team_api_role_configuration: "log_statement:all"
- # teams_api_url: http://fake-teams-api.default.svc.cluster.local
+ teams_api_url: http://fake-teams-api.default.svc.cluster.local
# toleration: "key:db-only,operator:Exists,effect:NoSchedule"
# wal_az_storage_account: ""
# wal_gs_bucket: ""
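The operator reads this ConfigMap when it starts, so edits normally take effect only after the operator pod is recreated. A typical apply-and-restart sequence, assuming the usual manifests/configmap.yaml path and a deployment named postgres-operator in the default namespace (both illustrative):

kubectl apply -f manifests/configmap.yaml
kubectl rollout restart deployment/postgres-operator -n default
kubectl rollout status deployment/postgres-operator -n default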

manifests/fes.crd.yaml (new file, 23 lines)
View File

@@ -0,0 +1,23 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: fabriceventstreams.zalando.org
spec:
  group: zalando.org
  names:
    kind: FabricEventStream
    listKind: FabricEventStreamList
    plural: fabriceventstreams
    singular: fabriceventstream
    shortNames:
    - fes
    categories:
    - all
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
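Once the CRD above is registered, FabricEventStream resources can be addressed by their plural or short name. A quick smoke test, assuming cluster access and the manifest path shown above:

kubectl apply -f manifests/fes.crd.yaml
kubectl get crd fabriceventstreams.zalando.org
kubectl get fes --all-namespaces   # short name declared above; empty until a stream resource exists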

View File

@@ -31,11 +31,21 @@ spec:
version: "13"
sidecars:
- name: "exporter"
- image: "wrouesnel/postgres_exporter"
+ image: "quay.io/prometheuscommunity/postgres-exporter:v0.15.0"
ports:
- name: exporter
containerPort: 9187
protocol: TCP
+ env:
+ - name: DATA_SOURCE_URI
+   value: ":5432/?sslmode=disable"
+ - name: DATA_SOURCE_USER
+   value: "postgres"
+ - name: DATA_SOURCE_PASS
+   valueFrom:
+     secretKeyRef:
+       name: postgres.test-pg.credentials.postgresql.acid.zalan.do
+       key: password
resources:
limits:
cpu: 500m
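With DATA_SOURCE_URI pointing at the local instance and the password taken from the cluster's credential secret, the exporter can be checked directly on its declared port 9187. Pod name and namespace below are illustrative:

kubectl -n default port-forward pod/test-pg-0 9187:9187 &
curl -s http://localhost:9187/metrics | grep '^pg_up'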

View File

@@ -17,4 +17,4 @@ spec:
preparedDatabases:
bar: {}
postgresql:
- version: "12"
+ version: "13"

View File

@@ -17,4 +17,4 @@ spec:
preparedDatabases:
bar: {}
postgresql:
- version: "16"
+ version: "17"

View File

@@ -94,6 +94,7 @@ rules:
- create
- delete
- get
+ - patch
- update
# to check nodes for node readiness label
- apiGroups:
@@ -166,6 +167,7 @@ rules:
- get
- list
- patch
+ - update
# to CRUD cron jobs for logical backups
- apiGroups:
- batch

View File

@@ -59,13 +59,20 @@ rules:
- get
- patch
- update
- # to read configuration from ConfigMaps
+ # to read configuration from ConfigMaps and help Patroni manage the cluster if endpoints are not used
- apiGroups:
- ""
resources:
- configmaps
verbs:
+ - create
+ - delete
+ - deletecollection
- get
+ - list
+ - patch
+ - update
+ - watch
# to send events to the CRs
- apiGroups:
- ""
@@ -78,7 +85,7 @@ rules:
- patch
- update
- watch
- # to manage endpoints which are also used by Patroni
+ # to manage endpoints which are also used by Patroni (if it is using config maps)
- apiGroups:
- ""
resources:
@@ -102,6 +109,7 @@ rules:
- delete
- get
- update
+ - patch
# to check nodes for node readiness label
- apiGroups:
- ""
@@ -173,6 +181,7 @@ rules:
- get
- list
- patch
+ - update
# to CRUD cron jobs for logical backups
- apiGroups:
- batch
@@ -247,7 +256,21 @@ kind: ClusterRole
metadata:
name: postgres-pod
rules:
- # Patroni needs to watch and manage endpoints
+ # Patroni needs to watch and manage config maps (or endpoints)
+ - apiGroups:
+   - ""
+   resources:
+   - configmaps
+   verbs:
+   - create
+   - delete
+   - deletecollection
+   - get
+   - list
+   - patch
+   - update
+   - watch
+ # Patroni needs to watch and manage endpoints (or config maps)
- apiGroups:
- ""
resources:

View File

@ -66,7 +66,7 @@ spec:
type: string type: string
docker_image: docker_image:
type: string type: string
default: "ghcr.io/zalando/spilo-16:3.2-p2" default: "ghcr.io/zalando/spilo-17:4.0-p3"
enable_crd_registration: enable_crd_registration:
type: boolean type: boolean
default: true default: true
@ -158,17 +158,17 @@ spec:
properties: properties:
major_version_upgrade_mode: major_version_upgrade_mode:
type: string type: string
default: "off" default: "manual"
major_version_upgrade_team_allow_list: major_version_upgrade_team_allow_list:
type: array type: array
items: items:
type: string type: string
minimal_major_version: minimal_major_version:
type: string type: string
default: "12" default: "13"
target_major_version: target_major_version:
type: string type: string
default: "16" default: "17"
kubernetes: kubernetes:
type: object type: object
properties: properties:
@ -209,6 +209,9 @@ spec:
enable_init_containers: enable_init_containers:
type: boolean type: boolean
default: true default: true
enable_owner_references:
type: boolean
default: false
enable_persistent_volume_claim_deletion: enable_persistent_volume_claim_deletion:
type: boolean type: boolean
default: true default: true
@ -221,6 +224,9 @@ spec:
enable_readiness_probe: enable_readiness_probe:
type: boolean type: boolean
default: false default: false
enable_secrets_deletion:
type: boolean
default: true
enable_sidecars: enable_sidecars:
type: boolean type: boolean
default: true default: true
@ -279,6 +285,9 @@ spec:
oauth_token_secret_name: oauth_token_secret_name:
type: string type: string
default: "postgresql-operator" default: "postgresql-operator"
pdb_master_label_selector:
type: boolean
default: true
pdb_name_format: pdb_name_format:
type: string type: string
default: "postgres-{cluster}-pdb" default: "postgres-{cluster}-pdb"
@ -365,28 +374,28 @@ spec:
properties: properties:
default_cpu_limit: default_cpu_limit:
type: string type: string
pattern: '^(\d+m|\d+(\.\d{1,3})?)$' pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
default_cpu_request: default_cpu_request:
type: string type: string
pattern: '^(\d+m|\d+(\.\d{1,3})?)$' pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
default_memory_limit: default_memory_limit:
type: string type: string
pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$' pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
default_memory_request: default_memory_request:
type: string type: string
pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$' pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
max_cpu_request: max_cpu_request:
type: string type: string
pattern: '^(\d+m|\d+(\.\d{1,3})?)$' pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
max_memory_request: max_memory_request:
type: string type: string
pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$' pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
min_cpu_limit: min_cpu_limit:
type: string type: string
pattern: '^(\d+m|\d+(\.\d{1,3})?)$' pattern: '^(\d+m|\d+(\.\d{1,3})?)$|^$'
min_memory_limit: min_memory_limit:
type: string type: string
pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$' pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$|^$'
timeouts: timeouts:
type: object type: object
properties: properties:
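The resource patterns above gain an `|^$` alternative, so empty strings for limits/requests no longer fail CRD validation (the same change is mirrored in the Go validation further down). A quick standalone check of that behaviour, with illustrative test values:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	oldCPU := regexp.MustCompile(`^(\d+m|\d+(\.\d{1,3})?)$`)
	newCPU := regexp.MustCompile(`^(\d+m|\d+(\.\d{1,3})?)$|^$`)

	for _, v := range []string{"250m", "1.5", ""} {
		fmt.Printf("%-6q old=%-5v new=%v\n", v, oldCPU.MatchString(v), newCPU.MatchString(v))
	}
	// "250m" and "1.5" match both patterns; "" only matches the new one.
}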
@ -461,7 +470,6 @@ spec:
type: string type: string
additional_secret_mount_path: additional_secret_mount_path:
type: string type: string
default: "/meta/credentials"
aws_region: aws_region:
type: string type: string
default: "eu-central-1" default: "eu-central-1"
@ -500,7 +508,7 @@ spec:
pattern: '^(\d+m|\d+(\.\d{1,3})?)$' pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
logical_backup_docker_image: logical_backup_docker_image:
type: string type: string
default: "registry.opensource.zalan.do/acid/logical-backup:v1.11.0" default: "ghcr.io/zalando/postgres-operator/logical-backup:v1.15.0"
logical_backup_google_application_credentials: logical_backup_google_application_credentials:
type: string type: string
logical_backup_job_prefix: logical_backup_job_prefix:
@ -523,6 +531,8 @@ spec:
type: string type: string
logical_backup_s3_bucket: logical_backup_s3_bucket:
type: string type: string
logical_backup_s3_bucket_prefix:
type: string
logical_backup_s3_endpoint: logical_backup_s3_endpoint:
type: string type: string
logical_backup_s3_region: logical_backup_s3_region:

View File

@ -19,7 +19,7 @@ spec:
serviceAccountName: postgres-operator serviceAccountName: postgres-operator
containers: containers:
- name: postgres-operator - name: postgres-operator
image: registry.opensource.zalan.do/acid/postgres-operator:v1.11.0 image: ghcr.io/zalando/postgres-operator:v1.15.0
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
resources: resources:
requests: requests:

View File

@ -3,7 +3,7 @@ kind: OperatorConfiguration
metadata: metadata:
name: postgresql-operator-default-configuration name: postgresql-operator-default-configuration
configuration: configuration:
docker_image: ghcr.io/zalando/spilo-16:3.2-p2 docker_image: ghcr.io/zalando/spilo-17:4.0-p3
# enable_crd_registration: true # enable_crd_registration: true
# crd_categories: # crd_categories:
# - all # - all
@ -36,11 +36,11 @@ configuration:
replication_username: standby replication_username: standby
super_username: postgres super_username: postgres
major_version_upgrade: major_version_upgrade:
major_version_upgrade_mode: "off" major_version_upgrade_mode: "manual"
# major_version_upgrade_team_allow_list: # major_version_upgrade_team_allow_list:
# - acid # - acid
minimal_major_version: "12" minimal_major_version: "13"
target_major_version: "16" target_major_version: "17"
kubernetes: kubernetes:
# additional_pod_capabilities: # additional_pod_capabilities:
# - "SYS_NICE" # - "SYS_NICE"
@ -59,10 +59,12 @@ configuration:
# enable_cross_namespace_secret: "false" # enable_cross_namespace_secret: "false"
enable_finalizers: false enable_finalizers: false
enable_init_containers: true enable_init_containers: true
enable_owner_references: false
enable_persistent_volume_claim_deletion: true enable_persistent_volume_claim_deletion: true
enable_pod_antiaffinity: false enable_pod_antiaffinity: false
enable_pod_disruption_budget: true enable_pod_disruption_budget: true
enable_readiness_probe: false enable_readiness_probe: false
enable_secrets_deletion: true
enable_sidecars: true enable_sidecars: true
# ignored_annotations: # ignored_annotations:
# - k8s.v1.cni.cncf.io/network-status # - k8s.v1.cni.cncf.io/network-status
@ -85,6 +87,7 @@ configuration:
# status: ready # status: ready
# node_readiness_label_merge: "OR" # node_readiness_label_merge: "OR"
oauth_token_secret_name: postgresql-operator oauth_token_secret_name: postgresql-operator
pdb_master_label_selector: true
pdb_name_format: "postgres-{cluster}-pdb" pdb_name_format: "postgres-{cluster}-pdb"
persistent_volume_claim_retention_policy: persistent_volume_claim_retention_policy:
when_deleted: "retain" when_deleted: "retain"
@ -165,12 +168,13 @@ configuration:
# logical_backup_cpu_request: "" # logical_backup_cpu_request: ""
# logical_backup_memory_limit: "" # logical_backup_memory_limit: ""
# logical_backup_memory_request: "" # logical_backup_memory_request: ""
logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.11.0" logical_backup_docker_image: "ghcr.io/zalando/postgres-operator/logical-backup:v1.15.0"
# logical_backup_google_application_credentials: "" # logical_backup_google_application_credentials: ""
logical_backup_job_prefix: "logical-backup-" logical_backup_job_prefix: "logical-backup-"
logical_backup_provider: "s3" logical_backup_provider: "s3"
# logical_backup_s3_access_key_id: "" # logical_backup_s3_access_key_id: ""
logical_backup_s3_bucket: "my-bucket-url" logical_backup_s3_bucket: "my-bucket-url"
# logical_backup_s3_bucket_prefix: "spilo"
# logical_backup_s3_endpoint: "" # logical_backup_s3_endpoint: ""
# logical_backup_s3_region: "" # logical_backup_s3_region: ""
# logical_backup_s3_secret_access_key: "" # logical_backup_s3_secret_access_key: ""

View File

@ -85,10 +85,14 @@ spec:
- mountPath - mountPath
- volumeSource - volumeSource
properties: properties:
isSubPathExpr:
type: boolean
name: name:
type: string type: string
mountPath: mountPath:
type: string type: string
subPath:
type: string
targetContainers: targetContainers:
type: array type: array
nullable: true nullable: true
@ -97,8 +101,6 @@ spec:
volumeSource: volumeSource:
type: object type: object
x-kubernetes-preserve-unknown-fields: true x-kubernetes-preserve-unknown-fields: true
subPath:
type: string
allowedSourceRanges: allowedSourceRanges:
type: array type: array
nullable: true nullable: true
@ -213,6 +215,8 @@ spec:
items: items:
type: object type: object
x-kubernetes-preserve-unknown-fields: true x-kubernetes-preserve-unknown-fields: true
logicalBackupRetention:
type: string
logicalBackupSchedule: logicalBackupSchedule:
type: string type: string
pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$' pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
@ -220,7 +224,7 @@ spec:
type: array type: array
items: items:
type: string type: string
pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$' pattern: '^\ *((Mon|Tue|Wed|Thu|Fri|Sat|Sun):(2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))-((2[0-3]|[01]?\d):([0-5]?\d)|(2[0-3]|[01]?\d):([0-5]?\d))\ *$'
masterServiceAnnotations: masterServiceAnnotations:
type: object type: object
additionalProperties: additionalProperties:
@ -369,12 +373,11 @@ spec:
version: version:
type: string type: string
enum: enum:
- "11"
- "12"
- "13" - "13"
- "14" - "14"
- "15" - "15"
- "16" - "16"
- "17"
parameters: parameters:
type: object type: object
additionalProperties: additionalProperties:
@ -509,6 +512,9 @@ spec:
type: string type: string
batchSize: batchSize:
type: integer type: integer
cpu:
type: string
pattern: '^(\d+m|\d+(\.\d{1,3})?)$'
database: database:
type: string type: string
enableRecovery: enableRecovery:
@ -517,6 +523,9 @@ spec:
type: object type: object
additionalProperties: additionalProperties:
type: string type: string
memory:
type: string
pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
tables: tables:
type: object type: object
additionalProperties: additionalProperties:
@ -528,6 +537,8 @@ spec:
type: string type: string
idColumn: idColumn:
type: string type: string
ignoreRecovery:
type: boolean
payloadColumn: payloadColumn:
type: string type: string
recoveryEventType: recoveryEventType:
@ -630,6 +641,8 @@ spec:
required: required:
- size - size
properties: properties:
isSubPathExpr:
type: boolean
iops: iops:
type: integer type: integer
selector: selector:

View File

@ -8,7 +8,7 @@ spec:
size: 1Gi size: 1Gi
numberOfInstances: 1 numberOfInstances: 1
postgresql: postgresql:
version: "16" version: "17"
# Make this a standby cluster and provide either the s3 bucket path of source cluster or the remote primary host for continuous streaming. # Make this a standby cluster and provide either the s3 bucket path of source cluster or the remote primary host for continuous streaming.
standby: standby:
# s3_wal_path: "s3://mybucket/spilo/acid-minimal-cluster/abcd1234-2a4b-4b2a-8c9c-c1234defg567/wal/14/" # s3_wal_path: "s3://mybucket/spilo/acid-minimal-cluster/abcd1234-2a4b-4b2a-8c9c-c1234defg567/wal/14/"

View File

@ -146,12 +146,18 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
Type: "object", Type: "object",
Required: []string{"name", "mountPath", "volumeSource"}, Required: []string{"name", "mountPath", "volumeSource"},
Properties: map[string]apiextv1.JSONSchemaProps{ Properties: map[string]apiextv1.JSONSchemaProps{
"isSubPathExpr": {
Type: "boolean",
},
"name": { "name": {
Type: "string", Type: "string",
}, },
"mountPath": { "mountPath": {
Type: "string", Type: "string",
}, },
"subPath": {
Type: "string",
},
"targetContainers": { "targetContainers": {
Type: "array", Type: "array",
Nullable: true, Nullable: true,
@ -165,9 +171,6 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
Type: "object", Type: "object",
XPreserveUnknownFields: util.True(), XPreserveUnknownFields: util.True(),
}, },
"subPath": {
Type: "string",
},
}, },
}, },
}, },
@ -343,6 +346,9 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
}, },
}, },
}, },
"logicalBackupRetention": {
Type: "string",
},
"logicalBackupSchedule": { "logicalBackupSchedule": {
Type: "string", Type: "string",
Pattern: "^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$", Pattern: "^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$",
@ -589,12 +595,6 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
"version": { "version": {
Type: "string", Type: "string",
Enum: []apiextv1.JSON{ Enum: []apiextv1.JSON{
{
Raw: []byte(`"11"`),
},
{
Raw: []byte(`"12"`),
},
{ {
Raw: []byte(`"13"`), Raw: []byte(`"13"`),
}, },
@ -607,6 +607,9 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
{ {
Raw: []byte(`"16"`), Raw: []byte(`"16"`),
}, },
{
Raw: []byte(`"17"`),
},
}, },
}, },
"parameters": { "parameters": {
@ -1027,6 +1030,9 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
Type: "object", Type: "object",
Required: []string{"size"}, Required: []string{"size"},
Properties: map[string]apiextv1.JSONSchemaProps{ Properties: map[string]apiextv1.JSONSchemaProps{
"isSubPathExpr": {
Type: "boolean",
},
"iops": { "iops": {
Type: "integer", Type: "integer",
}, },
@ -1159,6 +1165,7 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
}, },
"enable_spilo_wal_path_compat": { "enable_spilo_wal_path_compat": {
Type: "boolean", Type: "boolean",
Description: "deprecated",
}, },
"enable_team_id_clustername_prefix": { "enable_team_id_clustername_prefix": {
Type: "boolean", Type: "boolean",
@ -1320,6 +1327,9 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
"enable_init_containers": { "enable_init_containers": {
Type: "boolean", Type: "boolean",
}, },
"enable_owner_references": {
Type: "boolean",
},
"enable_persistent_volume_claim_deletion": { "enable_persistent_volume_claim_deletion": {
Type: "boolean", Type: "boolean",
}, },
@ -1332,6 +1342,9 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
"enable_readiness_probe": { "enable_readiness_probe": {
Type: "boolean", Type: "boolean",
}, },
"enable_secrets_deletion": {
Type: "boolean",
},
"enable_sidecars": { "enable_sidecars": {
Type: "boolean", Type: "boolean",
}, },
@ -1561,35 +1574,35 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
Properties: map[string]apiextv1.JSONSchemaProps{ Properties: map[string]apiextv1.JSONSchemaProps{
"default_cpu_limit": { "default_cpu_limit": {
Type: "string", Type: "string",
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$", Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
}, },
"default_cpu_request": { "default_cpu_request": {
Type: "string", Type: "string",
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$", Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
}, },
"default_memory_limit": { "default_memory_limit": {
Type: "string", Type: "string",
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$", Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
}, },
"default_memory_request": { "default_memory_request": {
Type: "string", Type: "string",
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$", Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
}, },
"max_cpu_request": { "max_cpu_request": {
Type: "string", Type: "string",
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$", Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
}, },
"max_memory_request": { "max_memory_request": {
Type: "string", Type: "string",
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$", Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
}, },
"min_cpu_limit": { "min_cpu_limit": {
Type: "string", Type: "string",
Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$", Pattern: "^(\\d+m|\\d+(\\.\\d{1,3})?)$|^$",
}, },
"min_memory_limit": { "min_memory_limit": {
Type: "string", Type: "string",
Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$", Pattern: "^(\\d+(e\\d+)?|\\d+(\\.\\d+)?(e\\d+)?[EPTGMK]i?)$|^$",
}, },
}, },
}, },
@ -1762,6 +1775,9 @@ var OperatorConfigCRDResourceValidation = apiextv1.CustomResourceValidation{
"logical_backup_s3_bucket": { "logical_backup_s3_bucket": {
Type: "string", Type: "string",
}, },
"logical_backup_s3_bucket_prefix": {
Type: "string",
},
"logical_backup_s3_endpoint": { "logical_backup_s3_endpoint": {
Type: "string", Type: "string",
}, },

View File

@ -47,14 +47,15 @@ type PostgresUsersConfiguration struct {
// MajorVersionUpgradeConfiguration defines how to execute major version upgrades of Postgres. // MajorVersionUpgradeConfiguration defines how to execute major version upgrades of Postgres.
type MajorVersionUpgradeConfiguration struct { type MajorVersionUpgradeConfiguration struct {
MajorVersionUpgradeMode string `json:"major_version_upgrade_mode" default:"off"` // off - no actions, manual - manifest triggers action, full - manifest and minimal version violation trigger upgrade MajorVersionUpgradeMode string `json:"major_version_upgrade_mode" default:"manual"` // off - no actions, manual - manifest triggers action, full - manifest and minimal version violation trigger upgrade
MajorVersionUpgradeTeamAllowList []string `json:"major_version_upgrade_team_allow_list,omitempty"` MajorVersionUpgradeTeamAllowList []string `json:"major_version_upgrade_team_allow_list,omitempty"`
MinimalMajorVersion string `json:"minimal_major_version" default:"12"` MinimalMajorVersion string `json:"minimal_major_version" default:"13"`
TargetMajorVersion string `json:"target_major_version" default:"16"` TargetMajorVersion string `json:"target_major_version" default:"17"`
} }
// KubernetesMetaConfiguration defines k8s conf required for all Postgres clusters and the operator itself // KubernetesMetaConfiguration defines k8s conf required for all Postgres clusters and the operator itself
type KubernetesMetaConfiguration struct { type KubernetesMetaConfiguration struct {
EnableOwnerReferences *bool `json:"enable_owner_references,omitempty"`
PodServiceAccountName string `json:"pod_service_account_name,omitempty"` PodServiceAccountName string `json:"pod_service_account_name,omitempty"`
// TODO: change it to the proper json // TODO: change it to the proper json
PodServiceAccountDefinition string `json:"pod_service_account_definition,omitempty"` PodServiceAccountDefinition string `json:"pod_service_account_definition,omitempty"`
@ -102,6 +103,7 @@ type KubernetesMetaConfiguration struct {
PodAntiAffinityTopologyKey string `json:"pod_antiaffinity_topology_key,omitempty"` PodAntiAffinityTopologyKey string `json:"pod_antiaffinity_topology_key,omitempty"`
PodManagementPolicy string `json:"pod_management_policy,omitempty"` PodManagementPolicy string `json:"pod_management_policy,omitempty"`
PersistentVolumeClaimRetentionPolicy map[string]string `json:"persistent_volume_claim_retention_policy,omitempty"` PersistentVolumeClaimRetentionPolicy map[string]string `json:"persistent_volume_claim_retention_policy,omitempty"`
EnableSecretsDeletion *bool `json:"enable_secrets_deletion,omitempty"`
EnablePersistentVolumeClaimDeletion *bool `json:"enable_persistent_volume_claim_deletion,omitempty"` EnablePersistentVolumeClaimDeletion *bool `json:"enable_persistent_volume_claim_deletion,omitempty"`
EnableReadinessProbe bool `json:"enable_readiness_probe,omitempty"` EnableReadinessProbe bool `json:"enable_readiness_probe,omitempty"`
EnableCrossNamespaceSecret bool `json:"enable_cross_namespace_secret,omitempty"` EnableCrossNamespaceSecret bool `json:"enable_cross_namespace_secret,omitempty"`
@ -158,7 +160,7 @@ type AWSGCPConfiguration struct {
LogS3Bucket string `json:"log_s3_bucket,omitempty"` LogS3Bucket string `json:"log_s3_bucket,omitempty"`
KubeIAMRole string `json:"kube_iam_role,omitempty"` KubeIAMRole string `json:"kube_iam_role,omitempty"`
AdditionalSecretMount string `json:"additional_secret_mount,omitempty"` AdditionalSecretMount string `json:"additional_secret_mount,omitempty"`
AdditionalSecretMountPath string `json:"additional_secret_mount_path" default:"/meta/credentials"` AdditionalSecretMountPath string `json:"additional_secret_mount_path,omitempty"`
EnableEBSGp3Migration bool `json:"enable_ebs_gp3_migration" default:"false"` EnableEBSGp3Migration bool `json:"enable_ebs_gp3_migration" default:"false"`
EnableEBSGp3MigrationMaxSize int64 `json:"enable_ebs_gp3_migration_max_size" default:"1000"` EnableEBSGp3MigrationMaxSize int64 `json:"enable_ebs_gp3_migration_max_size" default:"1000"`
} }
@ -228,6 +230,7 @@ type OperatorLogicalBackupConfiguration struct {
AzureStorageContainer string `json:"logical_backup_azure_storage_container,omitempty"` AzureStorageContainer string `json:"logical_backup_azure_storage_container,omitempty"`
AzureStorageAccountKey string `json:"logical_backup_azure_storage_account_key,omitempty"` AzureStorageAccountKey string `json:"logical_backup_azure_storage_account_key,omitempty"`
S3Bucket string `json:"logical_backup_s3_bucket,omitempty"` S3Bucket string `json:"logical_backup_s3_bucket,omitempty"`
S3BucketPrefix string `json:"logical_backup_s3_bucket_prefix,omitempty"`
S3Region string `json:"logical_backup_s3_region,omitempty"` S3Region string `json:"logical_backup_s3_region,omitempty"`
S3Endpoint string `json:"logical_backup_s3_endpoint,omitempty"` S3Endpoint string `json:"logical_backup_s3_endpoint,omitempty"`
S3AccessKeyID string `json:"logical_backup_s3_access_key_id,omitempty"` S3AccessKeyID string `json:"logical_backup_s3_access_key_id,omitempty"`

View File

@ -76,6 +76,7 @@ type PostgresSpec struct {
PodPriorityClassName string `json:"podPriorityClassName,omitempty"` PodPriorityClassName string `json:"podPriorityClassName,omitempty"`
ShmVolume *bool `json:"enableShmVolume,omitempty"` ShmVolume *bool `json:"enableShmVolume,omitempty"`
EnableLogicalBackup bool `json:"enableLogicalBackup,omitempty"` EnableLogicalBackup bool `json:"enableLogicalBackup,omitempty"`
LogicalBackupRetention string `json:"logicalBackupRetention,omitempty"`
LogicalBackupSchedule string `json:"logicalBackupSchedule,omitempty"` LogicalBackupSchedule string `json:"logicalBackupSchedule,omitempty"`
StandbyCluster *StandbyDescription `json:"standby,omitempty"` StandbyCluster *StandbyDescription `json:"standby,omitempty"`
PodAnnotations map[string]string `json:"podAnnotations,omitempty"` PodAnnotations map[string]string `json:"podAnnotations,omitempty"`
@ -132,6 +133,7 @@ type Volume struct {
Size string `json:"size"` Size string `json:"size"`
StorageClass string `json:"storageClass,omitempty"` StorageClass string `json:"storageClass,omitempty"`
SubPath string `json:"subPath,omitempty"` SubPath string `json:"subPath,omitempty"`
IsSubPathExpr *bool `json:"isSubPathExpr,omitempty"`
Iops *int64 `json:"iops,omitempty"` Iops *int64 `json:"iops,omitempty"`
Throughput *int64 `json:"throughput,omitempty"` Throughput *int64 `json:"throughput,omitempty"`
VolumeType string `json:"type,omitempty"` VolumeType string `json:"type,omitempty"`
@ -142,6 +144,7 @@ type AdditionalVolume struct {
Name string `json:"name"` Name string `json:"name"`
MountPath string `json:"mountPath"` MountPath string `json:"mountPath"`
SubPath string `json:"subPath,omitempty"` SubPath string `json:"subPath,omitempty"`
IsSubPathExpr *bool `json:"isSubPathExpr,omitempty"`
TargetContainers []string `json:"targetContainers"` TargetContainers []string `json:"targetContainers"`
VolumeSource v1.VolumeSource `json:"volumeSource"` VolumeSource v1.VolumeSource `json:"volumeSource"`
} }
@ -217,6 +220,7 @@ type Sidecar struct {
DockerImage string `json:"image,omitempty"` DockerImage string `json:"image,omitempty"`
Ports []v1.ContainerPort `json:"ports,omitempty"` Ports []v1.ContainerPort `json:"ports,omitempty"`
Env []v1.EnvVar `json:"env,omitempty"` Env []v1.EnvVar `json:"env,omitempty"`
Command []string `json:"command,omitempty"`
} }
// UserFlags defines flags (such as superuser, nologin) that could be assigned to individual users // UserFlags defines flags (such as superuser, nologin) that could be assigned to individual users
@ -255,6 +259,8 @@ type Stream struct {
Tables map[string]StreamTable `json:"tables"` Tables map[string]StreamTable `json:"tables"`
Filter map[string]*string `json:"filter,omitempty"` Filter map[string]*string `json:"filter,omitempty"`
BatchSize *uint32 `json:"batchSize,omitempty"` BatchSize *uint32 `json:"batchSize,omitempty"`
CPU *string `json:"cpu,omitempty"`
Memory *string `json:"memory,omitempty"`
EnableRecovery *bool `json:"enableRecovery,omitempty"` EnableRecovery *bool `json:"enableRecovery,omitempty"`
} }
@ -262,6 +268,7 @@ type Stream struct {
type StreamTable struct { type StreamTable struct {
EventType string `json:"eventType"` EventType string `json:"eventType"`
RecoveryEventType string `json:"recoveryEventType,omitempty"` RecoveryEventType string `json:"recoveryEventType,omitempty"`
IgnoreRecovery *bool `json:"ignoreRecovery,omitempty"`
IdColumn *string `json:"idColumn,omitempty"` IdColumn *string `json:"idColumn,omitempty"`
PayloadColumn *string `json:"payloadColumn,omitempty"` PayloadColumn *string `json:"payloadColumn,omitempty"`
} }

View File

@ -123,6 +123,8 @@ var maintenanceWindows = []struct {
{"expect error as weekday is empty", []byte(`":00:00-10:00"`), MaintenanceWindow{}, errors.New(`could not parse weekday: incorrect weekday`)}, {"expect error as weekday is empty", []byte(`":00:00-10:00"`), MaintenanceWindow{}, errors.New(`could not parse weekday: incorrect weekday`)},
{"expect error as maintenance window set seconds", []byte(`"Mon:00:00:00-10:00:00"`), MaintenanceWindow{}, errors.New(`incorrect maintenance window format`)}, {"expect error as maintenance window set seconds", []byte(`"Mon:00:00:00-10:00:00"`), MaintenanceWindow{}, errors.New(`incorrect maintenance window format`)},
{"expect error as 'To' time set seconds", []byte(`"Mon:00:00-00:00:00"`), MaintenanceWindow{}, errors.New("could not parse end time: incorrect time format")}, {"expect error as 'To' time set seconds", []byte(`"Mon:00:00-00:00:00"`), MaintenanceWindow{}, errors.New("could not parse end time: incorrect time format")},
// ideally, should be implemented
{"expect error as 'To' has a weekday", []byte(`"Mon:00:00-Fri:00:00"`), MaintenanceWindow{}, errors.New("could not parse end time: incorrect time format")},
{"expect error as 'To' time is missing", []byte(`"Mon:00:00"`), MaintenanceWindow{}, errors.New("incorrect maintenance window format")}} {"expect error as 'To' time is missing", []byte(`"Mon:00:00"`), MaintenanceWindow{}, errors.New("incorrect maintenance window format")}}
var postgresStatus = []struct { var postgresStatus = []struct {
@ -217,7 +219,7 @@ var unmarshalCluster = []struct {
"127.0.0.1/32" "127.0.0.1/32"
], ],
"postgresql": { "postgresql": {
"version": "16", "version": "17",
"parameters": { "parameters": {
"shared_buffers": "32MB", "shared_buffers": "32MB",
"max_connections": "10", "max_connections": "10",
@ -277,7 +279,7 @@ var unmarshalCluster = []struct {
}, },
Spec: PostgresSpec{ Spec: PostgresSpec{
PostgresqlParam: PostgresqlParam{ PostgresqlParam: PostgresqlParam{
PgVersion: "16", PgVersion: "17",
Parameters: map[string]string{ Parameters: map[string]string{
"shared_buffers": "32MB", "shared_buffers": "32MB",
"max_connections": "10", "max_connections": "10",
@ -337,7 +339,7 @@ var unmarshalCluster = []struct {
}, },
Error: "", Error: "",
}, },
marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"16","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"pod_priority_class_name":"spilo-pod-priority","volume":{"size":"5Gi","storageClass":"SSD", "subPath": "subdir"},"enableShmVolume":false,"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"acid","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`), marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"17","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"pod_priority_class_name":"spilo-pod-priority","volume":{"size":"5Gi","storageClass":"SSD", "subPath": "subdir"},"enableShmVolume":false,"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"acid","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`),
err: nil}, err: nil},
{ {
about: "example with clone", about: "example with clone",
@ -402,7 +404,7 @@ var postgresqlList = []struct {
out PostgresqlList out PostgresqlList
err error err error
}{ }{
{"expect success", []byte(`{"apiVersion":"v1","items":[{"apiVersion":"acid.zalan.do/v1","kind":"Postgresql","metadata":{"labels":{"team":"acid"},"name":"acid-testcluster42","namespace":"default","resourceVersion":"30446957","selfLink":"/apis/acid.zalan.do/v1/namespaces/default/postgresqls/acid-testcluster42","uid":"857cd208-33dc-11e7-b20a-0699041e4b03"},"spec":{"allowedSourceRanges":["185.85.220.0/22"],"numberOfInstances":1,"postgresql":{"version":"16"},"teamId":"acid","volume":{"size":"10Gi"}},"status":{"PostgresClusterStatus":"Running"}}],"kind":"List","metadata":{},"resourceVersion":"","selfLink":""}`), {"expect success", []byte(`{"apiVersion":"v1","items":[{"apiVersion":"acid.zalan.do/v1","kind":"Postgresql","metadata":{"labels":{"team":"acid"},"name":"acid-testcluster42","namespace":"default","resourceVersion":"30446957","selfLink":"/apis/acid.zalan.do/v1/namespaces/default/postgresqls/acid-testcluster42","uid":"857cd208-33dc-11e7-b20a-0699041e4b03"},"spec":{"allowedSourceRanges":["185.85.220.0/22"],"numberOfInstances":1,"postgresql":{"version":"17"},"teamId":"acid","volume":{"size":"10Gi"}},"status":{"PostgresClusterStatus":"Running"}}],"kind":"List","metadata":{},"resourceVersion":"","selfLink":""}`),
PostgresqlList{ PostgresqlList{
TypeMeta: metav1.TypeMeta{ TypeMeta: metav1.TypeMeta{
Kind: "List", Kind: "List",
@ -423,7 +425,7 @@ var postgresqlList = []struct {
}, },
Spec: PostgresSpec{ Spec: PostgresSpec{
ClusterName: "testcluster42", ClusterName: "testcluster42",
PostgresqlParam: PostgresqlParam{PgVersion: "16"}, PostgresqlParam: PostgresqlParam{PgVersion: "17"},
Volume: Volume{Size: "10Gi"}, Volume: Volume{Size: "10Gi"},
TeamID: "acid", TeamID: "acid",
AllowedSourceRanges: []string{"185.85.220.0/22"}, AllowedSourceRanges: []string{"185.85.220.0/22"},

View File

@ -2,7 +2,7 @@
// +build !ignore_autogenerated // +build !ignore_autogenerated
/* /*
Copyright 2024 Compose, Zalando SE Copyright 2025 Compose, Zalando SE
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal
@ -53,6 +53,11 @@ func (in *AWSGCPConfiguration) DeepCopy() *AWSGCPConfiguration {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AdditionalVolume) DeepCopyInto(out *AdditionalVolume) { func (in *AdditionalVolume) DeepCopyInto(out *AdditionalVolume) {
*out = *in *out = *in
if in.IsSubPathExpr != nil {
in, out := &in.IsSubPathExpr, &out.IsSubPathExpr
*out = new(bool)
**out = **in
}
if in.TargetContainers != nil { if in.TargetContainers != nil {
in, out := &in.TargetContainers, &out.TargetContainers in, out := &in.TargetContainers, &out.TargetContainers
*out = make([]string, len(*in)) *out = make([]string, len(*in))
@ -153,6 +158,11 @@ func (in *ConnectionPoolerConfiguration) DeepCopy() *ConnectionPoolerConfigurati
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubernetesMetaConfiguration) DeepCopyInto(out *KubernetesMetaConfiguration) { func (in *KubernetesMetaConfiguration) DeepCopyInto(out *KubernetesMetaConfiguration) {
*out = *in *out = *in
if in.EnableOwnerReferences != nil {
in, out := &in.EnableOwnerReferences, &out.EnableOwnerReferences
*out = new(bool)
**out = **in
}
if in.SpiloAllowPrivilegeEscalation != nil { if in.SpiloAllowPrivilegeEscalation != nil {
in, out := &in.SpiloAllowPrivilegeEscalation, &out.SpiloAllowPrivilegeEscalation in, out := &in.SpiloAllowPrivilegeEscalation, &out.SpiloAllowPrivilegeEscalation
*out = new(bool) *out = new(bool)
@ -272,6 +282,11 @@ func (in *KubernetesMetaConfiguration) DeepCopyInto(out *KubernetesMetaConfigura
(*out)[key] = val (*out)[key] = val
} }
} }
if in.EnableSecretsDeletion != nil {
in, out := &in.EnableSecretsDeletion, &out.EnableSecretsDeletion
*out = new(bool)
**out = **in
}
if in.EnablePersistentVolumeClaimDeletion != nil { if in.EnablePersistentVolumeClaimDeletion != nil {
in, out := &in.EnablePersistentVolumeClaimDeletion, &out.EnablePersistentVolumeClaimDeletion in, out := &in.EnablePersistentVolumeClaimDeletion, &out.EnablePersistentVolumeClaimDeletion
*out = new(bool) *out = new(bool)
@ -1262,6 +1277,11 @@ func (in *Sidecar) DeepCopyInto(out *Sidecar) {
(*in)[i].DeepCopyInto(&(*out)[i]) (*in)[i].DeepCopyInto(&(*out)[i])
} }
} }
if in.Command != nil {
in, out := &in.Command, &out.Command
*out = make([]string, len(*in))
copy(*out, *in)
}
return return
} }
@ -1321,6 +1341,16 @@ func (in *Stream) DeepCopyInto(out *Stream) {
*out = new(uint32) *out = new(uint32)
**out = **in **out = **in
} }
if in.CPU != nil {
in, out := &in.CPU, &out.CPU
*out = new(string)
**out = **in
}
if in.Memory != nil {
in, out := &in.Memory, &out.Memory
*out = new(string)
**out = **in
}
if in.EnableRecovery != nil { if in.EnableRecovery != nil {
in, out := &in.EnableRecovery, &out.EnableRecovery in, out := &in.EnableRecovery, &out.EnableRecovery
*out = new(bool) *out = new(bool)
@ -1342,6 +1372,11 @@ func (in *Stream) DeepCopy() *Stream {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *StreamTable) DeepCopyInto(out *StreamTable) { func (in *StreamTable) DeepCopyInto(out *StreamTable) {
*out = *in *out = *in
if in.IgnoreRecovery != nil {
in, out := &in.IgnoreRecovery, &out.IgnoreRecovery
*out = new(bool)
**out = **in
}
if in.IdColumn != nil { if in.IdColumn != nil {
in, out := &in.IdColumn, &out.IdColumn in, out := &in.IdColumn, &out.IdColumn
*out = new(string) *out = new(string)
@ -1442,6 +1477,11 @@ func (in *Volume) DeepCopyInto(out *Volume) {
*out = new(metav1.LabelSelector) *out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out) (*in).DeepCopyInto(*out)
} }
if in.IsSubPathExpr != nil {
in, out := &in.IsSubPathExpr, &out.IsSubPathExpr
*out = new(bool)
**out = **in
}
if in.Iops != nil { if in.Iops != nil {
in, out := &in.Iops, &out.Iops in, out := &in.Iops, &out.Iops
*out = new(int64) *out = new(int64)

View File

@ -1,6 +1,7 @@
package v1 package v1
import ( import (
acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
) )
@ -89,3 +90,8 @@ type DBAuth struct {
UserKey string `json:"userKey,omitempty"` UserKey string `json:"userKey,omitempty"`
PasswordKey string `json:"passwordKey,omitempty"` PasswordKey string `json:"passwordKey,omitempty"`
} }
type Slot struct {
Slot map[string]string `json:"slot"`
Publication map[string]acidv1.StreamTable `json:"publication"`
}
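A hedged sketch of how the new Slot type might be populated. The slot settings mirror the permanent-slot format used in the test manifests elsewhere in this diff; the publication table and column names are purely illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
	zalandov1 "github.com/zalando/postgres-operator/pkg/apis/zalando.org/v1"
)

func main() {
	id := "id"
	slot := zalandov1.Slot{
		// logical slot settings, same shape as the permanent slots in the test fixtures
		Slot: map[string]string{"type": "logical", "database": "foo", "plugin": "pgoutput"},
		// table -> event stream table mapping; names are hypothetical
		Publication: map[string]acidv1.StreamTable{
			"data.orders": {EventType: "order-event", IdColumn: &id},
		},
	}
	out, _ := json.MarshalIndent(slot, "", "  ")
	fmt.Println(string(out))
}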

View File

@ -3,7 +3,6 @@ package cluster
// Postgres CustomResourceDefinition object i.e. Spilo // Postgres CustomResourceDefinition object i.e. Spilo
import ( import (
"context"
"database/sql" "database/sql"
"encoding/json" "encoding/json"
"fmt" "fmt"
@ -15,6 +14,7 @@ import (
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1" acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
zalandov1 "github.com/zalando/postgres-operator/pkg/apis/zalando.org/v1"
"github.com/zalando/postgres-operator/pkg/generated/clientset/versioned/scheme" "github.com/zalando/postgres-operator/pkg/generated/clientset/versioned/scheme"
"github.com/zalando/postgres-operator/pkg/spec" "github.com/zalando/postgres-operator/pkg/spec"
@ -28,6 +28,7 @@ import (
"github.com/zalando/postgres-operator/pkg/util/users" "github.com/zalando/postgres-operator/pkg/util/users"
"github.com/zalando/postgres-operator/pkg/util/volumes" "github.com/zalando/postgres-operator/pkg/util/volumes"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
batchv1 "k8s.io/api/batch/v1"
v1 "k8s.io/api/core/v1" v1 "k8s.io/api/core/v1"
policyv1 "k8s.io/api/policy/v1" policyv1 "k8s.io/api/policy/v1"
rbacv1 "k8s.io/api/rbac/v1" rbacv1 "k8s.io/api/rbac/v1"
@ -60,11 +61,16 @@ type Config struct {
type kubeResources struct { type kubeResources struct {
Services map[PostgresRole]*v1.Service Services map[PostgresRole]*v1.Service
Endpoints map[PostgresRole]*v1.Endpoints Endpoints map[PostgresRole]*v1.Endpoints
PatroniEndpoints map[string]*v1.Endpoints
PatroniConfigMaps map[string]*v1.ConfigMap
Secrets map[types.UID]*v1.Secret Secrets map[types.UID]*v1.Secret
Statefulset *appsv1.StatefulSet Statefulset *appsv1.StatefulSet
PodDisruptionBudget *policyv1.PodDisruptionBudget VolumeClaims map[types.UID]*v1.PersistentVolumeClaim
PrimaryPodDisruptionBudget *policyv1.PodDisruptionBudget
CriticalOpPodDisruptionBudget *policyv1.PodDisruptionBudget
LogicalBackupJob *batchv1.CronJob
Streams map[string]*zalandov1.FabricEventStream
//Pods are treated separately //Pods are treated separately
//PVCs are treated separately
} }
// Cluster describes postgresql cluster // Cluster describes postgresql cluster
@ -104,6 +110,13 @@ type compareStatefulsetResult struct {
replace bool replace bool
rollingUpdate bool rollingUpdate bool
reasons []string reasons []string
deletedPodAnnotations []string
}
type compareLogicalBackupJobResult struct {
match bool
reasons []string
deletedPodAnnotations []string
} }
// New creates a new cluster. This function should be called from a controller. // New creates a new cluster. This function should be called from a controller.
@ -132,7 +145,11 @@ func New(cfg Config, kubeClient k8sutil.KubernetesClient, pgSpec acidv1.Postgres
kubeResources: kubeResources{ kubeResources: kubeResources{
Secrets: make(map[types.UID]*v1.Secret), Secrets: make(map[types.UID]*v1.Secret),
Services: make(map[PostgresRole]*v1.Service), Services: make(map[PostgresRole]*v1.Service),
Endpoints: make(map[PostgresRole]*v1.Endpoints)}, Endpoints: make(map[PostgresRole]*v1.Endpoints),
PatroniEndpoints: make(map[string]*v1.Endpoints),
PatroniConfigMaps: make(map[string]*v1.ConfigMap),
VolumeClaims: make(map[types.UID]*v1.PersistentVolumeClaim),
Streams: make(map[string]*zalandov1.FabricEventStream)},
userSyncStrategy: users.DefaultUserSyncStrategy{ userSyncStrategy: users.DefaultUserSyncStrategy{
PasswordEncryption: passwordEncryption, PasswordEncryption: passwordEncryption,
RoleDeletionSuffix: cfg.OpConfig.RoleDeletionSuffix, RoleDeletionSuffix: cfg.OpConfig.RoleDeletionSuffix,
@ -327,14 +344,10 @@ func (c *Cluster) Create() (err error) {
c.logger.Infof("secrets have been successfully created") c.logger.Infof("secrets have been successfully created")
c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "Secrets", "The secrets have been successfully created") c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "Secrets", "The secrets have been successfully created")
if c.PodDisruptionBudget != nil { if err = c.createPodDisruptionBudgets(); err != nil {
return fmt.Errorf("pod disruption budget already exists in the cluster") return fmt.Errorf("could not create pod disruption budgets: %v", err)
} }
pdb, err := c.createPodDisruptionBudget() c.logger.Info("pod disruption budgets have been successfully created")
if err != nil {
return fmt.Errorf("could not create pod disruption budget: %v", err)
}
c.logger.Infof("pod disruption budget %q has been successfully created", util.NameFromMeta(pdb.ObjectMeta))
if c.Statefulset != nil { if c.Statefulset != nil {
return fmt.Errorf("statefulset already exists in the cluster") return fmt.Errorf("statefulset already exists in the cluster")
@ -355,6 +368,16 @@ func (c *Cluster) Create() (err error) {
c.logger.Infof("pods are ready") c.logger.Infof("pods are ready")
c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "StatefulSet", "Pods are ready") c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "StatefulSet", "Pods are ready")
// sync volume may already transition volumes to gp3, if iops/throughput or type is specified
if err = c.syncVolumes(); err != nil {
return err
}
// sync resources created by Patroni
if err = c.syncPatroniResources(); err != nil {
c.logger.Warnf("Patroni resources not yet synced: %v", err)
}
// create database objects unless we are running without pods or disabled // create database objects unless we are running without pods or disabled
// that feature explicitly // that feature explicitly
if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) { if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) {
@ -380,10 +403,6 @@ func (c *Cluster) Create() (err error) {
c.logger.Info("a k8s cron job for logical backup has been successfully created") c.logger.Info("a k8s cron job for logical backup has been successfully created")
} }
if err := c.listResources(); err != nil {
c.logger.Errorf("could not list resources: %v", err)
}
// Create connection pooler deployment and services if necessary. Since we // Create connection pooler deployment and services if necessary. Since we
// need to perform some operations with the database itself (e.g. install // need to perform some operations with the database itself (e.g. install
// lookup function), do it as the last step, when everything is available. // lookup function), do it as the last step, when everything is available.
@ -408,10 +427,15 @@ func (c *Cluster) Create() (err error) {
} }
} }
if err := c.listResources(); err != nil {
c.logger.Errorf("could not list resources: %v", err)
}
return nil return nil
} }
func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compareStatefulsetResult { func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compareStatefulsetResult {
deletedPodAnnotations := []string{}
reasons := make([]string, 0) reasons := make([]string, 0)
var match, needsRollUpdate, needsReplace bool var match, needsRollUpdate, needsReplace bool
@ -421,7 +445,12 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
match = false match = false
reasons = append(reasons, "new statefulset's number of replicas does not match the current one") reasons = append(reasons, "new statefulset's number of replicas does not match the current one")
} }
if changed, reason := c.compareAnnotations(c.Statefulset.Annotations, statefulSet.Annotations); changed { if !reflect.DeepEqual(c.Statefulset.OwnerReferences, statefulSet.OwnerReferences) {
match = false
needsReplace = true
reasons = append(reasons, "new statefulset's ownerReferences do not match")
}
if changed, reason := c.compareAnnotations(c.Statefulset.Annotations, statefulSet.Annotations, nil); changed {
match = false match = false
needsReplace = true needsReplace = true
reasons = append(reasons, "new statefulset's annotations do not match: "+reason) reasons = append(reasons, "new statefulset's annotations do not match: "+reason)
@ -432,14 +461,20 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
reasons = append(reasons, "new statefulset's pod management policy do not match") reasons = append(reasons, "new statefulset's pod management policy do not match")
} }
if c.Statefulset.Spec.PersistentVolumeClaimRetentionPolicy == nil {
c.Statefulset.Spec.PersistentVolumeClaimRetentionPolicy = &appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy{
WhenDeleted: appsv1.RetainPersistentVolumeClaimRetentionPolicyType,
WhenScaled: appsv1.RetainPersistentVolumeClaimRetentionPolicyType,
}
}
if !reflect.DeepEqual(c.Statefulset.Spec.PersistentVolumeClaimRetentionPolicy, statefulSet.Spec.PersistentVolumeClaimRetentionPolicy) { if !reflect.DeepEqual(c.Statefulset.Spec.PersistentVolumeClaimRetentionPolicy, statefulSet.Spec.PersistentVolumeClaimRetentionPolicy) {
match = false match = false
needsReplace = true needsReplace = true
reasons = append(reasons, "new statefulset's persistent volume claim retention policy do not match") reasons = append(reasons, "new statefulset's persistent volume claim retention policy do not match")
} }
needsRollUpdate, reasons = c.compareContainers("initContainers", c.Statefulset.Spec.Template.Spec.InitContainers, statefulSet.Spec.Template.Spec.InitContainers, needsRollUpdate, reasons) needsRollUpdate, reasons = c.compareContainers("statefulset initContainers", c.Statefulset.Spec.Template.Spec.InitContainers, statefulSet.Spec.Template.Spec.InitContainers, needsRollUpdate, reasons)
needsRollUpdate, reasons = c.compareContainers("containers", c.Statefulset.Spec.Template.Spec.Containers, statefulSet.Spec.Template.Spec.Containers, needsRollUpdate, reasons) needsRollUpdate, reasons = c.compareContainers("statefulset containers", c.Statefulset.Spec.Template.Spec.Containers, statefulSet.Spec.Template.Spec.Containers, needsRollUpdate, reasons)
if len(c.Statefulset.Spec.Template.Spec.Containers) == 0 { if len(c.Statefulset.Spec.Template.Spec.Containers) == 0 {
c.logger.Warningf("statefulset %q has no container", util.NameFromMeta(c.Statefulset.ObjectMeta)) c.logger.Warningf("statefulset %q has no container", util.NameFromMeta(c.Statefulset.ObjectMeta))
@ -489,10 +524,9 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
} }
} }
if changed, reason := c.compareAnnotations(c.Statefulset.Spec.Template.Annotations, statefulSet.Spec.Template.Annotations); changed { if changed, reason := c.compareAnnotations(c.Statefulset.Spec.Template.Annotations, statefulSet.Spec.Template.Annotations, &deletedPodAnnotations); changed {
match = false match = false
needsReplace = true needsReplace = true
needsRollUpdate = true
reasons = append(reasons, "new statefulset's pod template metadata annotations does not match "+reason) reasons = append(reasons, "new statefulset's pod template metadata annotations does not match "+reason)
} }
if !reflect.DeepEqual(c.Statefulset.Spec.Template.Spec.SecurityContext, statefulSet.Spec.Template.Spec.SecurityContext) { if !reflect.DeepEqual(c.Statefulset.Spec.Template.Spec.SecurityContext, statefulSet.Spec.Template.Spec.SecurityContext) {
@ -512,9 +546,9 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
reasons = append(reasons, fmt.Sprintf("new statefulset's name for volume %d does not match the current one", i)) reasons = append(reasons, fmt.Sprintf("new statefulset's name for volume %d does not match the current one", i))
continue continue
} }
if !reflect.DeepEqual(c.Statefulset.Spec.VolumeClaimTemplates[i].Annotations, statefulSet.Spec.VolumeClaimTemplates[i].Annotations) { if changed, reason := c.compareAnnotations(c.Statefulset.Spec.VolumeClaimTemplates[i].Annotations, statefulSet.Spec.VolumeClaimTemplates[i].Annotations, nil); changed {
needsReplace = true needsReplace = true
reasons = append(reasons, fmt.Sprintf("new statefulset's annotations for volume %q does not match the current one", name)) reasons = append(reasons, fmt.Sprintf("new statefulset's annotations for volume %q do not match the current ones: %s", name, reason))
} }
if !reflect.DeepEqual(c.Statefulset.Spec.VolumeClaimTemplates[i].Spec, statefulSet.Spec.VolumeClaimTemplates[i].Spec) { if !reflect.DeepEqual(c.Statefulset.Spec.VolumeClaimTemplates[i].Spec, statefulSet.Spec.VolumeClaimTemplates[i].Spec) {
name := c.Statefulset.Spec.VolumeClaimTemplates[i].Name name := c.Statefulset.Spec.VolumeClaimTemplates[i].Name
@ -550,7 +584,7 @@ func (c *Cluster) compareStatefulSetWith(statefulSet *appsv1.StatefulSet) *compa
match = false match = false
} }
return &compareStatefulsetResult{match: match, reasons: reasons, rollingUpdate: needsRollUpdate, replace: needsReplace} return &compareStatefulsetResult{match: match, reasons: reasons, rollingUpdate: needsRollUpdate, replace: needsReplace, deletedPodAnnotations: deletedPodAnnotations}
} }
type containerCondition func(a, b v1.Container) bool type containerCondition func(a, b v1.Container) bool
@ -571,30 +605,30 @@ func newCheck(msg string, cond containerCondition) containerCheck {
func (c *Cluster) compareContainers(description string, setA, setB []v1.Container, needsRollUpdate bool, reasons []string) (bool, []string) { func (c *Cluster) compareContainers(description string, setA, setB []v1.Container, needsRollUpdate bool, reasons []string) (bool, []string) {
if len(setA) != len(setB) { if len(setA) != len(setB) {
return true, append(reasons, fmt.Sprintf("new statefulset %s's length does not match the current ones", description)) return true, append(reasons, fmt.Sprintf("new %s's length does not match the current ones", description))
} }
checks := []containerCheck{ checks := []containerCheck{
newCheck("new statefulset %s's %s (index %d) name does not match the current one", newCheck("new %s's %s (index %d) name does not match the current one",
func(a, b v1.Container) bool { return a.Name != b.Name }), func(a, b v1.Container) bool { return a.Name != b.Name }),
newCheck("new statefulset %s's %s (index %d) readiness probe does not match the current one", newCheck("new %s's %s (index %d) readiness probe does not match the current one",
func(a, b v1.Container) bool { return !reflect.DeepEqual(a.ReadinessProbe, b.ReadinessProbe) }), func(a, b v1.Container) bool { return !reflect.DeepEqual(a.ReadinessProbe, b.ReadinessProbe) }),
newCheck("new statefulset %s's %s (index %d) ports do not match the current one", newCheck("new %s's %s (index %d) ports do not match the current one",
func(a, b v1.Container) bool { return !comparePorts(a.Ports, b.Ports) }), func(a, b v1.Container) bool { return !comparePorts(a.Ports, b.Ports) }),
newCheck("new statefulset %s's %s (index %d) resources do not match the current ones", newCheck("new %s's %s (index %d) resources do not match the current ones",
func(a, b v1.Container) bool { return !compareResources(&a.Resources, &b.Resources) }), func(a, b v1.Container) bool { return !compareResources(&a.Resources, &b.Resources) }),
newCheck("new statefulset %s's %s (index %d) environment does not match the current one", newCheck("new %s's %s (index %d) environment does not match the current one",
func(a, b v1.Container) bool { return !compareEnv(a.Env, b.Env) }), func(a, b v1.Container) bool { return !compareEnv(a.Env, b.Env) }),
newCheck("new statefulset %s's %s (index %d) environment sources do not match the current one", newCheck("new %s's %s (index %d) environment sources do not match the current one",
func(a, b v1.Container) bool { return !reflect.DeepEqual(a.EnvFrom, b.EnvFrom) }), func(a, b v1.Container) bool { return !reflect.DeepEqual(a.EnvFrom, b.EnvFrom) }),
newCheck("new statefulset %s's %s (index %d) security context does not match the current one", newCheck("new %s's %s (index %d) security context does not match the current one",
func(a, b v1.Container) bool { return !reflect.DeepEqual(a.SecurityContext, b.SecurityContext) }), func(a, b v1.Container) bool { return !reflect.DeepEqual(a.SecurityContext, b.SecurityContext) }),
newCheck("new statefulset %s's %s (index %d) volume mounts do not match the current one", newCheck("new %s's %s (index %d) volume mounts do not match the current one",
func(a, b v1.Container) bool { return !reflect.DeepEqual(a.VolumeMounts, b.VolumeMounts) }), func(a, b v1.Container) bool { return !compareVolumeMounts(a.VolumeMounts, b.VolumeMounts) }),
} }
if !c.OpConfig.EnableLazySpiloUpgrade { if !c.OpConfig.EnableLazySpiloUpgrade {
checks = append(checks, newCheck("new statefulset %s's %s (index %d) image does not match the current one", checks = append(checks, newCheck("new %s's %s (index %d) image does not match the current one",
func(a, b v1.Container) bool { return a.Image != b.Image })) func(a, b v1.Container) bool { return a.Image != b.Image }))
} }
@ -645,7 +679,7 @@ func compareEnv(a, b []v1.EnvVar) bool {
if len(a) != len(b) { if len(a) != len(b) {
return false return false
} }
equal := true var equal bool
for _, enva := range a { for _, enva := range a {
hasmatch := false hasmatch := false
for _, envb := range b { for _, envb := range b {
@ -731,7 +765,28 @@ func comparePorts(a, b []v1.ContainerPort) bool {
return true return true
} }
func (c *Cluster) compareAnnotations(old, new map[string]string) (bool, string) { func compareVolumeMounts(old, new []v1.VolumeMount) bool {
if len(old) != len(new) {
return false
}
for _, mount := range old {
if !volumeMountExists(mount, new) {
return false
}
}
return true
}
func volumeMountExists(mount v1.VolumeMount, mounts []v1.VolumeMount) bool {
for _, m := range mounts {
if reflect.DeepEqual(mount, m) {
return true
}
}
return false
}
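compareVolumeMounts (together with volumeMountExists) replaces the earlier reflect.DeepEqual check on volume mounts with an order-insensitive comparison of equal-length lists. A standalone sketch of that behaviour, with illustrative mount names:

package main

import (
	"fmt"
	"reflect"

	v1 "k8s.io/api/core/v1"
)

// mountsEqual mirrors the order-insensitive logic of compareVolumeMounts above.
func mountsEqual(a, b []v1.VolumeMount) bool {
	if len(a) != len(b) {
		return false
	}
	for _, m := range a {
		found := false
		for _, other := range b {
			if reflect.DeepEqual(m, other) {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	cur := []v1.VolumeMount{
		{Name: "pgdata", MountPath: "/home/postgres/pgdata"},
		{Name: "tls", MountPath: "/tls", ReadOnly: true},
	}
	desired := []v1.VolumeMount{
		{Name: "tls", MountPath: "/tls", ReadOnly: true},
		{Name: "pgdata", MountPath: "/home/postgres/pgdata"},
	}
	fmt.Println(mountsEqual(cur, desired))       // true: same mounts, different order
	fmt.Println(reflect.DeepEqual(cur, desired)) // false: DeepEqual is order-sensitive
}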
func (c *Cluster) compareAnnotations(old, new map[string]string, removedList *[]string) (bool, string) {
reason := "" reason := ""
ignoredAnnotations := make(map[string]bool) ignoredAnnotations := make(map[string]bool)
for _, ignore := range c.OpConfig.IgnoredAnnotations { for _, ignore := range c.OpConfig.IgnoredAnnotations {
@ -744,6 +799,9 @@ func (c *Cluster) compareAnnotations(old, new map[string]string) (bool, string)
} }
if _, ok := new[key]; !ok { if _, ok := new[key]; !ok {
reason += fmt.Sprintf(" Removed %q.", key) reason += fmt.Sprintf(" Removed %q.", key)
if removedList != nil {
*removedList = append(*removedList, key)
}
} }
} }
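When a non-nil removedList is passed, compareAnnotations now also records every annotation key present on the current object but absent from the desired one, which the statefulset and cron-job comparisons collect as deletedPodAnnotations. The bookkeeping itself is a plain map difference; a minimal standalone sketch with illustrative keys:

package main

import "fmt"

// removedKeys sketches the extra bookkeeping compareAnnotations performs:
// collect keys present in the current annotations but missing from the new ones.
func removedKeys(current, desired map[string]string) []string {
	removed := []string{}
	for k := range current {
		if _, ok := desired[k]; !ok {
			removed = append(removed, k)
		}
	}
	return removed
}

func main() {
	current := map[string]string{"owned-by": "acid", "custom/annotation": "to-be-dropped"} // illustrative
	desired := map[string]string{"owned-by": "acid"}
	fmt.Println(removedKeys(current, desired)) // [custom/annotation]
}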
@ -779,13 +837,87 @@ func (c *Cluster) compareServices(old, new *v1.Service) (bool, string) {
} }
} }
if changed, reason := c.compareAnnotations(old.Annotations, new.Annotations); changed { if !reflect.DeepEqual(old.ObjectMeta.OwnerReferences, new.ObjectMeta.OwnerReferences) {
return !changed, "new service's annotations does not match the current one:" + reason return false, "new service's owner references do not match the current ones"
}
if !reflect.DeepEqual(old.Spec.Selector, new.Spec.Selector) {
return false, "new service's selector does not match the current one"
}
if old.Spec.ExternalTrafficPolicy != new.Spec.ExternalTrafficPolicy {
return false, "new service's ExternalTrafficPolicy does not match the current one"
} }
return true, "" return true, ""
} }
func (c *Cluster) compareLogicalBackupJob(cur, new *batchv1.CronJob) *compareLogicalBackupJobResult {
deletedPodAnnotations := []string{}
reasons := make([]string, 0)
match := true
if cur.Spec.Schedule != new.Spec.Schedule {
match = false
reasons = append(reasons, fmt.Sprintf("new job's schedule %q does not match the current one %q", new.Spec.Schedule, cur.Spec.Schedule))
}
newImage := new.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Image
curImage := cur.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Image
if newImage != curImage {
match = false
reasons = append(reasons, fmt.Sprintf("new job's image %q does not match the current one %q", newImage, curImage))
}
newPodAnnotation := new.Spec.JobTemplate.Spec.Template.Annotations
curPodAnnotation := cur.Spec.JobTemplate.Spec.Template.Annotations
if changed, reason := c.compareAnnotations(curPodAnnotation, newPodAnnotation, &deletedPodAnnotations); changed {
match = false
reasons = append(reasons, fmt.Sprint("new job's pod template metadata annotations do not match "+reason))
}
newPgVersion := getPgVersion(new)
curPgVersion := getPgVersion(cur)
if newPgVersion != curPgVersion {
match = false
reasons = append(reasons, fmt.Sprintf("new job's env PG_VERSION %q does not match the current one %q", newPgVersion, curPgVersion))
}
needsReplace := false
contReasons := make([]string, 0)
needsReplace, contReasons = c.compareContainers("cronjob container", cur.Spec.JobTemplate.Spec.Template.Spec.Containers, new.Spec.JobTemplate.Spec.Template.Spec.Containers, needsReplace, contReasons)
if needsReplace {
match = false
reasons = append(reasons, fmt.Sprintf("logical backup container specs do not match: %v", strings.Join(contReasons, `', '`)))
}
return &compareLogicalBackupJobResult{match: match, reasons: reasons, deletedPodAnnotations: deletedPodAnnotations}
}
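(Aside, not part of the diff: compareLogicalBackupJobResult is defined elsewhere in the package; judging from the composite literal above it is presumably a small result struct along these lines — a sketch, not the actual definition:)

package cluster

// Sketch only: field names inferred from the composite literal in
// compareLogicalBackupJob; the real definition may differ.
type compareLogicalBackupJobResult struct {
	match                 bool
	reasons               []string
	deletedPodAnnotations []string
}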
func (c *Cluster) comparePodDisruptionBudget(cur, new *policyv1.PodDisruptionBudget) (bool, string) {
//TODO: improve comparison
if !reflect.DeepEqual(new.Spec, cur.Spec) {
return false, "new PDB's spec does not match the current one"
}
if !reflect.DeepEqual(new.ObjectMeta.OwnerReferences, cur.ObjectMeta.OwnerReferences) {
return false, "new PDB's owner references do not match the current ones"
}
if changed, reason := c.compareAnnotations(cur.Annotations, new.Annotations, nil); changed {
return false, "new PDB's annotations do not match the current ones:" + reason
}
return true, ""
}
func getPgVersion(cronJob *batchv1.CronJob) string {
envs := cronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Env
for _, env := range envs {
if env.Name == "PG_VERSION" {
return env.Value
}
}
return ""
}
// addFinalizer patches the postgresql CR to add finalizer // addFinalizer patches the postgresql CR to add finalizer
func (c *Cluster) addFinalizer() error { func (c *Cluster) addFinalizer() error {
if c.hasFinalizer() { if c.hasFinalizer() {
@ -841,12 +973,16 @@ func (c *Cluster) hasFinalizer() bool {
func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error { func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
updateFailed := false updateFailed := false
userInitFailed := false userInitFailed := false
syncStatefulSet := false
c.mu.Lock() c.mu.Lock()
defer c.mu.Unlock() defer c.mu.Unlock()
c.KubeClient.SetPostgresCRDStatus(c.clusterName(), acidv1.ClusterStatusUpdating) c.KubeClient.SetPostgresCRDStatus(c.clusterName(), acidv1.ClusterStatusUpdating)
if !isInMaintenanceWindow(newSpec.Spec.MaintenanceWindows) {
// do not apply any major version related changes yet
newSpec.Spec.PostgresqlParam.PgVersion = oldSpec.Spec.PostgresqlParam.PgVersion
}
c.setSpec(newSpec) c.setSpec(newSpec)
defer func() { defer func() {
@ -872,7 +1008,6 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
if IsBiggerPostgresVersion(oldSpec.Spec.PostgresqlParam.PgVersion, c.GetDesiredMajorVersion()) { if IsBiggerPostgresVersion(oldSpec.Spec.PostgresqlParam.PgVersion, c.GetDesiredMajorVersion()) {
c.logger.Infof("postgresql version increased (%s -> %s), depending on config manual upgrade needed", c.logger.Infof("postgresql version increased (%s -> %s), depending on config manual upgrade needed",
oldSpec.Spec.PostgresqlParam.PgVersion, newSpec.Spec.PostgresqlParam.PgVersion) oldSpec.Spec.PostgresqlParam.PgVersion, newSpec.Spec.PostgresqlParam.PgVersion)
syncStatefulSet = true
} else { } else {
c.logger.Infof("postgresql major version unchanged or smaller, no changes needed") c.logger.Infof("postgresql major version unchanged or smaller, no changes needed")
// sticking with old version, this will also advance GetDesiredVersion next time. // sticking with old version, this will also advance GetDesiredVersion next time.
@ -880,12 +1015,15 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
} }
// Service // Service
if !reflect.DeepEqual(c.generateService(Master, &oldSpec.Spec), c.generateService(Master, &newSpec.Spec)) ||
!reflect.DeepEqual(c.generateService(Replica, &oldSpec.Spec), c.generateService(Replica, &newSpec.Spec)) {
if err := c.syncServices(); err != nil { if err := c.syncServices(); err != nil {
c.logger.Errorf("could not sync services: %v", err) c.logger.Errorf("could not sync services: %v", err)
updateFailed = true updateFailed = true
} }
// Patroni service and endpoints / config maps
if err := c.syncPatroniResources(); err != nil {
c.logger.Errorf("could not sync services: %v", err)
updateFailed = true
} }
// Users // Users
@ -904,8 +1042,19 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
// only when streams were not specified in oldSpec but in newSpec // only when streams were not specified in oldSpec but in newSpec
needStreamUser := len(oldSpec.Spec.Streams) == 0 && len(newSpec.Spec.Streams) > 0 needStreamUser := len(oldSpec.Spec.Streams) == 0 && len(newSpec.Spec.Streams) > 0
if !sameUsers || !sameRotatedUsers || needPoolerUser || needStreamUser { initUsers := !sameUsers || !sameRotatedUsers || needPoolerUser || needStreamUser
c.logger.Debugf("initialize users")
// if inherited annotations differ secrets have to be synced on update
newAnnotations := c.annotationsSet(nil)
oldAnnotations := make(map[string]string)
for _, secret := range c.Secrets {
oldAnnotations = secret.ObjectMeta.Annotations
break
}
annotationsChanged, _ := c.compareAnnotations(oldAnnotations, newAnnotations, nil)
if initUsers || annotationsChanged {
c.logger.Debug("initialize users")
if err := c.initUsers(); err != nil { if err := c.initUsers(); err != nil {
c.logger.Errorf("could not init users - skipping sync of secrets and databases: %v", err) c.logger.Errorf("could not init users - skipping sync of secrets and databases: %v", err)
userInitFailed = true userInitFailed = true
@ -913,7 +1062,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
return return
} }
c.logger.Debugf("syncing secrets") c.logger.Debug("syncing secrets")
//TODO: mind the secrets of the deleted/new users //TODO: mind the secrets of the deleted/new users
if err := c.syncSecrets(); err != nil { if err := c.syncSecrets(); err != nil {
c.logger.Errorf("could not sync secrets: %v", err) c.logger.Errorf("could not sync secrets: %v", err)
@ -926,39 +1075,15 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
if c.OpConfig.StorageResizeMode != "off" { if c.OpConfig.StorageResizeMode != "off" {
c.syncVolumes() c.syncVolumes()
} else { } else {
c.logger.Infof("Storage resize is disabled (storage_resize_mode is off). Skipping volume sync.") c.logger.Infof("Storage resize is disabled (storage_resize_mode is off). Skipping volume size sync.")
}
// streams configuration
if len(oldSpec.Spec.Streams) == 0 && len(newSpec.Spec.Streams) > 0 {
syncStatefulSet = true
} }
// Statefulset // Statefulset
func() { func() {
oldSs, err := c.generateStatefulSet(&oldSpec.Spec)
if err != nil {
c.logger.Errorf("could not generate old statefulset spec: %v", err)
updateFailed = true
return
}
newSs, err := c.generateStatefulSet(&newSpec.Spec)
if err != nil {
c.logger.Errorf("could not generate new statefulset spec: %v", err)
updateFailed = true
return
}
if syncStatefulSet || !reflect.DeepEqual(oldSs, newSs) {
c.logger.Debugf("syncing statefulsets")
syncStatefulSet = false
// TODO: avoid generating the StatefulSet object twice by passing it to syncStatefulSet
if err := c.syncStatefulSet(); err != nil { if err := c.syncStatefulSet(); err != nil {
c.logger.Errorf("could not sync statefulsets: %v", err) c.logger.Errorf("could not sync statefulsets: %v", err)
updateFailed = true updateFailed = true
} }
}
}() }()
// add or remove standby_cluster section from Patroni config depending on changes in standby section // add or remove standby_cluster section from Patroni config depending on changes in standby section
@ -968,21 +1093,18 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
} }
} }
// pod disruption budget // pod disruption budgets
if oldSpec.Spec.NumberOfInstances != newSpec.Spec.NumberOfInstances { if err := c.syncPodDisruptionBudgets(true); err != nil {
c.logger.Debug("syncing pod disruption budgets") c.logger.Errorf("could not sync pod disruption budgets: %v", err)
if err := c.syncPodDisruptionBudget(true); err != nil {
c.logger.Errorf("could not sync pod disruption budget: %v", err)
updateFailed = true updateFailed = true
} }
}
// logical backup job // logical backup job
func() { func() {
// create if it did not exist // create if it did not exist
if !oldSpec.Spec.EnableLogicalBackup && newSpec.Spec.EnableLogicalBackup { if !oldSpec.Spec.EnableLogicalBackup && newSpec.Spec.EnableLogicalBackup {
c.logger.Debugf("creating backup cron job") c.logger.Debug("creating backup cron job")
if err := c.createLogicalBackupJob(); err != nil { if err := c.createLogicalBackupJob(); err != nil {
c.logger.Errorf("could not create a k8s cron job for logical backups: %v", err) c.logger.Errorf("could not create a k8s cron job for logical backups: %v", err)
updateFailed = true updateFailed = true
@ -992,7 +1114,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
// delete if no longer needed // delete if no longer needed
if oldSpec.Spec.EnableLogicalBackup && !newSpec.Spec.EnableLogicalBackup { if oldSpec.Spec.EnableLogicalBackup && !newSpec.Spec.EnableLogicalBackup {
c.logger.Debugf("deleting backup cron job") c.logger.Debug("deleting backup cron job")
if err := c.deleteLogicalBackupJob(); err != nil { if err := c.deleteLogicalBackupJob(); err != nil {
c.logger.Errorf("could not delete a k8s cron job for logical backups: %v", err) c.logger.Errorf("could not delete a k8s cron job for logical backups: %v", err)
updateFailed = true updateFailed = true
@ -1001,11 +1123,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
} }
// apply schedule changes if oldSpec.Spec.EnableLogicalBackup && newSpec.Spec.EnableLogicalBackup {
// this is the only parameter of logical backups a user can overwrite in the cluster manifest
if (oldSpec.Spec.EnableLogicalBackup && newSpec.Spec.EnableLogicalBackup) &&
(newSpec.Spec.LogicalBackupSchedule != oldSpec.Spec.LogicalBackupSchedule) {
c.logger.Debugf("updating schedule of the backup cron job")
if err := c.syncLogicalBackupJob(); err != nil { if err := c.syncLogicalBackupJob(); err != nil {
c.logger.Errorf("could not sync logical backup jobs: %v", err) c.logger.Errorf("could not sync logical backup jobs: %v", err)
updateFailed = true updateFailed = true
@ -1016,7 +1134,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
// Roles and Databases // Roles and Databases
if !userInitFailed && !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) { if !userInitFailed && !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) {
c.logger.Debugf("syncing roles") c.logger.Debug("syncing roles")
if err := c.syncRoles(); err != nil { if err := c.syncRoles(); err != nil {
c.logger.Errorf("could not sync roles: %v", err) c.logger.Errorf("could not sync roles: %v", err)
updateFailed = true updateFailed = true
@ -1049,7 +1167,8 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
} }
// streams // streams
if len(newSpec.Spec.Streams) > 0 { if len(newSpec.Spec.Streams) > 0 || len(oldSpec.Spec.Streams) != len(newSpec.Spec.Streams) {
c.logger.Debug("syncing streams")
if err := c.syncStreams(); err != nil { if err := c.syncStreams(); err != nil {
c.logger.Errorf("could not sync streams: %v", err) c.logger.Errorf("could not sync streams: %v", err)
updateFailed = true updateFailed = true
@ -1112,20 +1231,23 @@ func (c *Cluster) Delete() error {
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete statefulset: %v", err) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete statefulset: %v", err)
} }
if c.OpConfig.EnableSecretsDeletion != nil && *c.OpConfig.EnableSecretsDeletion {
if err := c.deleteSecrets(); err != nil { if err := c.deleteSecrets(); err != nil {
anyErrors = true anyErrors = true
c.logger.Warningf("could not delete secrets: %v", err) c.logger.Warningf("could not delete secrets: %v", err)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete secrets: %v", err) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete secrets: %v", err)
} }
} else {
c.logger.Info("not deleting secrets because disabled in configuration")
}
if err := c.deletePodDisruptionBudget(); err != nil { if err := c.deletePodDisruptionBudgets(); err != nil {
anyErrors = true anyErrors = true
c.logger.Warningf("could not delete pod disruption budget: %v", err) c.logger.Warningf("could not delete pod disruption budgets: %v", err)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete pod disruption budget: %v", err) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete pod disruption budgets: %v", err)
} }
for _, role := range []PostgresRole{Master, Replica} { for _, role := range []PostgresRole{Master, Replica} {
if !c.patroniKubernetesUseConfigMaps() { if !c.patroniKubernetesUseConfigMaps() {
if err := c.deleteEndpoint(role); err != nil { if err := c.deleteEndpoint(role); err != nil {
anyErrors = true anyErrors = true
@ -1141,10 +1263,10 @@ func (c *Cluster) Delete() error {
} }
} }
if err := c.deletePatroniClusterObjects(); err != nil { if err := c.deletePatroniResources(); err != nil {
anyErrors = true anyErrors = true
c.logger.Warningf("could not remove leftover patroni objects; %v", err) c.logger.Warningf("could not delete all Patroni resources: %v", err)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not remove leftover patroni objects; %v", err) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Delete", "could not delete all Patroni resources: %v", err)
} }
// Delete connection pooler objects anyway, even if it's not mentioned in the // Delete connection pooler objects anyway, even if it's not mentioned in the
@ -1308,18 +1430,18 @@ func (c *Cluster) initPreparedDatabaseRoles() error {
preparedSchemas = map[string]acidv1.PreparedSchema{"data": {DefaultRoles: util.True()}} preparedSchemas = map[string]acidv1.PreparedSchema{"data": {DefaultRoles: util.True()}}
} }
var searchPath strings.Builder searchPathArr := []string{constants.DefaultSearchPath}
searchPath.WriteString(constants.DefaultSearchPath)
for preparedSchemaName := range preparedSchemas { for preparedSchemaName := range preparedSchemas {
searchPath.WriteString(", " + preparedSchemaName) searchPathArr = append(searchPathArr, fmt.Sprintf("%q", preparedSchemaName))
} }
searchPath := strings.Join(searchPathArr, ", ")
// default roles per database // default roles per database
if err := c.initDefaultRoles(defaultRoles, "admin", preparedDbName, searchPath.String(), preparedDB.SecretNamespace); err != nil { if err := c.initDefaultRoles(defaultRoles, "admin", preparedDbName, searchPath, preparedDB.SecretNamespace); err != nil {
return fmt.Errorf("could not initialize default roles for database %s: %v", preparedDbName, err) return fmt.Errorf("could not initialize default roles for database %s: %v", preparedDbName, err)
} }
if preparedDB.DefaultUsers { if preparedDB.DefaultUsers {
if err := c.initDefaultRoles(defaultUsers, "admin", preparedDbName, searchPath.String(), preparedDB.SecretNamespace); err != nil { if err := c.initDefaultRoles(defaultUsers, "admin", preparedDbName, searchPath, preparedDB.SecretNamespace); err != nil {
return fmt.Errorf("could not initialize default roles for database %s: %v", preparedDbName, err) return fmt.Errorf("could not initialize default roles for database %s: %v", preparedDbName, err)
} }
} }
@ -1330,14 +1452,16 @@ func (c *Cluster) initPreparedDatabaseRoles() error {
if err := c.initDefaultRoles(defaultRoles, if err := c.initDefaultRoles(defaultRoles,
preparedDbName+constants.OwnerRoleNameSuffix, preparedDbName+constants.OwnerRoleNameSuffix,
preparedDbName+"_"+preparedSchemaName, preparedDbName+"_"+preparedSchemaName,
constants.DefaultSearchPath+", "+preparedSchemaName, preparedDB.SecretNamespace); err != nil { fmt.Sprintf("%s, %q", constants.DefaultSearchPath, preparedSchemaName),
preparedDB.SecretNamespace); err != nil {
return fmt.Errorf("could not initialize default roles for database schema %s: %v", preparedSchemaName, err) return fmt.Errorf("could not initialize default roles for database schema %s: %v", preparedSchemaName, err)
} }
if preparedSchema.DefaultUsers { if preparedSchema.DefaultUsers {
if err := c.initDefaultRoles(defaultUsers, if err := c.initDefaultRoles(defaultUsers,
preparedDbName+constants.OwnerRoleNameSuffix, preparedDbName+constants.OwnerRoleNameSuffix,
preparedDbName+"_"+preparedSchemaName, preparedDbName+"_"+preparedSchemaName,
constants.DefaultSearchPath+", "+preparedSchemaName, preparedDB.SecretNamespace); err != nil { fmt.Sprintf("%s, %q", constants.DefaultSearchPath, preparedSchemaName),
preparedDB.SecretNamespace); err != nil {
return fmt.Errorf("could not initialize default users for database schema %s: %v", preparedSchemaName, err) return fmt.Errorf("could not initialize default users for database schema %s: %v", preparedSchemaName, err)
} }
} }
@ -1627,7 +1751,8 @@ func (c *Cluster) GetStatus() *ClusterStatus {
MasterService: c.GetServiceMaster(), MasterService: c.GetServiceMaster(),
ReplicaService: c.GetServiceReplica(), ReplicaService: c.GetServiceReplica(),
StatefulSet: c.GetStatefulSet(), StatefulSet: c.GetStatefulSet(),
PodDisruptionBudget: c.GetPodDisruptionBudget(), PrimaryPodDisruptionBudget: c.GetPrimaryPodDisruptionBudget(),
CriticalOpPodDisruptionBudget: c.GetCriticalOpPodDisruptionBudget(),
CurrentProcess: c.GetCurrentProcess(), CurrentProcess: c.GetCurrentProcess(),
Error: fmt.Errorf("error: %s", c.Error), Error: fmt.Errorf("error: %s", c.Error),
@ -1641,18 +1766,58 @@ func (c *Cluster) GetStatus() *ClusterStatus {
return status return status
} }
// Switchover does a switchover (via Patroni) to a candidate pod func (c *Cluster) GetSwitchoverSchedule() string {
func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) error { var possibleSwitchover, schedule time.Time
now := time.Now().UTC()
for _, window := range c.Spec.MaintenanceWindows {
// in the best case it is possible today
possibleSwitchover = time.Date(now.Year(), now.Month(), now.Day(), window.StartTime.Hour(), window.StartTime.Minute(), 0, 0, time.UTC)
if window.Everyday {
if now.After(possibleSwitchover) {
// we are already past the time for today, try tomorrow
possibleSwitchover = possibleSwitchover.AddDate(0, 0, 1)
}
} else {
if now.Weekday() != window.Weekday {
// get closest possible time for this window
possibleSwitchover = possibleSwitchover.AddDate(0, 0, int((7+window.Weekday-now.Weekday())%7))
} else if now.After(possibleSwitchover) {
// we are already past the time for today, try next week
possibleSwitchover = possibleSwitchover.AddDate(0, 0, 7)
}
}
if (schedule.Equal(time.Time{})) || possibleSwitchover.Before(schedule) {
schedule = possibleSwitchover
}
}
return schedule.Format("2006-01-02T15:04+00")
}
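(Aside, not part of the diff: GetSwitchoverSchedule picks the earliest upcoming maintenance-window start and formats it with the layout "2006-01-02T15:04+00". A minimal sketch of the everyday-window case, with an assumed 03:00 UTC window and a made-up current time, illustrates the expected output:)

package main

import (
	"fmt"
	"time"
)

// nextEverydayWindow is a simplified sketch of the scheduling logic above
// for an "everyday" maintenance window only; the hour/minute and the sample
// date in main are illustrative values, not operator defaults.
func nextEverydayWindow(now time.Time, hour, minute int) string {
	candidate := time.Date(now.Year(), now.Month(), now.Day(), hour, minute, 0, 0, time.UTC)
	if now.After(candidate) {
		// today's window has already started, so target tomorrow
		candidate = candidate.AddDate(0, 0, 1)
	}
	// same timestamp layout the operator hands to Patroni
	return candidate.Format("2006-01-02T15:04+00")
}

func main() {
	now := time.Date(2024, 5, 6, 10, 30, 0, 0, time.UTC)
	fmt.Println(nextEverydayWindow(now, 3, 0)) // 2024-05-07T03:00+00
}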
// Switchover does a switchover (via Patroni) to a candidate pod
func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName, scheduled bool) error {
var err error var err error
c.logger.Debugf("switching over from %q to %q", curMaster.Name, candidate)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switching over from %q to %q", curMaster.Name, candidate)
stopCh := make(chan struct{}) stopCh := make(chan struct{})
ch := c.registerPodSubscriber(candidate) ch := c.registerPodSubscriber(candidate)
defer c.unregisterPodSubscriber(candidate) defer c.unregisterPodSubscriber(candidate)
defer close(stopCh) defer close(stopCh)
if err = c.patroni.Switchover(curMaster, candidate.Name); err == nil { var scheduled_at string
if scheduled {
scheduled_at = c.GetSwitchoverSchedule()
} else {
c.logger.Debugf("switching over from %q to %q", curMaster.Name, candidate)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switching over from %q to %q", curMaster.Name, candidate)
scheduled_at = ""
}
if err = c.patroni.Switchover(curMaster, candidate.Name, scheduled_at); err == nil {
if scheduled {
c.logger.Infof("switchover from %q to %q is scheduled at %s", curMaster.Name, candidate, scheduled_at)
return nil
}
c.logger.Debugf("successfully switched over from %q to %q", curMaster.Name, candidate) c.logger.Debugf("successfully switched over from %q to %q", curMaster.Name, candidate)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Successfully switched over from %q to %q", curMaster.Name, candidate) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Successfully switched over from %q to %q", curMaster.Name, candidate)
_, err = c.waitForPodLabel(ch, stopCh, nil) _, err = c.waitForPodLabel(ch, stopCh, nil)
@ -1660,6 +1825,9 @@ func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) e
err = fmt.Errorf("could not get master pod label: %v", err) err = fmt.Errorf("could not get master pod label: %v", err)
} }
} else { } else {
if scheduled {
return fmt.Errorf("could not schedule switchover: %v", err)
}
err = fmt.Errorf("could not switch over from %q to %q: %v", curMaster.Name, candidate, err) err = fmt.Errorf("could not switch over from %q to %q: %v", curMaster.Name, candidate, err)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switchover from %q to %q FAILED: %v", curMaster.Name, candidate, err) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switchover from %q to %q FAILED: %v", curMaster.Name, candidate, err)
} }
@ -1676,96 +1844,3 @@ func (c *Cluster) Lock() {
func (c *Cluster) Unlock() { func (c *Cluster) Unlock() {
c.mu.Unlock() c.mu.Unlock()
} }
type simpleActionWithResult func()
type clusterObjectGet func(name string) (spec.NamespacedName, error)
type clusterObjectDelete func(name string) error
func (c *Cluster) deletePatroniClusterObjects() error {
// TODO: figure out how to remove leftover patroni objects in other cases
var actionsList []simpleActionWithResult
if !c.patroniUsesKubernetes() {
c.logger.Infof("not cleaning up Etcd Patroni objects on cluster delete")
}
actionsList = append(actionsList, c.deletePatroniClusterServices)
if c.patroniKubernetesUseConfigMaps() {
actionsList = append(actionsList, c.deletePatroniClusterConfigMaps)
} else {
actionsList = append(actionsList, c.deletePatroniClusterEndpoints)
}
c.logger.Debugf("removing leftover Patroni objects (endpoints / services and configmaps)")
for _, deleter := range actionsList {
deleter()
}
return nil
}
func deleteClusterObject(
get clusterObjectGet,
del clusterObjectDelete,
objType string,
clusterName string,
logger *logrus.Entry) {
for _, suffix := range patroniObjectSuffixes {
name := fmt.Sprintf("%s-%s", clusterName, suffix)
namespacedName, err := get(name)
if err == nil {
logger.Debugf("deleting %s %q",
objType, namespacedName)
if err = del(name); err != nil {
logger.Warningf("could not delete %s %q: %v",
objType, namespacedName, err)
}
} else if !k8sutil.ResourceNotFound(err) {
logger.Warningf("could not fetch %s %q: %v",
objType, namespacedName, err)
}
}
}
func (c *Cluster) deletePatroniClusterServices() {
get := func(name string) (spec.NamespacedName, error) {
svc, err := c.KubeClient.Services(c.Namespace).Get(context.TODO(), name, metav1.GetOptions{})
return util.NameFromMeta(svc.ObjectMeta), err
}
deleteServiceFn := func(name string) error {
return c.KubeClient.Services(c.Namespace).Delete(context.TODO(), name, c.deleteOptions)
}
deleteClusterObject(get, deleteServiceFn, "service", c.Name, c.logger)
}
func (c *Cluster) deletePatroniClusterEndpoints() {
get := func(name string) (spec.NamespacedName, error) {
ep, err := c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), name, metav1.GetOptions{})
return util.NameFromMeta(ep.ObjectMeta), err
}
deleteEndpointFn := func(name string) error {
return c.KubeClient.Endpoints(c.Namespace).Delete(context.TODO(), name, c.deleteOptions)
}
deleteClusterObject(get, deleteEndpointFn, "endpoint", c.Name, c.logger)
}
func (c *Cluster) deletePatroniClusterConfigMaps() {
get := func(name string) (spec.NamespacedName, error) {
cm, err := c.KubeClient.ConfigMaps(c.Namespace).Get(context.TODO(), name, metav1.GetOptions{})
return util.NameFromMeta(cm.ObjectMeta), err
}
deleteConfigMapFn := func(name string) error {
return c.KubeClient.ConfigMaps(c.Namespace).Delete(context.TODO(), name, c.deleteOptions)
}
deleteClusterObject(get, deleteConfigMapFn, "configmap", c.Name, c.logger)
}


@ -18,8 +18,11 @@ import (
"github.com/zalando/postgres-operator/pkg/util/config" "github.com/zalando/postgres-operator/pkg/util/config"
"github.com/zalando/postgres-operator/pkg/util/constants" "github.com/zalando/postgres-operator/pkg/util/constants"
"github.com/zalando/postgres-operator/pkg/util/k8sutil" "github.com/zalando/postgres-operator/pkg/util/k8sutil"
"github.com/zalando/postgres-operator/pkg/util/patroni"
"github.com/zalando/postgres-operator/pkg/util/teams" "github.com/zalando/postgres-operator/pkg/util/teams"
batchv1 "k8s.io/api/batch/v1"
v1 "k8s.io/api/core/v1" v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake" "k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/tools/record" "k8s.io/client-go/tools/record"
@ -68,11 +71,11 @@ var cl = New(
Spec: acidv1.PostgresSpec{ Spec: acidv1.PostgresSpec{
EnableConnectionPooler: util.True(), EnableConnectionPooler: util.True(),
Streams: []acidv1.Stream{ Streams: []acidv1.Stream{
acidv1.Stream{ {
ApplicationId: "test-app", ApplicationId: "test-app",
Database: "test_db", Database: "test_db",
Tables: map[string]acidv1.StreamTable{ Tables: map[string]acidv1.StreamTable{
"test_table": acidv1.StreamTable{ "test_table": {
EventType: "test-app.test", EventType: "test-app.test",
}, },
}, },
@ -92,6 +95,7 @@ func TestCreate(t *testing.T) {
client := k8sutil.KubernetesClient{ client := k8sutil.KubernetesClient{
DeploymentsGetter: clientSet.AppsV1(), DeploymentsGetter: clientSet.AppsV1(),
CronJobsGetter: clientSet.BatchV1(),
EndpointsGetter: clientSet.CoreV1(), EndpointsGetter: clientSet.CoreV1(),
PersistentVolumeClaimsGetter: clientSet.CoreV1(), PersistentVolumeClaimsGetter: clientSet.CoreV1(),
PodDisruptionBudgetsGetter: clientSet.PolicyV1(), PodDisruptionBudgetsGetter: clientSet.PolicyV1(),
@ -108,6 +112,7 @@ func TestCreate(t *testing.T) {
Namespace: clusterNamespace, Namespace: clusterNamespace,
}, },
Spec: acidv1.PostgresSpec{ Spec: acidv1.PostgresSpec{
EnableLogicalBackup: true,
Volume: acidv1.Volume{ Volume: acidv1.Volume{
Size: "1Gi", Size: "1Gi",
}, },
@ -1336,14 +1341,21 @@ func TestCompareEnv(t *testing.T) {
} }
} }
func newService(ann map[string]string, svcT v1.ServiceType, lbSr []string) *v1.Service { func newService(
annotations map[string]string,
svcType v1.ServiceType,
sourceRanges []string,
selector map[string]string,
policy v1.ServiceExternalTrafficPolicyType) *v1.Service {
svc := &v1.Service{ svc := &v1.Service{
Spec: v1.ServiceSpec{ Spec: v1.ServiceSpec{
Type: svcT, Selector: selector,
LoadBalancerSourceRanges: lbSr, Type: svcType,
LoadBalancerSourceRanges: sourceRanges,
ExternalTrafficPolicy: policy,
}, },
} }
svc.Annotations = ann svc.Annotations = annotations
return svc return svc
} }
@ -1360,6 +1372,28 @@ func TestCompareServices(t *testing.T) {
}, },
} }
defaultPolicy := v1.ServiceExternalTrafficPolicyTypeCluster
serviceWithOwnerReference := newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
},
v1.ServiceTypeClusterIP,
[]string{"128.141.0.0/16", "137.138.0.0/16"},
nil,
defaultPolicy,
)
ownerRef := metav1.OwnerReference{
APIVersion: "acid.zalan.do/v1",
Controller: boolToPointer(true),
Kind: "Postgresql",
Name: "clstr",
}
serviceWithOwnerReference.ObjectMeta.OwnerReferences = append(serviceWithOwnerReference.ObjectMeta.OwnerReferences, ownerRef)
tests := []struct { tests := []struct {
about string about string
current *v1.Service current *v1.Service
@ -1375,14 +1409,16 @@ func TestCompareServices(t *testing.T) {
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeClusterIP, v1.ServiceTypeClusterIP,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{"128.141.0.0/16", "137.138.0.0/16"},
nil, defaultPolicy),
new: newService( new: newService(
map[string]string{ map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do", constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeClusterIP, v1.ServiceTypeClusterIP,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{"128.141.0.0/16", "137.138.0.0/16"},
nil, defaultPolicy),
match: true, match: true,
}, },
{ {
@ -1393,14 +1429,16 @@ func TestCompareServices(t *testing.T) {
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeClusterIP, v1.ServiceTypeClusterIP,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{"128.141.0.0/16", "137.138.0.0/16"},
nil, defaultPolicy),
new: newService( new: newService(
map[string]string{ map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do", constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeLoadBalancer, v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{"128.141.0.0/16", "137.138.0.0/16"},
nil, defaultPolicy),
match: false, match: false,
reason: `new service's type "LoadBalancer" does not match the current one "ClusterIP"`, reason: `new service's type "LoadBalancer" does not match the current one "ClusterIP"`,
}, },
@ -1412,14 +1450,16 @@ func TestCompareServices(t *testing.T) {
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeLoadBalancer, v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{"128.141.0.0/16", "137.138.0.0/16"},
nil, defaultPolicy),
new: newService( new: newService(
map[string]string{ map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do", constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeLoadBalancer, v1.ServiceTypeLoadBalancer,
[]string{"185.249.56.0/22"}), []string{"185.249.56.0/22"},
nil, defaultPolicy),
match: false, match: false,
reason: `new service's LoadBalancerSourceRange does not match the current one`, reason: `new service's LoadBalancerSourceRange does not match the current one`,
}, },
@ -1431,215 +1471,59 @@ func TestCompareServices(t *testing.T) {
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeLoadBalancer, v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{"128.141.0.0/16", "137.138.0.0/16"},
nil, defaultPolicy),
new: newService( new: newService(
map[string]string{ map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do", constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeLoadBalancer, v1.ServiceTypeLoadBalancer,
[]string{}), []string{},
nil, defaultPolicy),
match: false, match: false,
reason: `new service's LoadBalancerSourceRange does not match the current one`, reason: `new service's LoadBalancerSourceRange does not match the current one`,
}, },
{ {
about: "services differ on DNS annotation", about: "new service doesn't have owner references",
current: newService( current: newService(
map[string]string{ map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do", constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
}, },
v1.ServiceTypeLoadBalancer, v1.ServiceTypeClusterIP,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{"128.141.0.0/16", "137.138.0.0/16"},
new: newService( nil, defaultPolicy),
map[string]string{ new: serviceWithOwnerReference,
constants.ZalandoDNSNameAnnotation: "new_clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false, match: false,
reason: `new service's annotations does not match the current one: "external-dns.alpha.kubernetes.io/hostname" changed from "clstr.acid.zalan.do" to "new_clstr.acid.zalan.do".`,
}, },
{ {
about: "services differ on AWS ELB annotation", about: "new service has a label selector",
current: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
new: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: "1800",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false,
reason: `new service's annotations does not match the current one: "service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout" changed from "3600" to "1800".`,
},
{
about: "service changes existing annotation",
current: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "bar",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
new: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "baz",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false,
reason: `new service's annotations does not match the current one: "foo" changed from "bar" to "baz".`,
},
{
about: "service changes multiple existing annotations",
current: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "bar",
"bar": "foo",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
new: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "baz",
"bar": "fooz",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false,
// Test just the prefix to avoid flakiness and map sorting
reason: `new service's annotations does not match the current one:`,
},
{
about: "service adds a new custom annotation",
current: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
new: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "bar",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false,
reason: `new service's annotations does not match the current one: Added "foo" with value "bar".`,
},
{
about: "service removes a custom annotation",
current: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "bar",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
new: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false,
reason: `new service's annotations does not match the current one: Removed "foo".`,
},
{
about: "service removes a custom annotation and adds a new one",
current: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "bar",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
new: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"bar": "foo",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false,
reason: `new service's annotations does not match the current one: Removed "foo". Added "bar" with value "foo".`,
},
{
about: "service removes a custom annotation, adds a new one and change another",
current: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"foo": "bar",
"zalan": "do",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
new: newService(
map[string]string{
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do",
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue,
"bar": "foo",
"zalan": "do.com",
},
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false,
// Test just the prefix to avoid flakiness and map sorting
reason: `new service's annotations does not match the current one: Removed "foo".`,
},
{
about: "service add annotations",
current: newService( current: newService(
map[string]string{}, map[string]string{},
v1.ServiceTypeLoadBalancer, v1.ServiceTypeClusterIP,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{},
nil, defaultPolicy),
new: newService( new: newService(
map[string]string{ map[string]string{},
constants.ZalandoDNSNameAnnotation: "clstr.acid.zalan.do", v1.ServiceTypeClusterIP,
constants.ElbTimeoutAnnotationName: constants.ElbTimeoutAnnotationValue, []string{},
}, map[string]string{"cluster-name": "clstr", "spilo-role": "master"}, defaultPolicy),
v1.ServiceTypeLoadBalancer,
[]string{"128.141.0.0/16", "137.138.0.0/16"}),
match: false, match: false,
// Test just the prefix to avoid flakiness and map sorting
reason: `new service's annotations does not match the current one: Added `,
}, },
{ {
about: "ignored annotations", about: "services differ on external traffic policy",
current: newService( current: newService(
map[string]string{}, map[string]string{},
v1.ServiceTypeLoadBalancer, v1.ServiceTypeClusterIP,
[]string{"128.141.0.0/16", "137.138.0.0/16"}), []string{},
nil, defaultPolicy),
new: newService( new: newService(
map[string]string{ map[string]string{},
"k8s.v1.cni.cncf.io/network-status": "up", v1.ServiceTypeClusterIP,
}, []string{},
v1.ServiceTypeLoadBalancer, nil, v1.ServiceExternalTrafficPolicyTypeLocal),
[]string{"128.141.0.0/16", "137.138.0.0/16"}), match: false,
match: true,
}, },
} }
@ -1649,16 +1533,216 @@ func TestCompareServices(t *testing.T) {
if match && !tt.match { if match && !tt.match {
t.Logf("match=%v current=%v, old=%v reason=%s", match, tt.current.Annotations, tt.new.Annotations, reason) t.Logf("match=%v current=%v, old=%v reason=%s", match, tt.current.Annotations, tt.new.Annotations, reason)
t.Errorf("%s - expected services to do not match: %q and %q", t.Name(), tt.current, tt.new) t.Errorf("%s - expected services to do not match: %q and %q", t.Name(), tt.current, tt.new)
return
} }
if !match && tt.match { if !match && tt.match {
t.Errorf("%s - expected services to be the same: %q and %q", t.Name(), tt.current, tt.new) t.Errorf("%s - expected services to be the same: %q and %q", t.Name(), tt.current, tt.new)
return
} }
if !match && !tt.match { if !match && !tt.match {
if !strings.HasPrefix(reason, tt.reason) { if !strings.HasPrefix(reason, tt.reason) {
t.Errorf("%s - expected reason prefix %s, found %s", t.Name(), tt.reason, reason) t.Errorf("%s - expected reason prefix %s, found %s", t.Name(), tt.reason, reason)
return }
}
})
}
}
func newCronJob(image, schedule string, vars []v1.EnvVar, mounts []v1.VolumeMount) *batchv1.CronJob {
cron := &batchv1.CronJob{
Spec: batchv1.CronJobSpec{
Schedule: schedule,
JobTemplate: batchv1.JobTemplateSpec{
Spec: batchv1.JobSpec{
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "logical-backup",
Image: image,
Env: vars,
Ports: []v1.ContainerPort{
{
ContainerPort: patroni.ApiPort,
Protocol: v1.ProtocolTCP,
},
{
ContainerPort: pgPort,
Protocol: v1.ProtocolTCP,
},
{
ContainerPort: operatorPort,
Protocol: v1.ProtocolTCP,
},
},
Resources: v1.ResourceRequirements{
Requests: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("100m"),
v1.ResourceMemory: resource.MustParse("100Mi"),
},
Limits: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("100m"),
v1.ResourceMemory: resource.MustParse("100Mi"),
},
},
SecurityContext: &v1.SecurityContext{
AllowPrivilegeEscalation: nil,
Privileged: util.False(),
ReadOnlyRootFilesystem: util.False(),
Capabilities: nil,
},
VolumeMounts: mounts,
},
},
},
},
},
},
},
}
return cron
}
func TestCompareLogicalBackupJob(t *testing.T) {
img1 := "registry.opensource.zalan.do/acid/logical-backup:v1.0"
img2 := "registry.opensource.zalan.do/acid/logical-backup:v2.0"
clientSet := fake.NewSimpleClientset()
acidClientSet := fakeacidv1.NewSimpleClientset()
namespace := "default"
client := k8sutil.KubernetesClient{
CronJobsGetter: clientSet.BatchV1(),
PostgresqlsGetter: acidClientSet.AcidV1(),
}
pg := acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{
Name: "acid-cron-cluster",
Namespace: namespace,
},
Spec: acidv1.PostgresSpec{
Volume: acidv1.Volume{
Size: "1Gi",
},
EnableLogicalBackup: true,
LogicalBackupSchedule: "0 0 * * *",
LogicalBackupRetention: "3 months",
},
}
var cluster = New(
Config{
OpConfig: config.Config{
PodManagementPolicy: "ordered_ready",
Resources: config.Resources{
ClusterLabels: map[string]string{"application": "spilo"},
ClusterNameLabel: "cluster-name",
DefaultCPURequest: "300m",
DefaultCPULimit: "300m",
DefaultMemoryRequest: "300Mi",
DefaultMemoryLimit: "300Mi",
PodRoleLabel: "spilo-role",
},
LogicalBackup: config.LogicalBackup{
LogicalBackupSchedule: "30 00 * * *",
LogicalBackupDockerImage: img1,
LogicalBackupJobPrefix: "logical-backup-",
LogicalBackupCPURequest: "100m",
LogicalBackupCPULimit: "100m",
LogicalBackupMemoryRequest: "100Mi",
LogicalBackupMemoryLimit: "100Mi",
LogicalBackupProvider: "s3",
LogicalBackupS3Bucket: "testBucket",
LogicalBackupS3BucketPrefix: "spilo",
LogicalBackupS3Region: "eu-central-1",
LogicalBackupS3Endpoint: "https://s3.amazonaws.com",
LogicalBackupS3AccessKeyID: "access",
LogicalBackupS3SecretAccessKey: "secret",
LogicalBackupS3SSE: "aws:kms",
LogicalBackupS3RetentionTime: "3 months",
LogicalBackupCronjobEnvironmentSecret: "",
},
},
}, client, pg, logger, eventRecorder)
desiredCronJob, err := cluster.generateLogicalBackupJob()
if err != nil {
t.Errorf("Could not generate logical backup job with error: %v", err)
}
err = cluster.createLogicalBackupJob()
if err != nil {
t.Errorf("Could not create logical backup job with error: %v", err)
}
currentCronJob, err := cluster.KubeClient.CronJobs(namespace).Get(context.TODO(), cluster.getLogicalBackupJobName(), metav1.GetOptions{})
if err != nil {
t.Errorf("Could not create logical backup job with error: %v", err)
}
tests := []struct {
about string
cronjob *batchv1.CronJob
match bool
reason string
}{
{
about: "two equal cronjobs",
cronjob: newCronJob(img1, "0 0 * * *", []v1.EnvVar{}, []v1.VolumeMount{}),
match: true,
},
{
about: "two cronjobs with different image",
cronjob: newCronJob(img2, "0 0 * * *", []v1.EnvVar{}, []v1.VolumeMount{}),
match: false,
reason: fmt.Sprintf("new job's image %q does not match the current one %q", img2, img1),
},
{
about: "two cronjobs with different schedule",
cronjob: newCronJob(img1, "0 * * * *", []v1.EnvVar{}, []v1.VolumeMount{}),
match: false,
reason: fmt.Sprintf("new job's schedule %q does not match the current one %q", "0 * * * *", "0 0 * * *"),
},
{
about: "two cronjobs with empty and nil volume mounts",
cronjob: newCronJob(img1, "0 0 * * *", []v1.EnvVar{}, nil),
match: true,
},
{
about: "two cronjobs with different environment variables",
cronjob: newCronJob(img1, "0 0 * * *", []v1.EnvVar{{Name: "LOGICAL_BACKUP_S3_BUCKET_PREFIX", Value: "logical-backup"}}, []v1.VolumeMount{}),
match: false,
reason: "logical backup container specs do not match: new cronjob container's logical-backup (index 0) environment does not match the current one",
},
}
for _, tt := range tests {
t.Run(tt.about, func(t *testing.T) {
desiredCronJob.Spec.Schedule = tt.cronjob.Spec.Schedule
desiredCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Image = tt.cronjob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Image
desiredCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts = tt.cronjob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts
for _, testEnv := range tt.cronjob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Env {
for i, env := range desiredCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Env {
if env.Name == testEnv.Name {
desiredCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Env[i] = testEnv
}
}
}
cmp := cluster.compareLogicalBackupJob(currentCronJob, desiredCronJob)
if cmp.match != tt.match {
t.Errorf("%s - unexpected match result %t when comparing cronjobs %#v and %#v", t.Name(), cmp.match, currentCronJob, desiredCronJob)
} else if !cmp.match {
found := false
for _, reason := range cmp.reasons {
if strings.HasPrefix(reason, tt.reason) {
found = true
break
}
found = false
}
if !found {
t.Errorf("%s - expected reason prefix %s, not found in %#v", t.Name(), tt.reason, cmp.reasons)
} }
} }
}) })
@ -1850,3 +1934,271 @@ func TestComparePorts(t *testing.T) {
}) })
} }
} }
func TestCompareVolumeMounts(t *testing.T) {
testCases := []struct {
name string
mountsA []v1.VolumeMount
mountsB []v1.VolumeMount
expected bool
}{
{
name: "empty vs nil",
mountsA: []v1.VolumeMount{},
mountsB: nil,
expected: true,
},
{
name: "both empty",
mountsA: []v1.VolumeMount{},
mountsB: []v1.VolumeMount{},
expected: true,
},
{
name: "same mounts",
mountsA: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
},
mountsB: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
},
expected: true,
},
{
name: "different mounts",
mountsA: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPathExpr: "$(POD_NAME)",
},
},
mountsB: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
},
expected: false,
},
{
name: "one equal mount one different",
mountsA: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
{
Name: "poddata",
ReadOnly: false,
MountPath: "/poddata",
SubPathExpr: "$(POD_NAME)",
},
},
mountsB: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
{
Name: "etc",
ReadOnly: true,
MountPath: "/etc",
},
},
expected: false,
},
{
name: "same mounts, different order",
mountsA: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
{
Name: "etc",
ReadOnly: true,
MountPath: "/etc",
},
},
mountsB: []v1.VolumeMount{
{
Name: "etc",
ReadOnly: true,
MountPath: "/etc",
},
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
},
expected: true,
},
{
name: "new mounts added",
mountsA: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
},
mountsB: []v1.VolumeMount{
{
Name: "etc",
ReadOnly: true,
MountPath: "/etc",
},
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
},
expected: false,
},
{
name: "one mount removed",
mountsA: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
{
Name: "etc",
ReadOnly: true,
MountPath: "/etc",
},
},
mountsB: []v1.VolumeMount{
{
Name: "data",
ReadOnly: false,
MountPath: "/data",
SubPath: "subdir",
},
},
expected: false,
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
got := compareVolumeMounts(tt.mountsA, tt.mountsB)
assert.Equal(t, tt.expected, got)
})
}
}
func TestGetSwitchoverSchedule(t *testing.T) {
now := time.Now()
futureTimeStart := now.Add(1 * time.Hour)
futureWindowTimeStart := futureTimeStart.Format("15:04")
futureWindowTimeEnd := now.Add(2 * time.Hour).Format("15:04")
pastTimeStart := now.Add(-2 * time.Hour)
pastWindowTimeStart := pastTimeStart.Format("15:04")
pastWindowTimeEnd := now.Add(-1 * time.Hour).Format("15:04")
tests := []struct {
name string
windows []acidv1.MaintenanceWindow
expected string
}{
{
name: "everyday maintenance windows is later today",
windows: []acidv1.MaintenanceWindow{
{
Everyday: true,
StartTime: mustParseTime(futureWindowTimeStart),
EndTime: mustParseTime(futureWindowTimeEnd),
},
},
expected: futureTimeStart.Format("2006-01-02T15:04+00"),
},
{
name: "everyday maintenance window is tomorrow",
windows: []acidv1.MaintenanceWindow{
{
Everyday: true,
StartTime: mustParseTime(pastWindowTimeStart),
EndTime: mustParseTime(pastWindowTimeEnd),
},
},
expected: pastTimeStart.AddDate(0, 0, 1).Format("2006-01-02T15:04+00"),
},
{
name: "weekday maintenance windows is later today",
windows: []acidv1.MaintenanceWindow{
{
Weekday: now.Weekday(),
StartTime: mustParseTime(futureWindowTimeStart),
EndTime: mustParseTime(futureWindowTimeEnd),
},
},
expected: futureTimeStart.Format("2006-01-02T15:04+00"),
},
{
name: "weekday maintenance windows is passed for today",
windows: []acidv1.MaintenanceWindow{
{
Weekday: now.Weekday(),
StartTime: mustParseTime(pastWindowTimeStart),
EndTime: mustParseTime(pastWindowTimeEnd),
},
},
expected: pastTimeStart.AddDate(0, 0, 7).Format("2006-01-02T15:04+00"),
},
{
name: "choose the earliest window",
windows: []acidv1.MaintenanceWindow{
{
Weekday: now.AddDate(0, 0, 2).Weekday(),
StartTime: mustParseTime(futureWindowTimeStart),
EndTime: mustParseTime(futureWindowTimeEnd),
},
{
Everyday: true,
StartTime: mustParseTime(pastWindowTimeStart),
EndTime: mustParseTime(pastWindowTimeEnd),
},
},
expected: pastTimeStart.AddDate(0, 0, 1).Format("2006-01-02T15:04+00"),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cluster.Spec.MaintenanceWindows = tt.windows
schedule := cluster.GetSwitchoverSchedule()
if schedule != tt.expected {
t.Errorf("Expected GetSwitchoverSchedule to return %s, returned: %s", tt.expected, schedule)
}
})
}
}


@ -2,7 +2,9 @@ package cluster
import ( import (
"context" "context"
"encoding/json"
"fmt" "fmt"
"reflect"
"strings" "strings"
"time" "time"
@ -590,7 +592,7 @@ func (c *Cluster) deleteConnectionPooler(role PostgresRole) (err error) {
// Lack of connection pooler objects is not a fatal error, just log it if // Lack of connection pooler objects is not a fatal error, just log it if
// it was present before in the manifest // it was present before in the manifest
if c.ConnectionPooler[role] == nil || role == "" { if c.ConnectionPooler[role] == nil || role == "" {
c.logger.Debugf("no connection pooler to delete") c.logger.Debug("no connection pooler to delete")
return nil return nil
} }
@ -621,7 +623,7 @@ func (c *Cluster) deleteConnectionPooler(role PostgresRole) (err error) {
// Repeat the same for the service object // Repeat the same for the service object
service := c.ConnectionPooler[role].Service service := c.ConnectionPooler[role].Service
if service == nil { if service == nil {
c.logger.Debugf("no connection pooler service object to delete") c.logger.Debug("no connection pooler service object to delete")
} else { } else {
err = c.KubeClient. err = c.KubeClient.
@ -654,7 +656,7 @@ func (c *Cluster) deleteConnectionPoolerSecret() (err error) {
if err != nil { if err != nil {
c.logger.Debugf("could not get connection pooler secret %s: %v", secretName, err) c.logger.Debugf("could not get connection pooler secret %s: %v", secretName, err)
} else { } else {
if err = c.deleteSecret(secret.UID, *secret); err != nil { if err = c.deleteSecret(secret.UID); err != nil {
return fmt.Errorf("could not delete pooler secret: %v", err) return fmt.Errorf("could not delete pooler secret: %v", err)
} }
} }
@ -663,11 +665,19 @@ func (c *Cluster) deleteConnectionPoolerSecret() (err error) {
// Perform actual patching of a connection pooler deployment, assuming that all // Perform actual patching of a connection pooler deployment, assuming that all
// the check were already done before. // the check were already done before.
func updateConnectionPoolerDeployment(KubeClient k8sutil.KubernetesClient, newDeployment *appsv1.Deployment) (*appsv1.Deployment, error) { func updateConnectionPoolerDeployment(KubeClient k8sutil.KubernetesClient, newDeployment *appsv1.Deployment, doUpdate bool) (*appsv1.Deployment, error) {
if newDeployment == nil { if newDeployment == nil {
return nil, fmt.Errorf("there is no connection pooler in the cluster") return nil, fmt.Errorf("there is no connection pooler in the cluster")
} }
if doUpdate {
updatedDeployment, err := KubeClient.Deployments(newDeployment.Namespace).Update(context.TODO(), newDeployment, metav1.UpdateOptions{})
if err != nil {
return nil, fmt.Errorf("could not update pooler deployment to match desired state: %v", err)
}
return updatedDeployment, nil
}
patchData, err := specPatch(newDeployment.Spec) patchData, err := specPatch(newDeployment.Spec)
if err != nil { if err != nil {
return nil, fmt.Errorf("could not form patch for the connection pooler deployment: %v", err) return nil, fmt.Errorf("could not form patch for the connection pooler deployment: %v", err)
@ -691,8 +701,8 @@ func updateConnectionPoolerDeployment(KubeClient k8sutil.KubernetesClient, newDe
return deployment, nil return deployment, nil
} }
// updateConnectionPoolerAnnotations updates the annotations of connection pooler deployment // patchConnectionPoolerAnnotations updates the annotations of connection pooler deployment
func updateConnectionPoolerAnnotations(KubeClient k8sutil.KubernetesClient, deployment *appsv1.Deployment, annotations map[string]string) (*appsv1.Deployment, error) { func patchConnectionPoolerAnnotations(KubeClient k8sutil.KubernetesClient, deployment *appsv1.Deployment, annotations map[string]string) (*appsv1.Deployment, error) {
patchData, err := metaAnnotationsPatch(annotations) patchData, err := metaAnnotationsPatch(annotations)
if err != nil { if err != nil {
return nil, fmt.Errorf("could not form patch for the connection pooler deployment metadata: %v", err) return nil, fmt.Errorf("could not form patch for the connection pooler deployment metadata: %v", err)
@ -751,6 +761,7 @@ func (c *Cluster) needSyncConnectionPoolerDefaults(Config *Config, spec *acidv1.
if spec == nil { if spec == nil {
spec = &acidv1.ConnectionPooler{} spec = &acidv1.ConnectionPooler{}
} }
if spec.NumberOfInstances == nil && if spec.NumberOfInstances == nil &&
*deployment.Spec.Replicas != *config.NumberOfInstances { *deployment.Spec.Replicas != *config.NumberOfInstances {
@ -967,6 +978,7 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
err error err error
) )
updatedPodAnnotations := map[string]*string{}
syncReason := make([]string, 0) syncReason := make([]string, 0)
deployment, err = c.KubeClient. deployment, err = c.KubeClient.
Deployments(c.Namespace). Deployments(c.Namespace).
@ -1014,18 +1026,48 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
newConnectionPooler = &acidv1.ConnectionPooler{} newConnectionPooler = &acidv1.ConnectionPooler{}
} }
var specSync bool var specSync, updateDeployment bool
var specReason []string var specReason []string
if !reflect.DeepEqual(deployment.ObjectMeta.OwnerReferences, c.ownerReferences()) {
c.logger.Info("new connection pooler owner references do not match the current ones")
updateDeployment = true
}
if oldSpec != nil { if oldSpec != nil {
specSync, specReason = needSyncConnectionPoolerSpecs(oldConnectionPooler, newConnectionPooler, c.logger) specSync, specReason = needSyncConnectionPoolerSpecs(oldConnectionPooler, newConnectionPooler, c.logger)
syncReason = append(syncReason, specReason...) syncReason = append(syncReason, specReason...)
} }
newPodAnnotations := c.annotationsSet(c.generatePodAnnotations(&c.Spec))
deletedPodAnnotations := []string{}
if changed, reason := c.compareAnnotations(deployment.Spec.Template.Annotations, newPodAnnotations, &deletedPodAnnotations); changed {
specSync = true
syncReason = append(syncReason, []string{"new connection pooler's pod template annotations do not match the current ones: " + reason}...)
for _, anno := range deletedPodAnnotations {
updatedPodAnnotations[anno] = nil
}
templateMetadataReq := map[string]map[string]map[string]map[string]map[string]*string{
"spec": {"template": {"metadata": {"annotations": updatedPodAnnotations}}}}
patch, err := json.Marshal(templateMetadataReq)
if err != nil {
return nil, fmt.Errorf("could not marshal ObjectMeta for %s connection pooler's pod template: %v", role, err)
}
deployment, err = c.KubeClient.Deployments(c.Namespace).Patch(context.TODO(),
deployment.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "")
if err != nil {
c.logger.Errorf("failed to patch %s connection pooler's pod template: %v", role, err)
return nil, err
}
deployment.Spec.Template.Annotations = newPodAnnotations
}
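The block above leans on a detail of Kubernetes merge patches that is easy to miss: setting a key to JSON null removes it from the object, which is how annotations that disappeared from the desired state get cleaned off the pod template. A self-contained sketch of building such a patch body (illustrative only; the operator expresses the same shape with the deeply nested map type seen in the diff):

package sketch

import "encoding/json"

// annotationsRemovalPatch builds a merge-patch body for a Deployment's pod
// template: keys present in desired are set to their new value, keys listed
// in deleted are set to null so the API server drops them.
func annotationsRemovalPatch(desired map[string]string, deleted []string) ([]byte, error) {
	annotations := map[string]*string{}
	for k := range desired {
		v := desired[k]
		annotations[k] = &v
	}
	for _, k := range deleted {
		annotations[k] = nil // marshals to JSON null -> key is removed
	}
	patch := map[string]interface{}{
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"metadata": map[string]interface{}{
					"annotations": annotations,
				},
			},
		},
	}
	return json.Marshal(patch)
}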
defaultsSync, defaultsReason := c.needSyncConnectionPoolerDefaults(&c.Config, newConnectionPooler, deployment) defaultsSync, defaultsReason := c.needSyncConnectionPoolerDefaults(&c.Config, newConnectionPooler, deployment)
syncReason = append(syncReason, defaultsReason...) syncReason = append(syncReason, defaultsReason...)
if specSync || defaultsSync { if specSync || defaultsSync || updateDeployment {
c.logger.Infof("update connection pooler deployment %s, reason: %+v", c.logger.Infof("update connection pooler deployment %s, reason: %+v",
c.connectionPoolerName(role), syncReason) c.connectionPoolerName(role), syncReason)
newDeployment, err = c.generateConnectionPoolerDeployment(c.ConnectionPooler[role]) newDeployment, err = c.generateConnectionPoolerDeployment(c.ConnectionPooler[role])
@ -1033,23 +1075,23 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
return syncReason, fmt.Errorf("could not generate deployment for connection pooler: %v", err) return syncReason, fmt.Errorf("could not generate deployment for connection pooler: %v", err)
} }
deployment, err = updateConnectionPoolerDeployment(c.KubeClient, newDeployment) deployment, err = updateConnectionPoolerDeployment(c.KubeClient, newDeployment, updateDeployment)
if err != nil { if err != nil {
return syncReason, err return syncReason, err
} }
c.ConnectionPooler[role].Deployment = deployment c.ConnectionPooler[role].Deployment = deployment
} }
}
newAnnotations := c.AnnotationsToPropagate(c.annotationsSet(c.ConnectionPooler[role].Deployment.Annotations)) newAnnotations := c.AnnotationsToPropagate(c.annotationsSet(nil)) // including the downscaling annotations
if newAnnotations != nil { if changed, _ := c.compareAnnotations(deployment.Annotations, newAnnotations, nil); changed {
deployment, err = updateConnectionPoolerAnnotations(c.KubeClient, c.ConnectionPooler[role].Deployment, newAnnotations) deployment, err = patchConnectionPoolerAnnotations(c.KubeClient, deployment, newAnnotations)
if err != nil { if err != nil {
return nil, err return nil, err
} }
c.ConnectionPooler[role].Deployment = deployment c.ConnectionPooler[role].Deployment = deployment
} }
}
// check if pooler pods must be replaced due to secret update // check if pooler pods must be replaced due to secret update
listOptions := metav1.ListOptions{ listOptions := metav1.ListOptions{
@ -1076,22 +1118,32 @@ func (c *Cluster) syncConnectionPoolerWorker(oldSpec, newSpec *acidv1.Postgresql
if err != nil { if err != nil {
return nil, fmt.Errorf("could not delete pooler pod: %v", err) return nil, fmt.Errorf("could not delete pooler pod: %v", err)
} }
} else if changed, _ := c.compareAnnotations(pod.Annotations, deployment.Spec.Template.Annotations, nil); changed {
metadataReq := map[string]map[string]map[string]*string{"metadata": {}}
for anno, val := range deployment.Spec.Template.Annotations {
updatedPodAnnotations[anno] = &val
}
metadataReq["metadata"]["annotations"] = updatedPodAnnotations
patch, err := json.Marshal(metadataReq)
if err != nil {
return nil, fmt.Errorf("could not marshal ObjectMeta for %s connection pooler's pods: %v", role, err)
}
_, err = c.KubeClient.Pods(pod.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
if err != nil {
return nil, fmt.Errorf("could not patch annotations for %s connection pooler's pod %q: %v", role, pod.Name, err)
}
} }
} }
if service, err = c.KubeClient.Services(c.Namespace).Get(context.TODO(), c.connectionPoolerName(role), metav1.GetOptions{}); err == nil { if service, err = c.KubeClient.Services(c.Namespace).Get(context.TODO(), c.connectionPoolerName(role), metav1.GetOptions{}); err == nil {
c.ConnectionPooler[role].Service = service c.ConnectionPooler[role].Service = service
desiredSvc := c.generateConnectionPoolerService(c.ConnectionPooler[role]) desiredSvc := c.generateConnectionPoolerService(c.ConnectionPooler[role])
if match, reason := c.compareServices(service, desiredSvc); !match {
syncReason = append(syncReason, reason)
c.logServiceChanges(role, service, desiredSvc, false, reason)
newService, err = c.updateService(role, service, desiredSvc) newService, err = c.updateService(role, service, desiredSvc)
if err != nil { if err != nil {
return syncReason, fmt.Errorf("could not update %s service to match desired state: %v", role, err) return syncReason, fmt.Errorf("could not update %s service to match desired state: %v", role, err)
} }
c.ConnectionPooler[role].Service = newService c.ConnectionPooler[role].Service = newService
c.logger.Infof("%s service %q is in the desired state now", role, util.NameFromMeta(desiredSvc.ObjectMeta))
}
return NoSync, nil return NoSync, nil
} }


@ -969,7 +969,7 @@ func TestPoolerTLS(t *testing.T) {
TLS: &acidv1.TLSDescription{ TLS: &acidv1.TLSDescription{
SecretName: tlsSecretName, CAFile: "ca.crt"}, SecretName: tlsSecretName, CAFile: "ca.crt"},
AdditionalVolumes: []acidv1.AdditionalVolume{ AdditionalVolumes: []acidv1.AdditionalVolume{
acidv1.AdditionalVolume{ {
Name: tlsSecretName, Name: tlsSecretName,
MountPath: mountPath, MountPath: mountPath,
VolumeSource: v1.VolumeSource{ VolumeSource: v1.VolumeSource{
@ -1077,6 +1077,9 @@ func TestConnectionPoolerServiceSpec(t *testing.T) {
ConnectionPoolerDefaultMemoryRequest: "100Mi", ConnectionPoolerDefaultMemoryRequest: "100Mi",
ConnectionPoolerDefaultMemoryLimit: "100Mi", ConnectionPoolerDefaultMemoryLimit: "100Mi",
}, },
Resources: config.Resources{
EnableOwnerReferences: util.True(),
},
}, },
}, k8sutil.KubernetesClient{}, acidv1.Postgresql{}, logger, eventRecorder) }, k8sutil.KubernetesClient{}, acidv1.Postgresql{}, logger, eventRecorder)
cluster.Statefulset = &appsv1.StatefulSet{ cluster.Statefulset = &appsv1.StatefulSet{


@ -46,12 +46,15 @@ const (
createExtensionSQL = `CREATE EXTENSION IF NOT EXISTS "%s" SCHEMA "%s"` createExtensionSQL = `CREATE EXTENSION IF NOT EXISTS "%s" SCHEMA "%s"`
alterExtensionSQL = `ALTER EXTENSION "%s" SET SCHEMA "%s"` alterExtensionSQL = `ALTER EXTENSION "%s" SET SCHEMA "%s"`
getPublicationsSQL = `SELECT p.pubname, string_agg(pt.schemaname || '.' || pt.tablename, ', ' ORDER BY pt.schemaname, pt.tablename) getPublicationsSQL = `SELECT p.pubname, COALESCE(string_agg(pt.schemaname || '.' || pt.tablename, ', ' ORDER BY pt.schemaname, pt.tablename), '') AS pubtables
FROM pg_publication p FROM pg_publication p
LEFT JOIN pg_publication_tables pt ON pt.pubname = p.pubname LEFT JOIN pg_publication_tables pt ON pt.pubname = p.pubname
WHERE p.pubowner = 'postgres'::regrole
AND p.pubname LIKE 'fes_%'
GROUP BY p.pubname;` GROUP BY p.pubname;`
createPublicationSQL = `CREATE PUBLICATION "%s" FOR TABLE %s WITH (publish = 'insert, update');` createPublicationSQL = `CREATE PUBLICATION "%s" FOR TABLE %s WITH (publish = 'insert, update');`
alterPublicationSQL = `ALTER PUBLICATION "%s" SET TABLE %s;` alterPublicationSQL = `ALTER PUBLICATION "%s" SET TABLE %s;`
dropPublicationSQL = `DROP PUBLICATION "%s";`
globalDefaultPrivilegesSQL = `SET ROLE TO "%s"; globalDefaultPrivilegesSQL = `SET ROLE TO "%s";
ALTER DEFAULT PRIVILEGES GRANT USAGE ON SCHEMAS TO "%s","%s"; ALTER DEFAULT PRIVILEGES GRANT USAGE ON SCHEMAS TO "%s","%s";
@ -108,7 +111,7 @@ func (c *Cluster) pgConnectionString(dbname string) string {
func (c *Cluster) databaseAccessDisabled() bool { func (c *Cluster) databaseAccessDisabled() bool {
if !c.OpConfig.EnableDBAccess { if !c.OpConfig.EnableDBAccess {
c.logger.Debugf("database access is disabled") c.logger.Debug("database access is disabled")
} }
return !c.OpConfig.EnableDBAccess return !c.OpConfig.EnableDBAccess
@ -628,6 +631,14 @@ func (c *Cluster) getPublications() (publications map[string]string, err error)
return dbPublications, err return dbPublications, err
} }
func (c *Cluster) executeDropPublication(pubName string) error {
c.logger.Infof("dropping publication %q", pubName)
if _, err := c.pgDb.Exec(fmt.Sprintf(dropPublicationSQL, pubName)); err != nil {
return fmt.Errorf("could not execute drop publication: %v", err)
}
return nil
}
// executeCreatePublication creates new publication for given tables // executeCreatePublication creates new publication for given tables
// The caller is responsible for opening and closing the database connection. // The caller is responsible for opening and closing the database connection.
func (c *Cluster) executeCreatePublication(pubName, tableList string) error { func (c *Cluster) executeCreatePublication(pubName, tableList string) error {


@ -59,7 +59,7 @@ func (c *Cluster) ExecCommand(podName *spec.NamespacedName, command ...string) (
return "", fmt.Errorf("failed to init executor: %v", err) return "", fmt.Errorf("failed to init executor: %v", err)
} }
err = exec.Stream(remotecommand.StreamOptions{ err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
Stdout: &execOut, Stdout: &execOut,
Stderr: &execErr, Stderr: &execErr,
Tty: false, Tty: false,
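exec.Stream was deprecated in client-go in favour of StreamWithContext, which lets the caller bound the remote command with a context. A minimal sketch, assuming an already-initialised remotecommand.Executor; the timeout value is an arbitrary example, not an operator setting:

package sketch

import (
	"bytes"
	"context"
	"time"

	"k8s.io/client-go/tools/remotecommand"
)

// runWithTimeout streams a remote command but gives up after the timeout,
// instead of blocking indefinitely the way the old Stream call could.
func runWithTimeout(exec remotecommand.Executor, timeout time.Duration) (string, string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	var stdout, stderr bytes.Buffer
	err := exec.StreamWithContext(ctx, remotecommand.StreamOptions{
		Stdout: &stdout,
		Stderr: &stderr,
		Tty:    false,
	})
	return stdout.String(), stderr.String(), err
}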


@ -4,7 +4,9 @@ import (
"context" "context"
"encoding/json" "encoding/json"
"fmt" "fmt"
"maps"
"path" "path"
"slices"
"sort" "sort"
"strings" "strings"
@ -12,19 +14,16 @@ import (
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
batchv1 "k8s.io/api/batch/v1"
v1 "k8s.io/api/core/v1" v1 "k8s.io/api/core/v1"
policyv1 "k8s.io/api/policy/v1" policyv1 "k8s.io/api/policy/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors" apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr" "k8s.io/apimachinery/pkg/util/intstr"
"golang.org/x/exp/maps"
"golang.org/x/exp/slices"
batchv1 "k8s.io/api/batch/v1"
"k8s.io/apimachinery/pkg/labels"
acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1" acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
"github.com/zalando/postgres-operator/pkg/spec" "github.com/zalando/postgres-operator/pkg/spec"
"github.com/zalando/postgres-operator/pkg/util" "github.com/zalando/postgres-operator/pkg/util"
@ -47,11 +46,6 @@ const (
operatorPort = 8080 operatorPort = 8080
) )
type pgUser struct {
Password string `json:"password"`
Options []string `json:"options"`
}
type patroniDCS struct { type patroniDCS struct {
TTL uint32 `json:"ttl,omitempty"` TTL uint32 `json:"ttl,omitempty"`
LoopWait uint32 `json:"loop_wait,omitempty"` LoopWait uint32 `json:"loop_wait,omitempty"`
@ -79,19 +73,13 @@ func (c *Cluster) statefulSetName() string {
return c.Name return c.Name
} }
func (c *Cluster) endpointName(role PostgresRole) string {
name := c.Name
if role == Replica {
name = fmt.Sprintf("%s-%s", name, "repl")
}
return name
}
func (c *Cluster) serviceName(role PostgresRole) string { func (c *Cluster) serviceName(role PostgresRole) string {
name := c.Name name := c.Name
if role == Replica { switch role {
case Replica:
name = fmt.Sprintf("%s-%s", name, "repl") name = fmt.Sprintf("%s-%s", name, "repl")
case Patroni:
name = fmt.Sprintf("%s-%s", name, "config")
} }
return name return name
@ -120,10 +108,15 @@ func (c *Cluster) servicePort(role PostgresRole) int32 {
return pgPort return pgPort
} }
func (c *Cluster) podDisruptionBudgetName() string { func (c *Cluster) PrimaryPodDisruptionBudgetName() string {
return c.OpConfig.PDBNameFormat.Format("cluster", c.Name) return c.OpConfig.PDBNameFormat.Format("cluster", c.Name)
} }
func (c *Cluster) criticalOpPodDisruptionBudgetName() string {
pdbTemplate := config.StringTemplate("postgres-{cluster}-critical-op-pdb")
return pdbTemplate.Format("cluster", c.Name)
}
func makeDefaultResources(config *config.Config) acidv1.Resources { func makeDefaultResources(config *config.Config) acidv1.Resources {
defaultRequests := acidv1.ResourceDescription{ defaultRequests := acidv1.ResourceDescription{
@ -177,7 +170,7 @@ func (c *Cluster) enforceMinResourceLimits(resources *v1.ResourceRequirements) e
if isSmaller { if isSmaller {
msg = fmt.Sprintf("defined CPU limit %s for %q container is below required minimum %s and will be increased", msg = fmt.Sprintf("defined CPU limit %s for %q container is below required minimum %s and will be increased",
cpuLimit.String(), constants.PostgresContainerName, minCPULimit) cpuLimit.String(), constants.PostgresContainerName, minCPULimit)
c.logger.Warningf(msg) c.logger.Warningf("%s", msg)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "ResourceLimits", msg) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "ResourceLimits", msg)
resources.Limits[v1.ResourceCPU], _ = resource.ParseQuantity(minCPULimit) resources.Limits[v1.ResourceCPU], _ = resource.ParseQuantity(minCPULimit)
} }
@ -194,7 +187,7 @@ func (c *Cluster) enforceMinResourceLimits(resources *v1.ResourceRequirements) e
if isSmaller { if isSmaller {
msg = fmt.Sprintf("defined memory limit %s for %q container is below required minimum %s and will be increased", msg = fmt.Sprintf("defined memory limit %s for %q container is below required minimum %s and will be increased",
memoryLimit.String(), constants.PostgresContainerName, minMemoryLimit) memoryLimit.String(), constants.PostgresContainerName, minMemoryLimit)
c.logger.Warningf(msg) c.logger.Warningf("%s", msg)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "ResourceLimits", msg) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "ResourceLimits", msg)
resources.Limits[v1.ResourceMemory], _ = resource.ParseQuantity(minMemoryLimit) resources.Limits[v1.ResourceMemory], _ = resource.ParseQuantity(minMemoryLimit)
} }
@ -530,13 +523,14 @@ func (c *Cluster) nodeAffinity(nodeReadinessLabel map[string]string, nodeAffinit
}, },
} }
} else { } else {
if c.OpConfig.NodeReadinessLabelMerge == "OR" { switch c.OpConfig.NodeReadinessLabelMerge {
case "OR":
manifestTerms := nodeAffinityCopy.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms manifestTerms := nodeAffinityCopy.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms
manifestTerms = append(manifestTerms, nodeReadinessSelectorTerm) manifestTerms = append(manifestTerms, nodeReadinessSelectorTerm)
nodeAffinityCopy.RequiredDuringSchedulingIgnoredDuringExecution = &v1.NodeSelector{ nodeAffinityCopy.RequiredDuringSchedulingIgnoredDuringExecution = &v1.NodeSelector{
NodeSelectorTerms: manifestTerms, NodeSelectorTerms: manifestTerms,
} }
} else if c.OpConfig.NodeReadinessLabelMerge == "AND" { case "AND":
for i, nodeSelectorTerm := range nodeAffinityCopy.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms { for i, nodeSelectorTerm := range nodeAffinityCopy.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms {
manifestExpressions := nodeSelectorTerm.MatchExpressions manifestExpressions := nodeSelectorTerm.MatchExpressions
manifestExpressions = append(manifestExpressions, matchExpressions...) manifestExpressions = append(manifestExpressions, matchExpressions...)
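The switch above decides how the operator-wide node readiness label is merged into node affinity from the manifest: OR appends it as an extra NodeSelectorTerm (terms are ORed by the scheduler), while AND appends its match expression to every existing term (expressions inside a term are ANDed). A stripped-down sketch of the two merge modes, with all names purely illustrative:

package sketch

import v1 "k8s.io/api/core/v1"

// mergeReadinessTerm combines manifest-provided selector terms with a
// readiness requirement, either as an alternative (OR) or as an extra
// condition on every term (AND).
func mergeReadinessTerm(manifestTerms []v1.NodeSelectorTerm, readiness v1.NodeSelectorRequirement, mode string) []v1.NodeSelectorTerm {
	switch mode {
	case "OR":
		// Terms are ORed: nodes matching either the manifest terms or the
		// readiness requirement are eligible.
		return append(manifestTerms, v1.NodeSelectorTerm{
			MatchExpressions: []v1.NodeSelectorRequirement{readiness},
		})
	case "AND":
		// Expressions within a term are ANDed: every manifest term now also
		// requires the readiness label.
		merged := make([]v1.NodeSelectorTerm, 0, len(manifestTerms))
		for _, term := range manifestTerms {
			exprs := append(append([]v1.NodeSelectorRequirement{}, term.MatchExpressions...), readiness)
			term.MatchExpressions = exprs
			merged = append(merged, term)
		}
		return merged
	}
	return manifestTerms
}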
@ -668,13 +662,19 @@ func isBootstrapOnlyParameter(param string) bool {
} }
func generateVolumeMounts(volume acidv1.Volume) []v1.VolumeMount { func generateVolumeMounts(volume acidv1.Volume) []v1.VolumeMount {
return []v1.VolumeMount{ volumeMount := []v1.VolumeMount{
{ {
Name: constants.DataVolumeName, Name: constants.DataVolumeName,
MountPath: constants.PostgresDataMount, //TODO: fetch from manifest MountPath: constants.PostgresDataMount, //TODO: fetch from manifest
SubPath: volume.SubPath,
}, },
} }
if volume.IsSubPathExpr != nil && *volume.IsSubPathExpr {
volumeMount[0].SubPathExpr = volume.SubPath
} else {
volumeMount[0].SubPath = volume.SubPath
}
return volumeMount
} }
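The new IsSubPathExpr flag switches the data mount from a literal SubPath to SubPathExpr, where the kubelet expands $(ENV_VAR) references at pod start. For an expression like $(POD_NAME) to resolve, the container has to define that environment variable, typically via the downward API. A hedged sketch of what such a mount and its supporting env var look like (field values are examples, not operator defaults):

package sketch

import v1 "k8s.io/api/core/v1"

// podNameScopedMount mounts a volume under a per-pod subdirectory by
// expanding $(POD_NAME) at runtime; the matching downward-API env var
// must exist on the same container for the expansion to work.
func podNameScopedMount(volumeName, mountPath string) (v1.VolumeMount, v1.EnvVar) {
	mount := v1.VolumeMount{
		Name:        volumeName,
		MountPath:   mountPath,
		SubPathExpr: "$(POD_NAME)",
	}
	podName := v1.EnvVar{
		Name: "POD_NAME",
		ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
		},
	}
	return mount, podName
}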
func generateContainer( func generateContainer(
@ -744,7 +744,7 @@ func (c *Cluster) generateSidecarContainers(sidecars []acidv1.Sidecar,
} }
// adds common fields to sidecars // adds common fields to sidecars
func patchSidecarContainers(in []v1.Container, volumeMounts []v1.VolumeMount, superUserName string, credentialsSecretName string, logger *logrus.Entry) []v1.Container { func patchSidecarContainers(in []v1.Container, volumeMounts []v1.VolumeMount, superUserName string, credentialsSecretName string) []v1.Container {
result := []v1.Container{} result := []v1.Container{}
for _, container := range in { for _, container := range in {
@ -886,7 +886,7 @@ func (c *Cluster) generatePodTemplate(
addSecretVolume(&podSpec, additionalSecretMount, additionalSecretMountPath) addSecretVolume(&podSpec, additionalSecretMount, additionalSecretMountPath)
} }
if additionalVolumes != nil { if len(additionalVolumes) > 0 {
c.addAdditionalVolumes(&podSpec, additionalVolumes) c.addAdditionalVolumes(&podSpec, additionalVolumes)
} }
@ -1010,6 +1010,9 @@ func (c *Cluster) generateSpiloPodEnvVars(
if c.patroniUsesKubernetes() { if c.patroniUsesKubernetes() {
envVars = append(envVars, v1.EnvVar{Name: "DCS_ENABLE_KUBERNETES_API", Value: "true"}) envVars = append(envVars, v1.EnvVar{Name: "DCS_ENABLE_KUBERNETES_API", Value: "true"})
if c.OpConfig.EnablePodDisruptionBudget != nil && *c.OpConfig.EnablePodDisruptionBudget {
envVars = append(envVars, v1.EnvVar{Name: "KUBERNETES_BOOTSTRAP_LABELS", Value: "{\"critical-operation\":\"true\"}"})
}
} else { } else {
envVars = append(envVars, v1.EnvVar{Name: "ETCD_HOST", Value: c.OpConfig.EtcdHost}) envVars = append(envVars, v1.EnvVar{Name: "ETCD_HOST", Value: c.OpConfig.EtcdHost})
} }
@ -1227,6 +1230,7 @@ func getSidecarContainer(sidecar acidv1.Sidecar, index int, resources *v1.Resour
Resources: *resources, Resources: *resources,
Env: sidecar.Env, Env: sidecar.Env,
Ports: sidecar.Ports, Ports: sidecar.Ports,
Command: sidecar.Command,
} }
} }
@ -1294,7 +1298,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
return nil, fmt.Errorf("could not generate resource requirements: %v", err) return nil, fmt.Errorf("could not generate resource requirements: %v", err)
} }
if spec.InitContainers != nil && len(spec.InitContainers) > 0 { if len(spec.InitContainers) > 0 {
if c.OpConfig.EnableInitContainers != nil && !(*c.OpConfig.EnableInitContainers) { if c.OpConfig.EnableInitContainers != nil && !(*c.OpConfig.EnableInitContainers) {
c.logger.Warningf("initContainers specified but disabled in configuration - next statefulset creation would fail") c.logger.Warningf("initContainers specified but disabled in configuration - next statefulset creation would fail")
} }
@ -1397,7 +1401,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
// generate container specs for sidecars specified in the cluster manifest // generate container specs for sidecars specified in the cluster manifest
clusterSpecificSidecars := []v1.Container{} clusterSpecificSidecars := []v1.Container{}
if spec.Sidecars != nil && len(spec.Sidecars) > 0 { if len(spec.Sidecars) > 0 {
// warn if sidecars are defined, but globally disabled (does not apply to globally defined sidecars) // warn if sidecars are defined, but globally disabled (does not apply to globally defined sidecars)
if c.OpConfig.EnableSidecars != nil && !(*c.OpConfig.EnableSidecars) { if c.OpConfig.EnableSidecars != nil && !(*c.OpConfig.EnableSidecars) {
c.logger.Warningf("sidecars specified but disabled in configuration - next statefulset creation would fail") c.logger.Warningf("sidecars specified but disabled in configuration - next statefulset creation would fail")
@ -1449,7 +1453,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
containerName, containerName) containerName, containerName)
} }
sidecarContainers = patchSidecarContainers(sidecarContainers, volumeMounts, c.OpConfig.SuperUsername, c.credentialSecretName(c.OpConfig.SuperUsername), c.logger) sidecarContainers = patchSidecarContainers(sidecarContainers, volumeMounts, c.OpConfig.SuperUsername, c.credentialSecretName(c.OpConfig.SuperUsername))
tolerationSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration) tolerationSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration)
effectivePodPriorityClassName := util.Coalesce(spec.PodPriorityClassName, c.OpConfig.PodPriorityClassName) effectivePodPriorityClassName := util.Coalesce(spec.PodPriorityClassName, c.OpConfig.PodPriorityClassName)
@ -1501,11 +1505,12 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
updateStrategy := appsv1.StatefulSetUpdateStrategy{Type: appsv1.OnDeleteStatefulSetStrategyType} updateStrategy := appsv1.StatefulSetUpdateStrategy{Type: appsv1.OnDeleteStatefulSetStrategyType}
var podManagementPolicy appsv1.PodManagementPolicyType var podManagementPolicy appsv1.PodManagementPolicyType
if c.OpConfig.PodManagementPolicy == "ordered_ready" { switch c.OpConfig.PodManagementPolicy {
case "ordered_ready":
podManagementPolicy = appsv1.OrderedReadyPodManagement podManagementPolicy = appsv1.OrderedReadyPodManagement
} else if c.OpConfig.PodManagementPolicy == "parallel" { case "parallel":
podManagementPolicy = appsv1.ParallelPodManagement podManagementPolicy = appsv1.ParallelPodManagement
} else { default:
return nil, fmt.Errorf("could not set the pod management policy to the unknown value: %v", c.OpConfig.PodManagementPolicy) return nil, fmt.Errorf("could not set the pod management policy to the unknown value: %v", c.OpConfig.PodManagementPolicy)
} }
@ -1528,6 +1533,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
Namespace: c.Namespace, Namespace: c.Namespace,
Labels: c.labelsSet(true), Labels: c.labelsSet(true),
Annotations: c.AnnotationsToPropagate(c.annotationsSet(nil)), Annotations: c.AnnotationsToPropagate(c.annotationsSet(nil)),
OwnerReferences: c.ownerReferences(),
}, },
Spec: appsv1.StatefulSetSpec{ Spec: appsv1.StatefulSetSpec{
Replicas: &numberOfInstances, Replicas: &numberOfInstances,
@ -1602,7 +1608,7 @@ func (c *Cluster) generatePodAnnotations(spec *acidv1.PostgresSpec) map[string]s
for k, v := range c.OpConfig.CustomPodAnnotations { for k, v := range c.OpConfig.CustomPodAnnotations {
annotations[k] = v annotations[k] = v
} }
if spec != nil || spec.PodAnnotations != nil { if spec.PodAnnotations != nil {
for k, v := range spec.PodAnnotations { for k, v := range spec.PodAnnotations {
annotations[k] = v annotations[k] = v
} }
@ -1820,11 +1826,18 @@ func (c *Cluster) addAdditionalVolumes(podSpec *v1.PodSpec,
for _, additionalVolume := range additionalVolumes { for _, additionalVolume := range additionalVolumes {
for _, target := range additionalVolume.TargetContainers { for _, target := range additionalVolume.TargetContainers {
if podSpec.Containers[i].Name == target || target == "all" { if podSpec.Containers[i].Name == target || target == "all" {
mounts = append(mounts, v1.VolumeMount{ v := v1.VolumeMount{
Name: additionalVolume.Name, Name: additionalVolume.Name,
MountPath: additionalVolume.MountPath, MountPath: additionalVolume.MountPath,
SubPath: additionalVolume.SubPath, }
})
if additionalVolume.IsSubPathExpr != nil && *additionalVolume.IsSubPathExpr {
v.SubPathExpr = additionalVolume.SubPath
} else {
v.SubPath = additionalVolume.SubPath
}
mounts = append(mounts, v)
} }
} }
} }
@ -1856,7 +1869,7 @@ func (c *Cluster) generatePersistentVolumeClaimTemplate(volumeSize, volumeStorag
}, },
Spec: v1.PersistentVolumeClaimSpec{ Spec: v1.PersistentVolumeClaimSpec{
AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce}, AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
Resources: v1.ResourceRequirements{ Resources: v1.VolumeResourceRequirements{
Requests: v1.ResourceList{ Requests: v1.ResourceList{
v1.ResourceStorage: quantity, v1.ResourceStorage: quantity,
}, },
@ -1872,18 +1885,16 @@ func (c *Cluster) generatePersistentVolumeClaimTemplate(volumeSize, volumeStorag
func (c *Cluster) generateUserSecrets() map[string]*v1.Secret { func (c *Cluster) generateUserSecrets() map[string]*v1.Secret {
secrets := make(map[string]*v1.Secret, len(c.pgUsers)+len(c.systemUsers)) secrets := make(map[string]*v1.Secret, len(c.pgUsers)+len(c.systemUsers))
namespace := c.Namespace
for username, pgUser := range c.pgUsers { for username, pgUser := range c.pgUsers {
//Skip users with no password i.e. human users (they'll be authenticated using pam) //Skip users with no password i.e. human users (they'll be authenticated using pam)
secret := c.generateSingleUserSecret(pgUser.Namespace, pgUser) secret := c.generateSingleUserSecret(pgUser)
if secret != nil { if secret != nil {
secrets[username] = secret secrets[username] = secret
} }
namespace = pgUser.Namespace
} }
/* special case for the system user */ /* special case for the system user */
for _, systemUser := range c.systemUsers { for _, systemUser := range c.systemUsers {
secret := c.generateSingleUserSecret(namespace, systemUser) secret := c.generateSingleUserSecret(systemUser)
if secret != nil { if secret != nil {
secrets[systemUser.Name] = secret secrets[systemUser.Name] = secret
} }
@ -1892,7 +1903,7 @@ func (c *Cluster) generateUserSecrets() map[string]*v1.Secret {
return secrets return secrets
} }
func (c *Cluster) generateSingleUserSecret(namespace string, pgUser spec.PgUser) *v1.Secret { func (c *Cluster) generateSingleUserSecret(pgUser spec.PgUser) *v1.Secret {
//Skip users with no password i.e. human users (they'll be authenticated using pam) //Skip users with no password i.e. human users (they'll be authenticated using pam)
if pgUser.Password == "" { if pgUser.Password == "" {
if pgUser.Origin != spec.RoleOriginTeamsAPI { if pgUser.Origin != spec.RoleOriginTeamsAPI {
@ -1916,12 +1927,21 @@ func (c *Cluster) generateSingleUserSecret(namespace string, pgUser spec.PgUser)
lbls = c.connectionPoolerLabels("", false).MatchLabels lbls = c.connectionPoolerLabels("", false).MatchLabels
} }
// if secret lives in another namespace we cannot set ownerReferences
var ownerReferences []metav1.OwnerReference
if c.Config.OpConfig.EnableCrossNamespaceSecret && c.Postgresql.ObjectMeta.Namespace != pgUser.Namespace {
ownerReferences = nil
} else {
ownerReferences = c.ownerReferences()
}
secret := v1.Secret{ secret := v1.Secret{
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: c.credentialSecretName(username), Name: c.credentialSecretName(username),
Namespace: pgUser.Namespace, Namespace: pgUser.Namespace,
Labels: lbls, Labels: lbls,
Annotations: c.annotationsSet(nil), Annotations: c.annotationsSet(nil),
OwnerReferences: ownerReferences,
}, },
Type: v1.SecretTypeOpaque, Type: v1.SecretTypeOpaque,
Data: map[string][]byte{ Data: map[string][]byte{
@ -1983,6 +2003,7 @@ func (c *Cluster) generateService(role PostgresRole, spec *acidv1.PostgresSpec)
Namespace: c.Namespace, Namespace: c.Namespace,
Labels: c.roleLabelsSet(true, role), Labels: c.roleLabelsSet(true, role),
Annotations: c.annotationsSet(c.generateServiceAnnotations(role, spec)), Annotations: c.annotationsSet(c.generateServiceAnnotations(role, spec)),
OwnerReferences: c.ownerReferences(),
}, },
Spec: serviceSpec, Spec: serviceSpec,
} }
@ -2048,9 +2069,11 @@ func (c *Cluster) getCustomServiceAnnotations(role PostgresRole, spec *acidv1.Po
func (c *Cluster) generateEndpoint(role PostgresRole, subsets []v1.EndpointSubset) *v1.Endpoints { func (c *Cluster) generateEndpoint(role PostgresRole, subsets []v1.EndpointSubset) *v1.Endpoints {
endpoints := &v1.Endpoints{ endpoints := &v1.Endpoints{
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: c.endpointName(role), Name: c.serviceName(role),
Namespace: c.Namespace, Namespace: c.Namespace,
Annotations: c.annotationsSet(nil),
Labels: c.roleLabelsSet(true, role), Labels: c.roleLabelsSet(true, role),
OwnerReferences: c.ownerReferences(),
}, },
} }
if len(subsets) > 0 { if len(subsets) > 0 {
@ -2193,7 +2216,7 @@ func (c *Cluster) generateStandbyEnvironment(description *acidv1.StandbyDescript
return result return result
} }
func (c *Cluster) generatePodDisruptionBudget() *policyv1.PodDisruptionBudget { func (c *Cluster) generatePrimaryPodDisruptionBudget() *policyv1.PodDisruptionBudget {
minAvailable := intstr.FromInt(1) minAvailable := intstr.FromInt(1)
pdbEnabled := c.OpConfig.EnablePodDisruptionBudget pdbEnabled := c.OpConfig.EnablePodDisruptionBudget
pdbMasterLabelSelector := c.OpConfig.PDBMasterLabelSelector pdbMasterLabelSelector := c.OpConfig.PDBMasterLabelSelector
@ -2211,10 +2234,40 @@ func (c *Cluster) generatePodDisruptionBudget() *policyv1.PodDisruptionBudget {
return &policyv1.PodDisruptionBudget{ return &policyv1.PodDisruptionBudget{
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: c.podDisruptionBudgetName(), Name: c.PrimaryPodDisruptionBudgetName(),
Namespace: c.Namespace, Namespace: c.Namespace,
Labels: c.labelsSet(true), Labels: c.labelsSet(true),
Annotations: c.annotationsSet(nil), Annotations: c.annotationsSet(nil),
OwnerReferences: c.ownerReferences(),
},
Spec: policyv1.PodDisruptionBudgetSpec{
MinAvailable: &minAvailable,
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
},
}
}
func (c *Cluster) generateCriticalOpPodDisruptionBudget() *policyv1.PodDisruptionBudget {
minAvailable := intstr.FromInt32(c.Spec.NumberOfInstances)
pdbEnabled := c.OpConfig.EnablePodDisruptionBudget
// if PodDisruptionBudget is disabled or if there are no DB pods, set the budget to 0.
if (pdbEnabled != nil && !(*pdbEnabled)) || c.Spec.NumberOfInstances <= 0 {
minAvailable = intstr.FromInt(0)
}
labels := c.labelsSet(false)
labels["critical-operation"] = "true"
return &policyv1.PodDisruptionBudget{
ObjectMeta: metav1.ObjectMeta{
Name: c.criticalOpPodDisruptionBudgetName(),
Namespace: c.Namespace,
Labels: c.labelsSet(true),
Annotations: c.annotationsSet(nil),
OwnerReferences: c.ownerReferences(),
}, },
Spec: policyv1.PodDisruptionBudgetSpec{ Spec: policyv1.PodDisruptionBudgetSpec{
MinAvailable: &minAvailable, MinAvailable: &minAvailable,
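Taken together with the KUBERNETES_BOOTSTRAP_LABELS change earlier in this diff, the idea appears to be: Patroni labels pods with critical-operation=true while a critical operation is running, and this second PodDisruptionBudget selects exactly those pods with minAvailable equal to the full instance count, so the eviction API cannot drain them mid-operation. A rough sketch of the resulting object, assuming the name template and label keys shown in the diff:

package sketch

import (
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// criticalOpPDB blocks voluntary evictions of pods marked as being in the
// middle of a critical operation: with minAvailable equal to the replica
// count, no such pod may be evicted at all.
func criticalOpPDB(clusterName string, instances int32) *policyv1.PodDisruptionBudget {
	minAvailable := intstr.FromInt32(instances)
	return &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{
			Name: "postgres-" + clusterName + "-critical-op-pdb",
		},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					"cluster-name":       clusterName,
					"critical-operation": "true",
				},
			},
		},
	}
}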
@ -2242,6 +2295,8 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1.CronJob, error) {
resourceRequirements *v1.ResourceRequirements resourceRequirements *v1.ResourceRequirements
) )
spec := &c.Spec
// NB: a cron job creates standard batch jobs according to schedule; these batch jobs manage pods and clean-up // NB: a cron job creates standard batch jobs according to schedule; these batch jobs manage pods and clean-up
c.logger.Debug("Generating logical backup pod template") c.logger.Debug("Generating logical backup pod template")
@ -2291,6 +2346,8 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1.CronJob, error) {
annotations := c.generatePodAnnotations(&c.Spec) annotations := c.generatePodAnnotations(&c.Spec)
tolerationsSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration)
// re-use the method that generates DB pod templates // re-use the method that generates DB pod templates
if podTemplate, err = c.generatePodTemplate( if podTemplate, err = c.generatePodTemplate(
c.Namespace, c.Namespace,
@ -2300,7 +2357,7 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1.CronJob, error) {
[]v1.Container{}, []v1.Container{},
[]v1.Container{}, []v1.Container{},
util.False(), util.False(),
&[]v1.Toleration{}, &tolerationsSpec,
nil, nil,
nil, nil,
nil, nil,
@ -2347,6 +2404,7 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1.CronJob, error) {
Namespace: c.Namespace, Namespace: c.Namespace,
Labels: c.labelsSet(true), Labels: c.labelsSet(true),
Annotations: c.annotationsSet(nil), Annotations: c.annotationsSet(nil),
OwnerReferences: c.ownerReferences(),
}, },
Spec: batchv1.CronJobSpec{ Spec: batchv1.CronJobSpec{
Schedule: schedule, Schedule: schedule,
@ -2360,6 +2418,8 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1.CronJob, error) {
func (c *Cluster) generateLogicalBackupPodEnvVars() []v1.EnvVar { func (c *Cluster) generateLogicalBackupPodEnvVars() []v1.EnvVar {
backupProvider := c.OpConfig.LogicalBackup.LogicalBackupProvider
envVars := []v1.EnvVar{ envVars := []v1.EnvVar{
{ {
Name: "SCOPE", Name: "SCOPE",
@ -2378,51 +2438,6 @@ func (c *Cluster) generateLogicalBackupPodEnvVars() []v1.EnvVar {
}, },
}, },
}, },
// Bucket env vars
{
Name: "LOGICAL_BACKUP_PROVIDER",
Value: c.OpConfig.LogicalBackup.LogicalBackupProvider,
},
{
Name: "LOGICAL_BACKUP_S3_BUCKET",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3Bucket,
},
{
Name: "LOGICAL_BACKUP_S3_REGION",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3Region,
},
{
Name: "LOGICAL_BACKUP_S3_ENDPOINT",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3Endpoint,
},
{
Name: "LOGICAL_BACKUP_S3_SSE",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3SSE,
},
{
Name: "LOGICAL_BACKUP_S3_RETENTION_TIME",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3RetentionTime,
},
{
Name: "LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX",
Value: getBucketScopeSuffix(string(c.Postgresql.GetUID())),
},
{
Name: "LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS",
Value: c.OpConfig.LogicalBackup.LogicalBackupGoogleApplicationCredentials,
},
{
Name: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_NAME",
Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountName,
},
{
Name: "LOGICAL_BACKUP_AZURE_STORAGE_CONTAINER",
Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageContainer,
},
{
Name: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_KEY",
Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountKey,
},
// Postgres env vars // Postgres env vars
{ {
Name: "PG_VERSION", Name: "PG_VERSION",
@ -2455,8 +2470,45 @@ func (c *Cluster) generateLogicalBackupPodEnvVars() []v1.EnvVar {
}, },
}, },
}, },
// Bucket env vars
{
Name: "LOGICAL_BACKUP_PROVIDER",
Value: backupProvider,
},
{
Name: "LOGICAL_BACKUP_S3_BUCKET",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3Bucket,
},
{
Name: "LOGICAL_BACKUP_S3_BUCKET_PREFIX",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3BucketPrefix,
},
{
Name: "LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX",
Value: getBucketScopeSuffix(string(c.Postgresql.GetUID())),
},
} }
switch backupProvider {
case "s3":
envVars = appendEnvVars(envVars, []v1.EnvVar{
{
Name: "LOGICAL_BACKUP_S3_REGION",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3Region,
},
{
Name: "LOGICAL_BACKUP_S3_ENDPOINT",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3Endpoint,
},
{
Name: "LOGICAL_BACKUP_S3_SSE",
Value: c.OpConfig.LogicalBackup.LogicalBackupS3SSE,
},
{
Name: "LOGICAL_BACKUP_S3_RETENTION_TIME",
Value: c.getLogicalBackupRetentionTime(),
}}...)
if c.OpConfig.LogicalBackup.LogicalBackupS3AccessKeyID != "" { if c.OpConfig.LogicalBackup.LogicalBackupS3AccessKeyID != "" {
envVars = append(envVars, v1.EnvVar{Name: "AWS_ACCESS_KEY_ID", Value: c.OpConfig.LogicalBackup.LogicalBackupS3AccessKeyID}) envVars = append(envVars, v1.EnvVar{Name: "AWS_ACCESS_KEY_ID", Value: c.OpConfig.LogicalBackup.LogicalBackupS3AccessKeyID})
} }
@ -2465,9 +2517,38 @@ func (c *Cluster) generateLogicalBackupPodEnvVars() []v1.EnvVar {
envVars = append(envVars, v1.EnvVar{Name: "AWS_SECRET_ACCESS_KEY", Value: c.OpConfig.LogicalBackup.LogicalBackupS3SecretAccessKey}) envVars = append(envVars, v1.EnvVar{Name: "AWS_SECRET_ACCESS_KEY", Value: c.OpConfig.LogicalBackup.LogicalBackupS3SecretAccessKey})
} }
case "gcs":
if c.OpConfig.LogicalBackup.LogicalBackupGoogleApplicationCredentials != "" {
envVars = append(envVars, v1.EnvVar{Name: "LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.LogicalBackup.LogicalBackupGoogleApplicationCredentials})
}
case "az":
envVars = appendEnvVars(envVars, []v1.EnvVar{
{
Name: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_NAME",
Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountName,
},
{
Name: "LOGICAL_BACKUP_AZURE_STORAGE_CONTAINER",
Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageContainer,
}}...)
if c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountKey != "" {
envVars = append(envVars, v1.EnvVar{Name: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_KEY", Value: c.OpConfig.LogicalBackup.LogicalBackupAzureStorageAccountKey})
}
}
return envVars return envVars
} }
func (c *Cluster) getLogicalBackupRetentionTime() (retentionTime string) {
if c.Spec.LogicalBackupRetention != "" {
return c.Spec.LogicalBackupRetention
}
return c.OpConfig.LogicalBackup.LogicalBackupS3RetentionTime
}
// getLogicalBackupJobName returns the name; the job itself may not exists // getLogicalBackupJobName returns the name; the job itself may not exists
func (c *Cluster) getLogicalBackupJobName() (jobName string) { func (c *Cluster) getLogicalBackupJobName() (jobName string) {
return trimCronjobName(fmt.Sprintf("%s%s", c.OpConfig.LogicalBackupJobPrefix, c.clusterName().Name)) return trimCronjobName(fmt.Sprintf("%s%s", c.OpConfig.LogicalBackupJobPrefix, c.clusterName().Name))
@ -2480,24 +2561,28 @@ func (c *Cluster) getLogicalBackupJobName() (jobName string) {
// survived, we can't delete an object because it will affect the functioning // survived, we can't delete an object because it will affect the functioning
// cluster). // cluster).
func (c *Cluster) ownerReferences() []metav1.OwnerReference { func (c *Cluster) ownerReferences() []metav1.OwnerReference {
controller := true currentOwnerReferences := c.ObjectMeta.OwnerReferences
if c.OpConfig.EnableOwnerReferences == nil || !*c.OpConfig.EnableOwnerReferences {
if c.Statefulset == nil { return currentOwnerReferences
c.logger.Warning("Cannot get owner reference, no statefulset")
return []metav1.OwnerReference{}
} }
return []metav1.OwnerReference{ for _, ownerRef := range currentOwnerReferences {
{ if ownerRef.UID == c.Postgresql.ObjectMeta.UID {
UID: c.Statefulset.ObjectMeta.UID, return currentOwnerReferences
APIVersion: "apps/v1",
Kind: "StatefulSet",
Name: c.Statefulset.ObjectMeta.Name,
Controller: &controller,
},
} }
} }
controllerReference := metav1.OwnerReference{
UID: c.Postgresql.ObjectMeta.UID,
APIVersion: acidv1.SchemeGroupVersion.Identifier(),
Kind: acidv1.PostgresCRDResourceKind,
Name: c.Postgresql.ObjectMeta.Name,
Controller: util.True(),
}
return append(currentOwnerReferences, controllerReference)
}
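The rewritten ownerReferences() no longer points child objects at the StatefulSet; when owner references are enabled it appends a controller reference to the Postgresql custom resource itself, so Kubernetes garbage collection removes the children when the manifest is deleted, while any pre-existing references are left untouched. A minimal sketch of appending such a reference (the boolean helper is local here, not the operator's util.True):

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withControllerRef appends a controller owner reference to owner unless a
// reference with the same UID is already present.
func withControllerRef(existing []metav1.OwnerReference, owner metav1.ObjectMeta, apiVersion, kind string) []metav1.OwnerReference {
	for _, ref := range existing {
		if ref.UID == owner.UID {
			return existing // already owned by this object
		}
	}
	isController := true
	return append(existing, metav1.OwnerReference{
		APIVersion: apiVersion,
		Kind:       kind,
		Name:       owner.Name,
		UID:        owner.UID,
		Controller: &isController,
	})
}

One constraint worth keeping in mind, which also motivates the cross-namespace secret handling earlier in this diff: an owner reference may not point across namespaces, so secrets created in another namespace are generated without one.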
func ensurePath(file string, defaultDir string, defaultFile string) string { func ensurePath(file string, defaultDir string, defaultFile string) string {
if file == "" { if file == "" {
return path.Join(defaultDir, defaultFile) return path.Join(defaultDir, defaultFile)


@ -72,18 +72,18 @@ func TestGenerateSpiloJSONConfiguration(t *testing.T) {
}{ }{
{ {
subtest: "Patroni default configuration", subtest: "Patroni default configuration",
pgParam: &acidv1.PostgresqlParam{PgVersion: "16"}, pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
patroni: &acidv1.Patroni{}, patroni: &acidv1.Patroni{},
opConfig: &config.Config{ opConfig: &config.Config{
Auth: config.Auth{ Auth: config.Auth{
PamRoleName: "zalandos", PamRoleName: "zalandos",
}, },
}, },
result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{}}}`, result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{}}}`,
}, },
{ {
subtest: "Patroni configured", subtest: "Patroni configured",
pgParam: &acidv1.PostgresqlParam{PgVersion: "16"}, pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
patroni: &acidv1.Patroni{ patroni: &acidv1.Patroni{
InitDB: map[string]string{ InitDB: map[string]string{
"encoding": "UTF8", "encoding": "UTF8",
@ -102,38 +102,38 @@ func TestGenerateSpiloJSONConfiguration(t *testing.T) {
FailsafeMode: util.True(), FailsafeMode: util.True(),
}, },
opConfig: &config.Config{}, opConfig: &config.Config{},
result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin","pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"synchronous_mode":true,"synchronous_mode_strict":true,"synchronous_node_count":1,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}},"failsafe_mode":true}}}`, result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin","pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"]},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"},"data-checksums",{"encoding":"UTF8"},{"locale":"en_US.UTF-8"}],"dcs":{"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"synchronous_mode":true,"synchronous_mode_strict":true,"synchronous_node_count":1,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}},"failsafe_mode":true}}}`,
}, },
{ {
subtest: "Patroni failsafe_mode configured globally", subtest: "Patroni failsafe_mode configured globally",
pgParam: &acidv1.PostgresqlParam{PgVersion: "16"}, pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
patroni: &acidv1.Patroni{}, patroni: &acidv1.Patroni{},
opConfig: &config.Config{ opConfig: &config.Config{
EnablePatroniFailsafeMode: util.True(), EnablePatroniFailsafeMode: util.True(),
}, },
result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`, result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`,
}, },
{ {
subtest: "Patroni failsafe_mode configured globally, disabled for cluster", subtest: "Patroni failsafe_mode configured globally, disabled for cluster",
pgParam: &acidv1.PostgresqlParam{PgVersion: "16"}, pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
patroni: &acidv1.Patroni{ patroni: &acidv1.Patroni{
FailsafeMode: util.False(), FailsafeMode: util.False(),
}, },
opConfig: &config.Config{ opConfig: &config.Config{
EnablePatroniFailsafeMode: util.True(), EnablePatroniFailsafeMode: util.True(),
}, },
result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":false}}}`, result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":false}}}`,
}, },
{ {
subtest: "Patroni failsafe_mode disabled globally, configured for cluster", subtest: "Patroni failsafe_mode disabled globally, configured for cluster",
pgParam: &acidv1.PostgresqlParam{PgVersion: "16"}, pgParam: &acidv1.PostgresqlParam{PgVersion: "17"},
patroni: &acidv1.Patroni{ patroni: &acidv1.Patroni{
FailsafeMode: util.True(), FailsafeMode: util.True(),
}, },
opConfig: &config.Config{ opConfig: &config.Config{
EnablePatroniFailsafeMode: util.False(), EnablePatroniFailsafeMode: util.False(),
}, },
result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/16/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`, result: `{"postgresql":{"bin_dir":"/usr/lib/postgresql/17/bin"},"bootstrap":{"initdb":[{"auth-host":"md5"},{"auth-local":"trust"}],"dcs":{"failsafe_mode":true}}}`,
}, },
} }
for _, tt := range tests { for _, tt := range tests {
@ -164,15 +164,15 @@ func TestExtractPgVersionFromBinPath(t *testing.T) {
}, },
{ {
subTest: "test current bin path against hard coded template", subTest: "test current bin path against hard coded template",
binPath: "/usr/lib/postgresql/16/bin", binPath: "/usr/lib/postgresql/17/bin",
template: pgBinariesLocationTemplate, template: pgBinariesLocationTemplate,
expected: "16", expected: "17",
}, },
{ {
subTest: "test alternative bin path against a matching template", subTest: "test alternative bin path against a matching template",
binPath: "/usr/pgsql-16/bin", binPath: "/usr/pgsql-17/bin",
template: "/usr/pgsql-%v/bin", template: "/usr/pgsql-%v/bin",
expected: "16", expected: "17",
}, },
} }
@ -1451,9 +1451,9 @@ func TestNodeAffinity(t *testing.T) {
nodeAff := &v1.NodeAffinity{ nodeAff := &v1.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{ RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
NodeSelectorTerms: []v1.NodeSelectorTerm{ NodeSelectorTerms: []v1.NodeSelectorTerm{
v1.NodeSelectorTerm{ {
MatchExpressions: []v1.NodeSelectorRequirement{ MatchExpressions: []v1.NodeSelectorRequirement{
v1.NodeSelectorRequirement{ {
Key: "test-label", Key: "test-label",
Operator: v1.NodeSelectorOpIn, Operator: v1.NodeSelectorOpIn,
Values: []string{ Values: []string{
@ -1566,22 +1566,28 @@ func TestPodAffinity(t *testing.T) {
} }
func testDeploymentOwnerReference(cluster *Cluster, deployment *appsv1.Deployment) error { func testDeploymentOwnerReference(cluster *Cluster, deployment *appsv1.Deployment) error {
if len(deployment.ObjectMeta.OwnerReferences) == 0 {
return nil
}
owner := deployment.ObjectMeta.OwnerReferences[0] owner := deployment.ObjectMeta.OwnerReferences[0]
if owner.Name != cluster.Statefulset.ObjectMeta.Name { if owner.Name != cluster.Postgresql.ObjectMeta.Name {
return fmt.Errorf("Ownere reference is incorrect, got %s, expected %s", return fmt.Errorf("Owner reference is incorrect, got %s, expected %s",
owner.Name, cluster.Statefulset.ObjectMeta.Name) owner.Name, cluster.Postgresql.ObjectMeta.Name)
} }
return nil return nil
} }
func testServiceOwnerReference(cluster *Cluster, service *v1.Service, role PostgresRole) error { func testServiceOwnerReference(cluster *Cluster, service *v1.Service, role PostgresRole) error {
if len(service.ObjectMeta.OwnerReferences) == 0 {
return nil
}
owner := service.ObjectMeta.OwnerReferences[0] owner := service.ObjectMeta.OwnerReferences[0]
if owner.Name != cluster.Statefulset.ObjectMeta.Name { if owner.Name != cluster.Postgresql.ObjectMeta.Name {
return fmt.Errorf("Ownere reference is incorrect, got %s, expected %s", return fmt.Errorf("Owner reference is incorrect, got %s, expected %s",
owner.Name, cluster.Statefulset.ObjectMeta.Name) owner.Name, cluster.Postgresql.ObjectMeta.Name)
} }
return nil return nil
@ -1667,7 +1673,7 @@ func TestTLS(t *testing.T) {
TLS: &acidv1.TLSDescription{ TLS: &acidv1.TLSDescription{
SecretName: tlsSecretName, CAFile: "ca.crt"}, SecretName: tlsSecretName, CAFile: "ca.crt"},
AdditionalVolumes: []acidv1.AdditionalVolume{ AdditionalVolumes: []acidv1.AdditionalVolume{
acidv1.AdditionalVolume{ {
Name: tlsSecretName, Name: tlsSecretName,
MountPath: mountPath, MountPath: mountPath,
VolumeSource: v1.VolumeSource{ VolumeSource: v1.VolumeSource{
@ -1889,6 +1895,25 @@ func TestAdditionalVolume(t *testing.T) {
EmptyDir: &v1.EmptyDirVolumeSource{}, EmptyDir: &v1.EmptyDirVolumeSource{},
}, },
}, },
{
Name: "test5",
MountPath: "/test5",
SubPath: "subpath",
TargetContainers: nil, // should mount only to postgres
VolumeSource: v1.VolumeSource{
EmptyDir: &v1.EmptyDirVolumeSource{},
},
},
{
Name: "test6",
MountPath: "/test6",
SubPath: "$(POD_NAME)",
IsSubPathExpr: util.True(),
TargetContainers: nil, // should mount only to postgres
VolumeSource: v1.VolumeSource{
EmptyDir: &v1.EmptyDirVolumeSource{},
},
},
} }
pg := acidv1.Postgresql{ pg := acidv1.Postgresql{
@ -1904,6 +1929,8 @@ func TestAdditionalVolume(t *testing.T) {
}, },
Volume: acidv1.Volume{ Volume: acidv1.Volume{
Size: "1G", Size: "1G",
SubPath: "$(POD_NAME)",
IsSubPathExpr: util.True(),
}, },
AdditionalVolumes: additionalVolumes, AdditionalVolumes: additionalVolumes,
Sidecars: []acidv1.Sidecar{ Sidecars: []acidv1.Sidecar{
@ -1938,16 +1965,22 @@ func TestAdditionalVolume(t *testing.T) {
subTest string subTest string
container string container string
expectedMounts []string expectedMounts []string
expectedSubPaths []string
expectedSubPathExprs []string
}{ }{
{ {
subTest: "checking volume mounts of postgres container", subTest: "checking volume mounts of postgres container",
container: constants.PostgresContainerName, container: constants.PostgresContainerName,
expectedMounts: []string{"pgdata", "test1", "test3", "test4"}, expectedMounts: []string{"pgdata", "test1", "test3", "test4", "test5", "test6"},
expectedSubPaths: []string{"", "", "", "", "subpath", ""},
expectedSubPathExprs: []string{"$(POD_NAME)", "", "", "", "", "$(POD_NAME)"},
}, },
{ {
subTest: "checking volume mounts of sidecar container", subTest: "checking volume mounts of sidecar container",
container: "sidecar", container: "sidecar",
expectedMounts: []string{"pgdata", "test1", "test2"}, expectedMounts: []string{"pgdata", "test1", "test2"},
expectedSubPaths: []string{"", "", ""},
expectedSubPathExprs: []string{"$(POD_NAME)", "", ""},
}, },
} }
@ -1957,14 +1990,29 @@ func TestAdditionalVolume(t *testing.T) {
continue continue
} }
mounts := []string{} mounts := []string{}
subPaths := []string{}
subPathExprs := []string{}
for _, volumeMounts := range container.VolumeMounts { for _, volumeMounts := range container.VolumeMounts {
mounts = append(mounts, volumeMounts.Name) mounts = append(mounts, volumeMounts.Name)
subPaths = append(subPaths, volumeMounts.SubPath)
subPathExprs = append(subPathExprs, volumeMounts.SubPathExpr)
} }
if !util.IsEqualIgnoreOrder(mounts, tt.expectedMounts) { if !util.IsEqualIgnoreOrder(mounts, tt.expectedMounts) {
t.Errorf("%s %s: different volume mounts: got %v, epxected %v", t.Errorf("%s %s: different volume mounts: got %v, expected %v",
t.Name(), tt.subTest, mounts, tt.expectedMounts) t.Name(), tt.subTest, mounts, tt.expectedMounts)
} }
if !util.IsEqualIgnoreOrder(subPaths, tt.expectedSubPaths) {
t.Errorf("%s %s: different volume subPaths: got %v, expected %v",
t.Name(), tt.subTest, subPaths, tt.expectedSubPaths)
}
if !util.IsEqualIgnoreOrder(subPathExprs, tt.expectedSubPathExprs) {
t.Errorf("%s %s: different volume subPathExprs: got %v, expected %v",
t.Name(), tt.subTest, subPathExprs, tt.expectedSubPathExprs)
}
} }
} }
} }
@ -2100,7 +2148,7 @@ func TestSidecars(t *testing.T) {
spec = acidv1.PostgresSpec{ spec = acidv1.PostgresSpec{
PostgresqlParam: acidv1.PostgresqlParam{ PostgresqlParam: acidv1.PostgresqlParam{
PgVersion: "16", PgVersion: "17",
Parameters: map[string]string{ Parameters: map[string]string{
"max_connections": "100", "max_connections": "100",
}, },
@ -2114,17 +2162,17 @@ func TestSidecars(t *testing.T) {
Size: "1G", Size: "1G",
}, },
Sidecars: []acidv1.Sidecar{ Sidecars: []acidv1.Sidecar{
acidv1.Sidecar{ {
Name: "cluster-specific-sidecar", Name: "cluster-specific-sidecar",
}, },
acidv1.Sidecar{ {
Name: "cluster-specific-sidecar-with-resources", Name: "cluster-specific-sidecar-with-resources",
Resources: &acidv1.Resources{ Resources: &acidv1.Resources{
ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("210m"), Memory: k8sutil.StringToPointer("0.8Gi")}, ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("210m"), Memory: k8sutil.StringToPointer("0.8Gi")},
ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("510m"), Memory: k8sutil.StringToPointer("1.4Gi")}, ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("510m"), Memory: k8sutil.StringToPointer("1.4Gi")},
}, },
}, },
acidv1.Sidecar{ {
Name: "replace-sidecar", Name: "replace-sidecar",
DockerImage: "override-image", DockerImage: "override-image",
}, },
@ -2152,11 +2200,11 @@ func TestSidecars(t *testing.T) {
"deprecated-global-sidecar": "image:123", "deprecated-global-sidecar": "image:123",
}, },
SidecarContainers: []v1.Container{ SidecarContainers: []v1.Container{
v1.Container{ {
Name: "global-sidecar", Name: "global-sidecar",
}, },
// will be replaced by a cluster specific sidecar with the same name // will be replaced by a cluster specific sidecar with the same name
v1.Container{ {
Name: "replace-sidecar", Name: "replace-sidecar",
Image: "replaced-image", Image: "replaced-image",
}, },
@ -2211,7 +2259,7 @@ func TestSidecars(t *testing.T) {
}, },
} }
mounts := []v1.VolumeMount{ mounts := []v1.VolumeMount{
v1.VolumeMount{ {
Name: "pgdata", Name: "pgdata",
MountPath: "/home/postgres/pgdata", MountPath: "/home/postgres/pgdata",
}, },
@ -2278,13 +2326,81 @@ func TestSidecars(t *testing.T) {
} }
func TestGeneratePodDisruptionBudget(t *testing.T) { func TestGeneratePodDisruptionBudget(t *testing.T) {
testName := "Test PodDisruptionBudget spec generation"
hasName := func(pdbName string) func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
return func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
if pdbName != podDisruptionBudget.ObjectMeta.Name {
return fmt.Errorf("PodDisruptionBudget name is incorrect, got %s, expected %s",
podDisruptionBudget.ObjectMeta.Name, pdbName)
}
return nil
}
}
hasMinAvailable := func(expectedMinAvailable int) func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
return func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
actual := podDisruptionBudget.Spec.MinAvailable.IntVal
if actual != int32(expectedMinAvailable) {
return fmt.Errorf("PodDisruptionBudget MinAvailable is incorrect, got %d, expected %d",
actual, expectedMinAvailable)
}
return nil
}
}
testLabelsAndSelectors := func(isPrimary bool) func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
return func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
masterLabelSelectorDisabled := cluster.OpConfig.PDBMasterLabelSelector != nil && !*cluster.OpConfig.PDBMasterLabelSelector
if podDisruptionBudget.ObjectMeta.Namespace != "myapp" {
return fmt.Errorf("Object Namespace incorrect.")
}
expectedLabels := map[string]string{"team": "myapp", "cluster-name": "myapp-database"}
if !reflect.DeepEqual(podDisruptionBudget.Labels, expectedLabels) {
return fmt.Errorf("Labels incorrect, got %#v, expected %#v", podDisruptionBudget.Labels, expectedLabels)
}
if !masterLabelSelectorDisabled {
if isPrimary {
expectedLabels := &metav1.LabelSelector{
MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"}}
if !reflect.DeepEqual(podDisruptionBudget.Spec.Selector, expectedLabels) {
return fmt.Errorf("MatchLabels incorrect, got %#v, expected %#v", podDisruptionBudget.Spec.Selector, expectedLabels)
}
} else {
expectedLabels := &metav1.LabelSelector{
MatchLabels: map[string]string{"cluster-name": "myapp-database", "critical-operation": "true"}}
if !reflect.DeepEqual(podDisruptionBudget.Spec.Selector, expectedLabels) {
return fmt.Errorf("MatchLabels incorrect, got %#v, expected %#v", podDisruptionBudget.Spec.Selector, expectedLabels)
}
}
}
return nil
}
}
testPodDisruptionBudgetOwnerReference := func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error {
if len(podDisruptionBudget.ObjectMeta.OwnerReferences) == 0 {
return nil
}
owner := podDisruptionBudget.ObjectMeta.OwnerReferences[0]
if owner.Name != cluster.Postgresql.ObjectMeta.Name {
return fmt.Errorf("Owner reference is incorrect, got %s, expected %s",
owner.Name, cluster.Postgresql.ObjectMeta.Name)
}
return nil
}
tests := []struct { tests := []struct {
c *Cluster scenario string
out policyv1.PodDisruptionBudget spec *Cluster
check []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error
}{ }{
// With multiple instances.
{ {
New( scenario: "With multiple instances",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}}, Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
k8sutil.KubernetesClient{}, k8sutil.KubernetesClient{},
acidv1.Postgresql{ acidv1.Postgresql{
@ -2292,23 +2408,16 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}}, Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger, logger,
eventRecorder), eventRecorder),
policyv1.PodDisruptionBudget{ check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
ObjectMeta: metav1.ObjectMeta{ testPodDisruptionBudgetOwnerReference,
Name: "postgres-myapp-database-pdb", hasName("postgres-myapp-database-pdb"),
Namespace: "myapp", hasMinAvailable(1),
Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"}, testLabelsAndSelectors(true),
},
Spec: policyv1.PodDisruptionBudgetSpec{
MinAvailable: util.ToIntStr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
}, },
}, },
},
},
// With zero instances.
{ {
New( scenario: "With zero instances",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}}, Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
k8sutil.KubernetesClient{}, k8sutil.KubernetesClient{},
acidv1.Postgresql{ acidv1.Postgresql{
@ -2316,23 +2425,16 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 0}}, Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 0}},
logger, logger,
eventRecorder), eventRecorder),
policyv1.PodDisruptionBudget{ check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
ObjectMeta: metav1.ObjectMeta{ testPodDisruptionBudgetOwnerReference,
Name: "postgres-myapp-database-pdb", hasName("postgres-myapp-database-pdb"),
Namespace: "myapp", hasMinAvailable(0),
Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"}, testLabelsAndSelectors(true),
},
Spec: policyv1.PodDisruptionBudgetSpec{
MinAvailable: util.ToIntStr(0),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
}, },
}, },
},
},
// With PodDisruptionBudget disabled.
{ {
New( scenario: "With PodDisruptionBudget disabled",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.False()}}, Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.False()}},
k8sutil.KubernetesClient{}, k8sutil.KubernetesClient{},
acidv1.Postgresql{ acidv1.Postgresql{
@ -2340,23 +2442,16 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}}, Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger, logger,
eventRecorder), eventRecorder),
policyv1.PodDisruptionBudget{ check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
ObjectMeta: metav1.ObjectMeta{ testPodDisruptionBudgetOwnerReference,
Name: "postgres-myapp-database-pdb", hasName("postgres-myapp-database-pdb"),
Namespace: "myapp", hasMinAvailable(0),
Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"}, testLabelsAndSelectors(true),
},
Spec: policyv1.PodDisruptionBudgetSpec{
MinAvailable: util.ToIntStr(0),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
}, },
}, },
},
},
// With non-default PDBNameFormat and PodDisruptionBudget explicitly enabled.
{ {
New( scenario: "With non-default PDBNameFormat and PodDisruptionBudget explicitly enabled",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-databass-budget", EnablePodDisruptionBudget: util.True()}}, Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-databass-budget", EnablePodDisruptionBudget: util.True()}},
k8sutil.KubernetesClient{}, k8sutil.KubernetesClient{},
acidv1.Postgresql{ acidv1.Postgresql{
@ -2364,50 +2459,143 @@ func TestGeneratePodDisruptionBudget(t *testing.T) {
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}}, Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger, logger,
eventRecorder), eventRecorder),
policyv1.PodDisruptionBudget{ check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
ObjectMeta: metav1.ObjectMeta{ testPodDisruptionBudgetOwnerReference,
Name: "postgres-myapp-database-databass-budget", hasName("postgres-myapp-database-databass-budget"),
Namespace: "myapp", hasMinAvailable(1),
Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"}, testLabelsAndSelectors(true),
},
Spec: policyv1.PodDisruptionBudgetSpec{
MinAvailable: util.ToIntStr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
}, },
}, },
},
},
// With PDBMasterLabelSelector disabled.
{ {
New( scenario: "With PDBMasterLabelSelector disabled",
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", PDBMasterLabelSelector: util.False()}}, spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.True(), PDBMasterLabelSelector: util.False()}},
k8sutil.KubernetesClient{}, k8sutil.KubernetesClient{},
acidv1.Postgresql{ acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"}, ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}}, Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger, logger,
eventRecorder), eventRecorder),
policyv1.PodDisruptionBudget{ check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
ObjectMeta: metav1.ObjectMeta{ testPodDisruptionBudgetOwnerReference,
Name: "postgres-myapp-database-pdb", hasName("postgres-myapp-database-pdb"),
Namespace: "myapp", hasMinAvailable(1),
Labels: map[string]string{"team": "myapp", "cluster-name": "myapp-database"}, testLabelsAndSelectors(true),
},
Spec: policyv1.PodDisruptionBudgetSpec{
MinAvailable: util.ToIntStr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"cluster-name": "myapp-database"},
}, },
}, },
{
scenario: "With OwnerReference enabled",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role", EnableOwnerReferences: util.True()}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.True()}},
k8sutil.KubernetesClient{},
acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger,
eventRecorder),
check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
testPodDisruptionBudgetOwnerReference,
hasName("postgres-myapp-database-pdb"),
hasMinAvailable(1),
testLabelsAndSelectors(true),
}, },
}, },
} }
for _, tt := range tests { for _, tt := range tests {
result := tt.c.generatePodDisruptionBudget() result := tt.spec.generatePrimaryPodDisruptionBudget()
if !reflect.DeepEqual(*result, tt.out) { for _, check := range tt.check {
t.Errorf("Expected PodDisruptionBudget: %#v, got %#v", tt.out, *result) err := check(tt.spec, result)
if err != nil {
t.Errorf("%s [%s]: PodDisruptionBudget spec is incorrect, %+v",
testName, tt.scenario, err)
}
}
}
testCriticalOp := []struct {
scenario string
spec *Cluster
check []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error
}{
{
scenario: "With multiple instances",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
k8sutil.KubernetesClient{},
acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger,
eventRecorder),
check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
testPodDisruptionBudgetOwnerReference,
hasName("postgres-myapp-database-critical-op-pdb"),
hasMinAvailable(3),
testLabelsAndSelectors(false),
},
},
{
scenario: "With zero instances",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
k8sutil.KubernetesClient{},
acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 0}},
logger,
eventRecorder),
check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
testPodDisruptionBudgetOwnerReference,
hasName("postgres-myapp-database-critical-op-pdb"),
hasMinAvailable(0),
testLabelsAndSelectors(false),
},
},
{
scenario: "With PodDisruptionBudget disabled",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.False()}},
k8sutil.KubernetesClient{},
acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger,
eventRecorder),
check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
testPodDisruptionBudgetOwnerReference,
hasName("postgres-myapp-database-critical-op-pdb"),
hasMinAvailable(0),
testLabelsAndSelectors(false),
},
},
{
scenario: "With OwnerReference enabled",
spec: New(
Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role", EnableOwnerReferences: util.True()}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: util.True()}},
k8sutil.KubernetesClient{},
acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
Spec: acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
logger,
eventRecorder),
check: []func(cluster *Cluster, podDisruptionBudget *policyv1.PodDisruptionBudget) error{
testPodDisruptionBudgetOwnerReference,
hasName("postgres-myapp-database-critical-op-pdb"),
hasMinAvailable(3),
testLabelsAndSelectors(false),
},
},
}
for _, tt := range testCriticalOp {
result := tt.spec.generateCriticalOpPodDisruptionBudget()
for _, check := range tt.check {
err := check(tt.spec, result)
if err != nil {
t.Errorf("%s [%s]: PodDisruptionBudget spec is incorrect, %+v",
testName, tt.scenario, err)
}
} }
} }
} }
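
The reworked test above swaps a full reflect.DeepEqual comparison against hand-built policyv1.PodDisruptionBudget objects for small, composable check functions (hasName, hasMinAvailable, testLabelsAndSelectors) that each scenario lists explicitly. A minimal sketch of that pattern, using a toy struct instead of the real Kubernetes types, could look like this:

package main

import "fmt"

type budget struct {
	Name         string
	MinAvailable int
}

// check is a reusable assertion over a generated budget.
type check func(b budget) error

func hasName(want string) check {
	return func(b budget) error {
		if b.Name != want {
			return fmt.Errorf("name is %q, expected %q", b.Name, want)
		}
		return nil
	}
}

func hasMinAvailable(want int) check {
	return func(b budget) error {
		if b.MinAvailable != want {
			return fmt.Errorf("minAvailable is %d, expected %d", b.MinAvailable, want)
		}
		return nil
	}
}

func main() {
	b := budget{Name: "postgres-myapp-database-pdb", MinAvailable: 1}
	for _, c := range []check{hasName("postgres-myapp-database-pdb"), hasMinAvailable(1)} {
		if err := c(b); err != nil {
			fmt.Println("check failed:", err)
		}
	}
}

Each table entry then only names the checks it cares about, which keeps the scenarios short now that a second budget (the -critical-op-pdb one, with minAvailable equal to the instance count) is generated alongside the primary PDB.
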
@ -2426,17 +2614,17 @@ func TestGenerateService(t *testing.T) {
Size: "1G", Size: "1G",
}, },
Sidecars: []acidv1.Sidecar{ Sidecars: []acidv1.Sidecar{
acidv1.Sidecar{ {
Name: "cluster-specific-sidecar", Name: "cluster-specific-sidecar",
}, },
acidv1.Sidecar{ {
Name: "cluster-specific-sidecar-with-resources", Name: "cluster-specific-sidecar-with-resources",
Resources: &acidv1.Resources{ Resources: &acidv1.Resources{
ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("210m"), Memory: k8sutil.StringToPointer("0.8Gi")}, ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("210m"), Memory: k8sutil.StringToPointer("0.8Gi")},
ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("510m"), Memory: k8sutil.StringToPointer("1.4Gi")}, ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("510m"), Memory: k8sutil.StringToPointer("1.4Gi")},
}, },
}, },
acidv1.Sidecar{ {
Name: "replace-sidecar", Name: "replace-sidecar",
DockerImage: "override-image", DockerImage: "override-image",
}, },
@ -2465,11 +2653,11 @@ func TestGenerateService(t *testing.T) {
"deprecated-global-sidecar": "image:123", "deprecated-global-sidecar": "image:123",
}, },
SidecarContainers: []v1.Container{ SidecarContainers: []v1.Container{
v1.Container{ {
Name: "global-sidecar", Name: "global-sidecar",
}, },
// will be replaced by a cluster specific sidecar with the same name // will be replaced by a cluster specific sidecar with the same name
v1.Container{ {
Name: "replace-sidecar", Name: "replace-sidecar",
Image: "replaced-image", Image: "replaced-image",
}, },
@ -2564,27 +2752,27 @@ func newLBFakeClient() (k8sutil.KubernetesClient, *fake.Clientset) {
func getServices(serviceType v1.ServiceType, sourceRanges []string, extTrafficPolicy, clusterName string) []v1.ServiceSpec { func getServices(serviceType v1.ServiceType, sourceRanges []string, extTrafficPolicy, clusterName string) []v1.ServiceSpec {
return []v1.ServiceSpec{ return []v1.ServiceSpec{
v1.ServiceSpec{ {
ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy), ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
LoadBalancerSourceRanges: sourceRanges, LoadBalancerSourceRanges: sourceRanges,
Ports: []v1.ServicePort{{Name: "postgresql", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}}, Ports: []v1.ServicePort{{Name: "postgresql", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},
Type: serviceType, Type: serviceType,
}, },
v1.ServiceSpec{ {
ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy), ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
LoadBalancerSourceRanges: sourceRanges, LoadBalancerSourceRanges: sourceRanges,
Ports: []v1.ServicePort{{Name: clusterName + "-pooler", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}}, Ports: []v1.ServicePort{{Name: clusterName + "-pooler", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},
Selector: map[string]string{"connection-pooler": clusterName + "-pooler"}, Selector: map[string]string{"connection-pooler": clusterName + "-pooler"},
Type: serviceType, Type: serviceType,
}, },
v1.ServiceSpec{ {
ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy), ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
LoadBalancerSourceRanges: sourceRanges, LoadBalancerSourceRanges: sourceRanges,
Ports: []v1.ServicePort{{Name: "postgresql", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}}, Ports: []v1.ServicePort{{Name: "postgresql", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},
Selector: map[string]string{"spilo-role": "replica", "application": "spilo", "cluster-name": clusterName}, Selector: map[string]string{"spilo-role": "replica", "application": "spilo", "cluster-name": clusterName},
Type: serviceType, Type: serviceType,
}, },
v1.ServiceSpec{ {
ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy), ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyType(extTrafficPolicy),
LoadBalancerSourceRanges: sourceRanges, LoadBalancerSourceRanges: sourceRanges,
Ports: []v1.ServicePort{{Name: clusterName + "-pooler-repl", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}}, Ports: []v1.ServicePort{{Name: clusterName + "-pooler-repl", Port: 5432, TargetPort: intstr.IntOrString{IntVal: 5432}}},
@ -2804,7 +2992,7 @@ func TestGenerateResourceRequirements(t *testing.T) {
}, },
Spec: acidv1.PostgresSpec{ Spec: acidv1.PostgresSpec{
Sidecars: []acidv1.Sidecar{ Sidecars: []acidv1.Sidecar{
acidv1.Sidecar{ {
Name: sidecarName, Name: sidecarName,
}, },
}, },
@ -2903,6 +3091,44 @@ func TestGenerateResourceRequirements(t *testing.T) {
ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("100m"), Memory: k8sutil.StringToPointer("100Mi")}, ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("100m"), Memory: k8sutil.StringToPointer("100Mi")},
}, },
}, },
{
subTest: "test generation of resources when min limits are all set to zero",
config: config.Config{
Resources: config.Resources{
ClusterLabels: map[string]string{"application": "spilo"},
ClusterNameLabel: clusterNameLabel,
DefaultCPURequest: "0",
DefaultCPULimit: "0",
MaxCPURequest: "0",
MinCPULimit: "0",
DefaultMemoryRequest: "0",
DefaultMemoryLimit: "0",
MaxMemoryRequest: "0",
MinMemoryLimit: "0",
PodRoleLabel: "spilo-role",
},
PodManagementPolicy: "ordered_ready",
SetMemoryRequestToLimit: false,
},
pgSpec: acidv1.Postgresql{
ObjectMeta: metav1.ObjectMeta{
Name: clusterName,
Namespace: namespace,
},
Spec: acidv1.PostgresSpec{
Resources: &acidv1.Resources{
ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("5m"), Memory: k8sutil.StringToPointer("5Mi")},
},
TeamID: "acid",
Volume: acidv1.Volume{
Size: "1G",
},
},
},
expectedResources: acidv1.Resources{
ResourceLimits: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("5m"), Memory: k8sutil.StringToPointer("5Mi")},
},
},
{ {
subTest: "test matchLimitsWithRequestsIfSmaller", subTest: "test matchLimitsWithRequestsIfSmaller",
config: config.Config{ config: config.Config{
@ -3005,7 +3231,7 @@ func TestGenerateResourceRequirements(t *testing.T) {
}, },
Spec: acidv1.PostgresSpec{ Spec: acidv1.PostgresSpec{
Sidecars: []acidv1.Sidecar{ Sidecars: []acidv1.Sidecar{
acidv1.Sidecar{ {
Name: sidecarName, Name: sidecarName,
Resources: &acidv1.Resources{ Resources: &acidv1.Resources{
ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("10m"), Memory: k8sutil.StringToPointer("10Mi")}, ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("10m"), Memory: k8sutil.StringToPointer("10Mi")},
@ -3094,7 +3320,7 @@ func TestGenerateResourceRequirements(t *testing.T) {
}, },
Spec: acidv1.PostgresSpec{ Spec: acidv1.PostgresSpec{
Sidecars: []acidv1.Sidecar{ Sidecars: []acidv1.Sidecar{
acidv1.Sidecar{ {
Name: sidecarName, Name: sidecarName,
Resources: &acidv1.Resources{ Resources: &acidv1.Resources{
ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("10m"), Memory: k8sutil.StringToPointer("10Mi")}, ResourceRequests: acidv1.ResourceDescription{CPU: k8sutil.StringToPointer("10m"), Memory: k8sutil.StringToPointer("10Mi")},
@ -3499,6 +3725,11 @@ func TestGenerateLogicalBackupJob(t *testing.T) {
cluster.Spec.LogicalBackupSchedule = tt.specSchedule cluster.Spec.LogicalBackupSchedule = tt.specSchedule
cronJob, err := cluster.generateLogicalBackupJob() cronJob, err := cluster.generateLogicalBackupJob()
assert.NoError(t, err) assert.NoError(t, err)
if !reflect.DeepEqual(cronJob.ObjectMeta.OwnerReferences, cluster.ownerReferences()) {
t.Errorf("%s - %s: expected owner references %#v, got %#v", t.Name(), tt.subTest, cluster.ownerReferences(), cronJob.ObjectMeta.OwnerReferences)
}
if cronJob.Spec.Schedule != tt.expectedSchedule { if cronJob.Spec.Schedule != tt.expectedSchedule {
t.Errorf("%s - %s: expected schedule %s, got %s", t.Name(), tt.subTest, tt.expectedSchedule, cronJob.Spec.Schedule) t.Errorf("%s - %s: expected schedule %s, got %s", t.Name(), tt.subTest, tt.expectedSchedule, cronJob.Spec.Schedule)
} }
@ -3524,6 +3755,191 @@ func TestGenerateLogicalBackupJob(t *testing.T) {
} }
} }
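
The new assertion above verifies that the generated logical backup CronJob carries the cluster's owner references. As a rough, hypothetical illustration (the APIVersion, Kind and UID values here are assumptions, not copied from the operator's output), an owner reference tying a child object back to its postgresql resource is built roughly like this:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	controller := true
	// Illustrative values only; the operator derives these from the cluster object.
	ref := metav1.OwnerReference{
		APIVersion: "acid.zalan.do/v1",
		Kind:       "postgresql",
		Name:       "acid-test-cluster",
		UID:        types.UID("efd12e58-5786-11e8-b5a7-06148230260c"),
		Controller: &controller,
	}
	fmt.Printf("%+v\n", ref)
}
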
func TestGenerateLogicalBackupPodEnvVars(t *testing.T) {
var (
dummyUUID = "efd12e58-5786-11e8-b5a7-06148230260c"
dummyBucket = "dummy-backup-location"
)
expectedLogicalBackupS3Bucket := []ExpectedValue{
{
envIndex: 9,
envVarConstant: "LOGICAL_BACKUP_PROVIDER",
envVarValue: "s3",
},
{
envIndex: 10,
envVarConstant: "LOGICAL_BACKUP_S3_BUCKET",
envVarValue: dummyBucket,
},
{
envIndex: 11,
envVarConstant: "LOGICAL_BACKUP_S3_BUCKET_PREFIX",
envVarValue: "spilo",
},
{
envIndex: 12,
envVarConstant: "LOGICAL_BACKUP_S3_BUCKET_SCOPE_SUFFIX",
envVarValue: "/" + dummyUUID,
},
{
envIndex: 13,
envVarConstant: "LOGICAL_BACKUP_S3_REGION",
envVarValue: "eu-central-1",
},
{
envIndex: 14,
envVarConstant: "LOGICAL_BACKUP_S3_ENDPOINT",
envVarValue: "",
},
{
envIndex: 15,
envVarConstant: "LOGICAL_BACKUP_S3_SSE",
envVarValue: "",
},
{
envIndex: 16,
envVarConstant: "LOGICAL_BACKUP_S3_RETENTION_TIME",
envVarValue: "1 month",
},
}
expectedLogicalBackupGCPCreds := []ExpectedValue{
{
envIndex: 9,
envVarConstant: "LOGICAL_BACKUP_PROVIDER",
envVarValue: "gcs",
},
{
envIndex: 13,
envVarConstant: "LOGICAL_BACKUP_GOOGLE_APPLICATION_CREDENTIALS",
envVarValue: "some-path-to-credentials",
},
}
expectedLogicalBackupAzureStorage := []ExpectedValue{
{
envIndex: 9,
envVarConstant: "LOGICAL_BACKUP_PROVIDER",
envVarValue: "az",
},
{
envIndex: 13,
envVarConstant: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_NAME",
envVarValue: "some-azure-storage-account-name",
},
{
envIndex: 14,
envVarConstant: "LOGICAL_BACKUP_AZURE_STORAGE_CONTAINER",
envVarValue: "some-azure-storage-container",
},
{
envIndex: 15,
envVarConstant: "LOGICAL_BACKUP_AZURE_STORAGE_ACCOUNT_KEY",
envVarValue: "some-azure-storage-account-key",
},
}
expectedLogicalBackupRetentionTime := []ExpectedValue{
{
envIndex: 16,
envVarConstant: "LOGICAL_BACKUP_S3_RETENTION_TIME",
envVarValue: "3 months",
},
}
tests := []struct {
subTest string
opConfig config.Config
expectedValues []ExpectedValue
pgsql acidv1.Postgresql
}{
{
subTest: "logical backup with provider: s3",
opConfig: config.Config{
LogicalBackup: config.LogicalBackup{
LogicalBackupProvider: "s3",
LogicalBackupS3Bucket: dummyBucket,
LogicalBackupS3BucketPrefix: "spilo",
LogicalBackupS3Region: "eu-central-1",
LogicalBackupS3RetentionTime: "1 month",
},
},
expectedValues: expectedLogicalBackupS3Bucket,
},
{
subTest: "logical backup with provider: gcs",
opConfig: config.Config{
LogicalBackup: config.LogicalBackup{
LogicalBackupProvider: "gcs",
LogicalBackupS3Bucket: dummyBucket,
LogicalBackupGoogleApplicationCredentials: "some-path-to-credentials",
},
},
expectedValues: expectedLogicalBackupGCPCreds,
},
{
subTest: "logical backup with provider: az",
opConfig: config.Config{
LogicalBackup: config.LogicalBackup{
LogicalBackupProvider: "az",
LogicalBackupS3Bucket: dummyBucket,
LogicalBackupAzureStorageAccountName: "some-azure-storage-account-name",
LogicalBackupAzureStorageContainer: "some-azure-storage-container",
LogicalBackupAzureStorageAccountKey: "some-azure-storage-account-key",
},
},
expectedValues: expectedLogicalBackupAzureStorage,
},
{
subTest: "will override retention time parameter",
opConfig: config.Config{
LogicalBackup: config.LogicalBackup{
LogicalBackupProvider: "s3",
LogicalBackupS3RetentionTime: "1 month",
},
},
expectedValues: expectedLogicalBackupRetentionTime,
pgsql: acidv1.Postgresql{
Spec: acidv1.PostgresSpec{
LogicalBackupRetention: "3 months",
},
},
},
}
for _, tt := range tests {
c := newMockCluster(tt.opConfig)
pgsql := tt.pgsql
c.Postgresql = pgsql
c.UID = types.UID(dummyUUID)
actualEnvs := c.generateLogicalBackupPodEnvVars()
for _, ev := range tt.expectedValues {
env := actualEnvs[ev.envIndex]
if env.Name != ev.envVarConstant {
t.Errorf("%s %s: expected env name %s, have %s instead",
t.Name(), tt.subTest, ev.envVarConstant, env.Name)
}
if ev.envVarValueRef != nil {
if !reflect.DeepEqual(env.ValueFrom, ev.envVarValueRef) {
t.Errorf("%s %s: expected env value reference %#v, have %#v instead",
t.Name(), tt.subTest, ev.envVarValueRef, env.ValueFrom)
}
continue
}
if env.Value != ev.envVarValue {
t.Errorf("%s %s: expected env value %s, have %s instead",
t.Name(), tt.subTest, ev.envVarValue, env.Value)
}
}
}
}
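
The assertions above pin every variable to a fixed position (envIndex) in the slice returned by generateLogicalBackupPodEnvVars, so they are sensitive to reordering. Purely as an illustrative alternative (not how the test is written), a name-based lookup avoids that coupling:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// envByName returns the first env var with the given name, if any.
func envByName(envs []v1.EnvVar, name string) (v1.EnvVar, bool) {
	for _, e := range envs {
		if e.Name == name {
			return e, true
		}
	}
	return v1.EnvVar{}, false
}

func main() {
	envs := []v1.EnvVar{
		{Name: "LOGICAL_BACKUP_PROVIDER", Value: "s3"},
		{Name: "LOGICAL_BACKUP_S3_BUCKET", Value: "dummy-backup-location"},
	}
	if e, ok := envByName(envs, "LOGICAL_BACKUP_S3_BUCKET"); ok {
		fmt.Println(e.Value)
	}
}
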
func TestGenerateCapabilities(t *testing.T) { func TestGenerateCapabilities(t *testing.T) {
tests := []struct { tests := []struct {
subTest string subTest string


@ -1,24 +1,33 @@
package cluster package cluster
import ( import (
"context"
"encoding/json"
"fmt" "fmt"
"strings" "strings"
"github.com/Masterminds/semver"
"github.com/zalando/postgres-operator/pkg/spec" "github.com/zalando/postgres-operator/pkg/spec"
"github.com/zalando/postgres-operator/pkg/util" "github.com/zalando/postgres-operator/pkg/util"
v1 "k8s.io/api/core/v1" v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
) )
// VersionMap Map of version numbers // VersionMap Map of version numbers
var VersionMap = map[string]int{ var VersionMap = map[string]int{
"11": 110000,
"12": 120000,
"13": 130000, "13": 130000,
"14": 140000, "14": 140000,
"15": 150000, "15": 150000,
"16": 160000, "16": 160000,
"17": 170000,
} }
const (
majorVersionUpgradeSuccessAnnotation = "last-major-upgrade-success"
majorVersionUpgradeFailureAnnotation = "last-major-upgrade-failure"
)
// IsBiggerPostgresVersion Compare two Postgres version numbers // IsBiggerPostgresVersion Compare two Postgres version numbers
func IsBiggerPostgresVersion(old string, new string) bool { func IsBiggerPostgresVersion(old string, new string) bool {
oldN := VersionMap[old] oldN := VersionMap[old]
@ -35,7 +44,7 @@ func (c *Cluster) GetDesiredMajorVersionAsInt() int {
func (c *Cluster) GetDesiredMajorVersion() string { func (c *Cluster) GetDesiredMajorVersion() string {
if c.Config.OpConfig.MajorVersionUpgradeMode == "full" { if c.Config.OpConfig.MajorVersionUpgradeMode == "full" {
// e.g. current is 12, minimal is 12 allowing 12 to 16 clusters, everything below is upgraded // e.g. current is 13, minimal is 13 allowing 13 to 17 clusters, everything below is upgraded
if IsBiggerPostgresVersion(c.Spec.PgVersion, c.Config.OpConfig.MinimalMajorVersion) { if IsBiggerPostgresVersion(c.Spec.PgVersion, c.Config.OpConfig.MinimalMajorVersion) {
c.logger.Infof("overwriting configured major version %s to %s", c.Spec.PgVersion, c.Config.OpConfig.TargetMajorVersion) c.logger.Infof("overwriting configured major version %s to %s", c.Spec.PgVersion, c.Config.OpConfig.TargetMajorVersion)
return c.Config.OpConfig.TargetMajorVersion return c.Config.OpConfig.TargetMajorVersion
@ -55,6 +64,63 @@ func (c *Cluster) isUpgradeAllowedForTeam(owningTeam string) bool {
return util.SliceContains(allowedTeams, owningTeam) return util.SliceContains(allowedTeams, owningTeam)
} }
func (c *Cluster) annotatePostgresResource(isSuccess bool) error {
annotations := make(map[string]string)
currentTime := metav1.Now().Format("2006-01-02T15:04:05Z")
if isSuccess {
annotations[majorVersionUpgradeSuccessAnnotation] = currentTime
} else {
annotations[majorVersionUpgradeFailureAnnotation] = currentTime
}
patchData, err := metaAnnotationsPatch(annotations)
if err != nil {
c.logger.Errorf("could not form patch for %s postgresql resource: %v", c.Name, err)
return err
}
_, err = c.KubeClient.Postgresqls(c.Namespace).Patch(context.Background(), c.Name, types.MergePatchType, patchData, metav1.PatchOptions{})
if err != nil {
c.logger.Errorf("failed to patch annotations to postgresql resource: %v", err)
return err
}
return nil
}
func (c *Cluster) removeFailuresAnnotation() error {
annotationToRemove := []map[string]string{
{
"op": "remove",
"path": fmt.Sprintf("/metadata/annotations/%s", majorVersionUpgradeFailureAnnotation),
},
}
removePatch, err := json.Marshal(annotationToRemove)
if err != nil {
c.logger.Errorf("could not form removal patch for %s postgresql resource: %v", c.Name, err)
return err
}
_, err = c.KubeClient.Postgresqls(c.Namespace).Patch(context.Background(), c.Name, types.JSONPatchType, removePatch, metav1.PatchOptions{})
if err != nil {
c.logger.Errorf("failed to remove annotations from postgresql resource: %v", err)
return err
}
return nil
}
func (c *Cluster) criticalOperationLabel(pods []v1.Pod, value *string) error {
metadataReq := map[string]map[string]map[string]*string{"metadata": {"labels": {"critical-operation": value}}}
patchReq, err := json.Marshal(metadataReq)
if err != nil {
return fmt.Errorf("could not marshal ObjectMeta: %v", err)
}
for _, pod := range pods {
_, err = c.KubeClient.Pods(c.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patchReq, metav1.PatchOptions{})
if err != nil {
return err
}
}
return nil
}
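
annotatePostgresResource above writes a timestamp annotation with a merge patch, while removeFailuresAnnotation drops it again with a JSON patch remove operation. A self-contained sketch of the two payloads, built with the standard library only (the annotation key mirrors majorVersionUpgradeFailureAnnotation):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Merge patch: set metadata.annotations["last-major-upgrade-failure"].
	merge := map[string]map[string]map[string]string{
		"metadata": {"annotations": {"last-major-upgrade-failure": "2006-01-02T15:04:05Z"}},
	}
	mergeBytes, _ := json.Marshal(merge)
	fmt.Println(string(mergeBytes))

	// JSON patch: remove the same annotation again.
	remove := []map[string]string{
		{"op": "remove", "path": "/metadata/annotations/last-major-upgrade-failure"},
	}
	removeBytes, _ := json.Marshal(remove)
	fmt.Println(string(removeBytes))
}
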
/* /*
Execute upgrade when mode is set to manual or full or when the owning team is allowed for upgrade (and mode is "off"). Execute upgrade when mode is set to manual or full or when the owning team is allowed for upgrade (and mode is "off").
@ -70,6 +136,10 @@ func (c *Cluster) majorVersionUpgrade() error {
desiredVersion := c.GetDesiredMajorVersionAsInt() desiredVersion := c.GetDesiredMajorVersionAsInt()
if c.currentMajorVersion >= desiredVersion { if c.currentMajorVersion >= desiredVersion {
if _, exists := c.ObjectMeta.Annotations[majorVersionUpgradeFailureAnnotation]; exists { // if failure annotation exists, remove it
c.removeFailuresAnnotation()
c.logger.Infof("removing failure annotation as the cluster is already up to date")
}
c.logger.Infof("cluster version up to date. current: %d, min desired: %d", c.currentMajorVersion, desiredVersion) c.logger.Infof("cluster version up to date. current: %d, min desired: %d", c.currentMajorVersion, desiredVersion)
return nil return nil
} }
@ -80,59 +150,137 @@ func (c *Cluster) majorVersionUpgrade() error {
} }
allRunning := true allRunning := true
isStandbyCluster := false
var masterPod *v1.Pod var masterPod *v1.Pod
for i, pod := range pods { for i, pod := range pods {
ps, _ := c.patroni.GetMemberData(&pod) ps, _ := c.patroni.GetMemberData(&pod)
if ps.Role == "standby_leader" {
isStandbyCluster = true
c.currentMajorVersion = ps.ServerVersion
break
}
if ps.State != "running" { if ps.State != "running" {
allRunning = false allRunning = false
c.logger.Infof("identified non running pod, potentially skipping major version upgrade") c.logger.Infof("identified non running pod, potentially skipping major version upgrade")
} }
if ps.Role == "master" { if ps.Role == "master" || ps.Role == "primary" {
masterPod = &pods[i] masterPod = &pods[i]
c.currentMajorVersion = ps.ServerVersion c.currentMajorVersion = ps.ServerVersion
} }
} }
// Recheck version with newest data from Patroni if masterPod == nil {
if c.currentMajorVersion >= desiredVersion { c.logger.Infof("no master in the cluster, skipping major version upgrade")
c.logger.Infof("recheck cluster version is already up to date. current: %d, min desired: %d", c.currentMajorVersion, desiredVersion)
return nil return nil
} }
// Recheck version with newest data from Patroni
if c.currentMajorVersion >= desiredVersion {
if _, exists := c.ObjectMeta.Annotations[majorVersionUpgradeFailureAnnotation]; exists { // if failure annotation exists, remove it
c.removeFailuresAnnotation()
c.logger.Infof("removing failure annotation as the cluster is already up to date")
}
c.logger.Infof("recheck cluster version is already up to date. current: %d, min desired: %d", c.currentMajorVersion, desiredVersion)
return nil
} else if isStandbyCluster {
c.logger.Warnf("skipping major version upgrade for %s/%s standby cluster. Re-deploy standby cluster with the required Postgres version specified", c.Namespace, c.Name)
return nil
}
if _, exists := c.ObjectMeta.Annotations[majorVersionUpgradeFailureAnnotation]; exists {
c.logger.Infof("last major upgrade failed, skipping upgrade")
return nil
}
if !isInMaintenanceWindow(c.Spec.MaintenanceWindows) {
c.logger.Infof("skipping major version upgrade, not in maintenance window")
return nil
}
members, err := c.patroni.GetClusterMembers(masterPod)
if err != nil {
c.logger.Error("could not get cluster members data from Patroni API, skipping major version upgrade")
return err
}
patroniData, err := c.patroni.GetMemberData(masterPod)
if err != nil {
c.logger.Error("could not get members data from Patroni API, skipping major version upgrade")
return err
}
patroniVer, err := semver.NewVersion(patroniData.Patroni.Version)
if err != nil {
c.logger.Error("error parsing Patroni version")
patroniVer, _ = semver.NewVersion("3.0.4")
}
verConstraint, _ := semver.NewConstraint(">= 3.0.4")
checkStreaming, _ := verConstraint.Validate(patroniVer)
for _, member := range members {
if PostgresRole(member.Role) == Leader {
continue
}
if checkStreaming && member.State != "streaming" {
c.logger.Infof("skipping major version upgrade, replica %s is not streaming from primary", member.Name)
return nil
}
if member.Lag > 16*1024*1024 {
c.logger.Infof("skipping major version upgrade, replication lag on member %s is too high", member.Name)
return nil
}
}
isUpgradeSuccess := true
numberOfPods := len(pods) numberOfPods := len(pods)
if allRunning && masterPod != nil { if allRunning {
c.logger.Infof("healthy cluster ready to upgrade, current: %d desired: %d", c.currentMajorVersion, desiredVersion) c.logger.Infof("healthy cluster ready to upgrade, current: %d desired: %d", c.currentMajorVersion, desiredVersion)
if c.currentMajorVersion < desiredVersion { if c.currentMajorVersion < desiredVersion {
defer func() error {
if err = c.criticalOperationLabel(pods, nil); err != nil {
return fmt.Errorf("failed to remove critical-operation label: %s", err)
}
return nil
}()
val := "true"
if err = c.criticalOperationLabel(pods, &val); err != nil {
return fmt.Errorf("failed to assign critical-operation label: %s", err)
}
podName := &spec.NamespacedName{Namespace: masterPod.Namespace, Name: masterPod.Name} podName := &spec.NamespacedName{Namespace: masterPod.Namespace, Name: masterPod.Name}
c.logger.Infof("triggering major version upgrade on pod %s of %d pods", masterPod.Name, numberOfPods) c.logger.Infof("triggering major version upgrade on pod %s of %d pods", masterPod.Name, numberOfPods)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Major Version Upgrade", "starting major version upgrade on pod %s of %d pods", masterPod.Name, numberOfPods) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Major Version Upgrade", "starting major version upgrade on pod %s of %d pods", masterPod.Name, numberOfPods)
upgradeCommand := fmt.Sprintf("set -o pipefail && /usr/bin/python3 /scripts/inplace_upgrade.py %d 2>&1 | tee last_upgrade.log", numberOfPods) upgradeCommand := fmt.Sprintf("set -o pipefail && /usr/bin/python3 /scripts/inplace_upgrade.py %d 2>&1 | tee last_upgrade.log", numberOfPods)
c.logger.Debugf("checking if the spilo image runs with root or non-root (check for user id=0)") c.logger.Debug("checking if the spilo image runs with root or non-root (check for user id=0)")
resultIdCheck, errIdCheck := c.ExecCommand(podName, "/bin/bash", "-c", "/usr/bin/id -u") resultIdCheck, errIdCheck := c.ExecCommand(podName, "/bin/bash", "-c", "/usr/bin/id -u")
if errIdCheck != nil { if errIdCheck != nil {
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Major Version Upgrade", "checking user id to run upgrade from %d to %d FAILED: %v", c.currentMajorVersion, desiredVersion, errIdCheck) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Major Version Upgrade", "checking user id to run upgrade from %d to %d FAILED: %v", c.currentMajorVersion, desiredVersion, errIdCheck)
} }
resultIdCheck = strings.TrimSuffix(resultIdCheck, "\n") resultIdCheck = strings.TrimSuffix(resultIdCheck, "\n")
var result string var result, scriptErrMsg string
if resultIdCheck != "0" { if resultIdCheck != "0" {
c.logger.Infof("user id was identified as: %s, hence default user is non-root already", resultIdCheck) c.logger.Infof("user id was identified as: %s, hence default user is non-root already", resultIdCheck)
result, err = c.ExecCommand(podName, "/bin/bash", "-c", upgradeCommand) result, err = c.ExecCommand(podName, "/bin/bash", "-c", upgradeCommand)
scriptErrMsg, _ = c.ExecCommand(podName, "/bin/bash", "-c", "tail -n 1 last_upgrade.log")
} else { } else {
c.logger.Infof("user id was identified as: %s, using su to reach the postgres user", resultIdCheck) c.logger.Infof("user id was identified as: %s, using su to reach the postgres user", resultIdCheck)
result, err = c.ExecCommand(podName, "/bin/su", "postgres", "-c", upgradeCommand) result, err = c.ExecCommand(podName, "/bin/su", "postgres", "-c", upgradeCommand)
scriptErrMsg, _ = c.ExecCommand(podName, "/bin/bash", "-c", "tail -n 1 last_upgrade.log")
} }
if err != nil { if err != nil {
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Major Version Upgrade", "upgrade from %d to %d FAILED: %v", c.currentMajorVersion, desiredVersion, err) isUpgradeSuccess = false
return err c.annotatePostgresResource(isUpgradeSuccess)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeWarning, "Major Version Upgrade", "upgrade from %d to %d FAILED: %v", c.currentMajorVersion, desiredVersion, scriptErrMsg)
return fmt.Errorf("%s", scriptErrMsg)
} }
c.logger.Infof("upgrade action triggered and command completed: %s", result[:100])
c.annotatePostgresResource(isUpgradeSuccess)
c.logger.Infof("upgrade action triggered and command completed: %s", result[:100])
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Major Version Upgrade", "upgrade from %d to %d finished", c.currentMajorVersion, desiredVersion) c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Major Version Upgrade", "upgrade from %d to %d finished", c.currentMajorVersion, desiredVersion)
} }
} }
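
The replica streaming-state check above is only applied when the leader's Patroni reports a version of at least 3.0.4; parsing and constraint matching go through github.com/Masterminds/semver as imported in the diff. A minimal sketch of that gate (the sample version string is made up):

package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	ver, err := semver.NewVersion("3.2.2")
	if err != nil {
		// fall back to the threshold itself when parsing fails, as the operator does
		ver, _ = semver.NewVersion("3.0.4")
	}
	constraint, _ := semver.NewConstraint(">= 3.0.4")
	checkStreaming, _ := constraint.Validate(ver)
	fmt.Println("gate replica check on streaming state:", checkStreaming)
}
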



@ -3,12 +3,11 @@ package cluster
import ( import (
"context" "context"
"fmt" "fmt"
"slices"
"sort" "sort"
"strconv" "strconv"
"time" "time"
"golang.org/x/exp/slices"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1" v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -59,7 +58,7 @@ func (c *Cluster) markRollingUpdateFlagForPod(pod *v1.Pod, msg string) error {
return nil return nil
} }
c.logger.Debugf("mark rolling update annotation for %s: reason %s", pod.Name, msg) c.logger.Infof("mark rolling update annotation for %s: reason %s", pod.Name, msg)
flag := make(map[string]string) flag := make(map[string]string)
flag[rollingUpdatePodAnnotationKey] = strconv.FormatBool(true) flag[rollingUpdatePodAnnotationKey] = strconv.FormatBool(true)
@ -110,7 +109,7 @@ func (c *Cluster) getRollingUpdateFlagFromPod(pod *v1.Pod) (flag bool) {
} }
func (c *Cluster) deletePods() error { func (c *Cluster) deletePods() error {
c.logger.Debugln("deleting pods") c.logger.Debug("deleting pods")
pods, err := c.listPods() pods, err := c.listPods()
if err != nil { if err != nil {
return err return err
@ -127,9 +126,9 @@ func (c *Cluster) deletePods() error {
} }
} }
if len(pods) > 0 { if len(pods) > 0 {
c.logger.Debugln("pods have been deleted") c.logger.Debug("pods have been deleted")
} else { } else {
c.logger.Debugln("no pods to delete") c.logger.Debug("no pods to delete")
} }
return nil return nil
@ -230,7 +229,7 @@ func (c *Cluster) MigrateMasterPod(podName spec.NamespacedName) error {
return fmt.Errorf("could not get node %q: %v", oldMaster.Spec.NodeName, err) return fmt.Errorf("could not get node %q: %v", oldMaster.Spec.NodeName, err)
} }
if !eol { if !eol {
c.logger.Debugf("no action needed: master pod is already on a live node") c.logger.Debug("no action needed: master pod is already on a live node")
return nil return nil
} }
@ -280,11 +279,16 @@ func (c *Cluster) MigrateMasterPod(podName spec.NamespacedName) error {
return fmt.Errorf("could not move pod: %v", err) return fmt.Errorf("could not move pod: %v", err)
} }
scheduleSwitchover := false
if !isInMaintenanceWindow(c.Spec.MaintenanceWindows) {
c.logger.Infof("postponing switchover, not in maintenance window")
scheduleSwitchover = true
}
err = retryutil.Retry(1*time.Minute, 5*time.Minute, err = retryutil.Retry(1*time.Minute, 5*time.Minute,
func() (bool, error) { func() (bool, error) {
err := c.Switchover(oldMaster, masterCandidateName) err := c.Switchover(oldMaster, masterCandidateName, scheduleSwitchover)
if err != nil { if err != nil {
c.logger.Errorf("could not failover to pod %q: %v", masterCandidateName, err) c.logger.Errorf("could not switchover to pod %q: %v", masterCandidateName, err)
return false, nil return false, nil
} }
return true, nil return true, nil
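
MigrateMasterPod now decides up front whether the switchover should merely be scheduled (when outside the maintenance window) and keeps wrapping the call in retryutil.Retry with a 1-minute interval and 5-minute timeout, swallowing the error inside the closure so the attempt is repeated. A stand-in retry loop with the same closure shape (retry here is a local sketch, not the operator's retryutil; millisecond timings are used so it runs quickly):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry mimics the (interval, timeout, func() (bool, error)) shape used above.
func retry(interval, timeout time.Duration, f func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := f()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timeout reached")
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	switchover := func() error {
		attempts++
		if attempts < 3 {
			return errors.New("candidate not ready")
		}
		return nil
	}
	err := retry(10*time.Millisecond, time.Second, func() (bool, error) {
		if e := switchover(); e != nil {
			fmt.Println("could not switchover:", e)
			return false, nil // retry instead of failing immediately
		}
		return true, nil
	})
	fmt.Println("final result:", err)
}
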
@ -428,9 +432,10 @@ func (c *Cluster) recreatePods(pods []v1.Pod, switchoverCandidates []spec.Namesp
} }
newRole := PostgresRole(newPod.Labels[c.OpConfig.PodRoleLabel]) newRole := PostgresRole(newPod.Labels[c.OpConfig.PodRoleLabel])
if newRole == Replica { switch newRole {
case Replica:
replicas = append(replicas, util.NameFromMeta(pod.ObjectMeta)) replicas = append(replicas, util.NameFromMeta(pod.ObjectMeta))
} else if newRole == Master { case Master:
newMasterPod = newPod newMasterPod = newPod
} }
} }
@ -445,7 +450,7 @@ func (c *Cluster) recreatePods(pods []v1.Pod, switchoverCandidates []spec.Namesp
// do not recreate master now so it will keep the update flag and switchover will be retried on next sync // do not recreate master now so it will keep the update flag and switchover will be retried on next sync
return fmt.Errorf("skipping switchover: %v", err) return fmt.Errorf("skipping switchover: %v", err)
} }
if err := c.Switchover(masterPod, masterCandidate); err != nil { if err := c.Switchover(masterPod, masterCandidate, false); err != nil {
return fmt.Errorf("could not perform switch over: %v", err) return fmt.Errorf("could not perform switch over: %v", err)
} }
} else if newMasterPod == nil && len(replicas) == 0 { } else if newMasterPod == nil && len(replicas) == 0 {
@ -480,6 +485,9 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
if PostgresRole(member.Role) == SyncStandby { if PostgresRole(member.Role) == SyncStandby {
syncCandidates = append(syncCandidates, member) syncCandidates = append(syncCandidates, member)
} }
if PostgresRole(member.Role) != Leader && PostgresRole(member.Role) != StandbyLeader && slices.Contains([]string{"running", "streaming", "in archive recovery"}, member.State) {
candidates = append(candidates, member)
}
} }
// if synchronous mode is enabled and no SyncStandy was found // if synchronous mode is enabled and no SyncStandy was found
@ -489,6 +497,12 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
return false, nil return false, nil
} }
// retry also in asynchronous mode when no replica candidate was found
if !c.Spec.Patroni.SynchronousMode && len(candidates) == 0 {
c.logger.Warnf("no replica candidate found - retrying fetching cluster members")
return false, nil
}
return true, nil return true, nil
}, },
) )
@ -502,25 +516,13 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
return syncCandidates[i].Lag < syncCandidates[j].Lag return syncCandidates[i].Lag < syncCandidates[j].Lag
}) })
return spec.NamespacedName{Namespace: master.Namespace, Name: syncCandidates[0].Name}, nil return spec.NamespacedName{Namespace: master.Namespace, Name: syncCandidates[0].Name}, nil
} else {
// in asynchronous mode find running replicas
for _, member := range members {
if PostgresRole(member.Role) == Leader || PostgresRole(member.Role) == StandbyLeader {
continue
} }
if slices.Contains([]string{"running", "streaming", "in archive recovery"}, member.State) {
candidates = append(candidates, member)
}
}
if len(candidates) > 0 { if len(candidates) > 0 {
sort.Slice(candidates, func(i, j int) bool { sort.Slice(candidates, func(i, j int) bool {
return candidates[i].Lag < candidates[j].Lag return candidates[i].Lag < candidates[j].Lag
}) })
return spec.NamespacedName{Namespace: master.Namespace, Name: candidates[0].Name}, nil return spec.NamespacedName{Namespace: master.Namespace, Name: candidates[0].Name}, nil
} }
}
return spec.NamespacedName{}, fmt.Errorf("no switchover candidate found") return spec.NamespacedName{}, fmt.Errorf("no switchover candidate found")
} }
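
With the change above, switchover candidates are collected in the same loop as sync standbys: any member that is neither leader nor standby_leader and whose state is running, streaming or in archive recovery qualifies, and the one with the lowest reported lag is picked. A simplified sketch of that selection (the member struct is a cut-down stand-in for Patroni's member data):

package main

import (
	"fmt"
	"slices"
	"sort"
)

type member struct {
	Name  string
	Role  string
	State string
	Lag   uint64
}

func switchoverCandidate(members []member) (string, error) {
	goodStates := []string{"running", "streaming", "in archive recovery"}
	candidates := []member{}
	for _, m := range members {
		if m.Role == "leader" || m.Role == "standby_leader" {
			continue
		}
		if slices.Contains(goodStates, m.State) {
			candidates = append(candidates, m)
		}
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no switchover candidate found")
	}
	// prefer the replica with the lowest reported lag
	sort.Slice(candidates, func(i, j int) bool { return candidates[i].Lag < candidates[j].Lag })
	return candidates[0].Name, nil
}

func main() {
	name, err := switchoverCandidate([]member{
		{Name: "acid-test-cluster-0", Role: "leader", State: "running"},
		{Name: "acid-test-cluster-1", Role: "replica", State: "streaming", Lag: 5},
		{Name: "acid-test-cluster-2", Role: "replica", State: "running", Lag: 2},
	})
	fmt.Println(name, err)
}
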


@ -62,7 +62,7 @@ func TestGetSwitchoverCandidate(t *testing.T) {
expectedError: nil, expectedError: nil,
}, },
{ {
subtest: "choose first replica when lag is equal evrywhere", subtest: "choose first replica when lag is equal everywhere",
clusterJson: `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 1}, {"name": "acid-test-cluster-1", "role": "replica", "state": "streaming", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 1, "lag": 5}, {"name": "acid-test-cluster-2", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": 5}]}`, clusterJson: `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 1}, {"name": "acid-test-cluster-1", "role": "replica", "state": "streaming", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 1, "lag": 5}, {"name": "acid-test-cluster-2", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": 5}]}`,
syncModeEnabled: false, syncModeEnabled: false,
expectedCandidate: spec.NamespacedName{Namespace: namespace, Name: "acid-test-cluster-1"}, expectedCandidate: spec.NamespacedName{Namespace: namespace, Name: "acid-test-cluster-1"},
@ -73,7 +73,7 @@ func TestGetSwitchoverCandidate(t *testing.T) {
clusterJson: `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 2}, {"name": "acid-test-cluster-1", "role": "replica", "state": "starting", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 2}]}`, clusterJson: `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 2}, {"name": "acid-test-cluster-1", "role": "replica", "state": "starting", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 2}]}`,
syncModeEnabled: false, syncModeEnabled: false,
expectedCandidate: spec.NamespacedName{}, expectedCandidate: spec.NamespacedName{},
expectedError: fmt.Errorf("no switchover candidate found"), expectedError: fmt.Errorf("failed to get Patroni cluster members: unexpected end of JSON input"),
}, },
{ {
subtest: "replicas with different status", subtest: "replicas with different status",


@ -23,28 +23,49 @@ const (
) )
func (c *Cluster) listResources() error { func (c *Cluster) listResources() error {
if c.PodDisruptionBudget != nil { if c.PrimaryPodDisruptionBudget != nil {
c.logger.Infof("found pod disruption budget: %q (uid: %q)", util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta), c.PodDisruptionBudget.UID) c.logger.Infof("found primary pod disruption budget: %q (uid: %q)", util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta), c.PrimaryPodDisruptionBudget.UID)
}
if c.CriticalOpPodDisruptionBudget != nil {
c.logger.Infof("found pod disruption budget for critical operations: %q (uid: %q)", util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta), c.CriticalOpPodDisruptionBudget.UID)
} }
if c.Statefulset != nil { if c.Statefulset != nil {
c.logger.Infof("found statefulset: %q (uid: %q)", util.NameFromMeta(c.Statefulset.ObjectMeta), c.Statefulset.UID) c.logger.Infof("found statefulset: %q (uid: %q)", util.NameFromMeta(c.Statefulset.ObjectMeta), c.Statefulset.UID)
} }
for _, obj := range c.Secrets { for appId, stream := range c.Streams {
c.logger.Infof("found secret: %q (uid: %q) namesapce: %s", util.NameFromMeta(obj.ObjectMeta), obj.UID, obj.ObjectMeta.Namespace) c.logger.Infof("found stream: %q with application id %q (uid: %q)", util.NameFromMeta(stream.ObjectMeta), appId, stream.UID)
} }
if !c.patroniKubernetesUseConfigMaps() { if c.LogicalBackupJob != nil {
for role, endpoint := range c.Endpoints { c.logger.Infof("found logical backup job: %q (uid: %q)", util.NameFromMeta(c.LogicalBackupJob.ObjectMeta), c.LogicalBackupJob.UID)
c.logger.Infof("found %s endpoint: %q (uid: %q)", role, util.NameFromMeta(endpoint.ObjectMeta), endpoint.UID)
} }
for uid, secret := range c.Secrets {
c.logger.Infof("found secret: %q (uid: %q) namespace: %s", util.NameFromMeta(secret.ObjectMeta), uid, secret.ObjectMeta.Namespace)
} }
for role, service := range c.Services { for role, service := range c.Services {
c.logger.Infof("found %s service: %q (uid: %q)", role, util.NameFromMeta(service.ObjectMeta), service.UID) c.logger.Infof("found %s service: %q (uid: %q)", role, util.NameFromMeta(service.ObjectMeta), service.UID)
} }
for role, endpoint := range c.Endpoints {
c.logger.Infof("found %s endpoint: %q (uid: %q)", role, util.NameFromMeta(endpoint.ObjectMeta), endpoint.UID)
}
if c.patroniKubernetesUseConfigMaps() {
for suffix, configmap := range c.PatroniConfigMaps {
c.logger.Infof("found %s Patroni config map: %q (uid: %q)", suffix, util.NameFromMeta(configmap.ObjectMeta), configmap.UID)
}
} else {
for suffix, endpoint := range c.PatroniEndpoints {
c.logger.Infof("found %s Patroni endpoint: %q (uid: %q)", suffix, util.NameFromMeta(endpoint.ObjectMeta), endpoint.UID)
}
}
pods, err := c.listPods() pods, err := c.listPods()
if err != nil { if err != nil {
return fmt.Errorf("could not get the list of pods: %v", err) return fmt.Errorf("could not get the list of pods: %v", err)
@ -54,13 +75,17 @@ func (c *Cluster) listResources() error {
c.logger.Infof("found pod: %q (uid: %q)", util.NameFromMeta(obj.ObjectMeta), obj.UID) c.logger.Infof("found pod: %q (uid: %q)", util.NameFromMeta(obj.ObjectMeta), obj.UID)
} }
pvcs, err := c.listPersistentVolumeClaims() for uid, pvc := range c.VolumeClaims {
if err != nil { c.logger.Infof("found persistent volume claim: %q (uid: %q)", util.NameFromMeta(pvc.ObjectMeta), uid)
return fmt.Errorf("could not get the list of PVCs: %v", err)
} }
for _, obj := range pvcs { for role, poolerObjs := range c.ConnectionPooler {
c.logger.Infof("found PVC: %q (uid: %q)", util.NameFromMeta(obj.ObjectMeta), obj.UID) if poolerObjs.Deployment != nil {
c.logger.Infof("found %s pooler deployment: %q (uid: %q) ", role, util.NameFromMeta(poolerObjs.Deployment.ObjectMeta), poolerObjs.Deployment.UID)
}
if poolerObjs.Service != nil {
c.logger.Infof("found %s pooler service: %q (uid: %q) ", role, util.NameFromMeta(poolerObjs.Service.ObjectMeta), poolerObjs.Service.UID)
}
} }
return nil return nil
@ -69,12 +94,12 @@ func (c *Cluster) listResources() error {
func (c *Cluster) createStatefulSet() (*appsv1.StatefulSet, error) { func (c *Cluster) createStatefulSet() (*appsv1.StatefulSet, error) {
c.setProcessName("creating statefulset") c.setProcessName("creating statefulset")
// check if it's allowed that spec contains initContainers // check if it's allowed that spec contains initContainers
if c.Spec.InitContainers != nil && len(c.Spec.InitContainers) > 0 && if len(c.Spec.InitContainers) > 0 &&
c.OpConfig.EnableInitContainers != nil && !(*c.OpConfig.EnableInitContainers) { c.OpConfig.EnableInitContainers != nil && !(*c.OpConfig.EnableInitContainers) {
return nil, fmt.Errorf("initContainers specified but disabled in configuration") return nil, fmt.Errorf("initContainers specified but disabled in configuration")
} }
// check if it's allowed that spec contains sidecars // check if it's allowed that spec contains sidecars
if c.Spec.Sidecars != nil && len(c.Spec.Sidecars) > 0 && if len(c.Spec.Sidecars) > 0 &&
c.OpConfig.EnableSidecars != nil && !(*c.OpConfig.EnableSidecars) { c.OpConfig.EnableSidecars != nil && !(*c.OpConfig.EnableSidecars) {
return nil, fmt.Errorf("sidecar containers specified but disabled in configuration") return nil, fmt.Errorf("sidecar containers specified but disabled in configuration")
} }
@ -142,8 +167,8 @@ func (c *Cluster) preScaleDown(newStatefulSet *appsv1.StatefulSet) error {
return fmt.Errorf("pod %q does not belong to cluster", podName) return fmt.Errorf("pod %q does not belong to cluster", podName)
} }
if err := c.patroni.Switchover(&masterPod[0], masterCandidatePod.Name); err != nil { if err := c.patroni.Switchover(&masterPod[0], masterCandidatePod.Name, ""); err != nil {
return fmt.Errorf("could not failover: %v", err) return fmt.Errorf("could not switchover: %v", err)
} }
return nil return nil
@ -162,7 +187,7 @@ func (c *Cluster) updateStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
c.logger.Warningf("could not scale down: %v", err) c.logger.Warningf("could not scale down: %v", err)
} }
} }
c.logger.Debugf("updating statefulset") c.logger.Debug("updating statefulset")
patchData, err := specPatch(newStatefulSet.Spec) patchData, err := specPatch(newStatefulSet.Spec)
if err != nil { if err != nil {
@ -193,7 +218,7 @@ func (c *Cluster) replaceStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
} }
statefulSetName := util.NameFromMeta(c.Statefulset.ObjectMeta) statefulSetName := util.NameFromMeta(c.Statefulset.ObjectMeta)
c.logger.Debugf("replacing statefulset") c.logger.Debug("replacing statefulset")
// Delete the current statefulset without deleting the pods // Delete the current statefulset without deleting the pods
deletePropagationPolicy := metav1.DeletePropagationOrphan deletePropagationPolicy := metav1.DeletePropagationOrphan
@ -207,7 +232,7 @@ func (c *Cluster) replaceStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
// make sure we clear the stored statefulset status if the subsequent create fails. // make sure we clear the stored statefulset status if the subsequent create fails.
c.Statefulset = nil c.Statefulset = nil
// wait until the statefulset is truly deleted // wait until the statefulset is truly deleted
c.logger.Debugf("waiting for the statefulset to be deleted") c.logger.Debug("waiting for the statefulset to be deleted")
err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout, err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout,
func() (bool, error) { func() (bool, error) {
@ -241,7 +266,7 @@ func (c *Cluster) replaceStatefulSet(newStatefulSet *appsv1.StatefulSet) error {
func (c *Cluster) deleteStatefulSet() error { func (c *Cluster) deleteStatefulSet() error {
c.setProcessName("deleting statefulset") c.setProcessName("deleting statefulset")
c.logger.Debugln("deleting statefulset") c.logger.Debug("deleting statefulset")
if c.Statefulset == nil { if c.Statefulset == nil {
c.logger.Debug("there is no statefulset in the cluster") c.logger.Debug("there is no statefulset in the cluster")
return nil return nil
@ -263,10 +288,10 @@ func (c *Cluster) deleteStatefulSet() error {
if c.OpConfig.EnablePersistentVolumeClaimDeletion != nil && *c.OpConfig.EnablePersistentVolumeClaimDeletion { if c.OpConfig.EnablePersistentVolumeClaimDeletion != nil && *c.OpConfig.EnablePersistentVolumeClaimDeletion {
if err := c.deletePersistentVolumeClaims(); err != nil { if err := c.deletePersistentVolumeClaims(); err != nil {
return fmt.Errorf("could not delete PersistentVolumeClaims: %v", err) return fmt.Errorf("could not delete persistent volume claims: %v", err)
} }
} else { } else {
c.logger.Info("not deleting PersistentVolumeClaims because disabled in configuration") c.logger.Info("not deleting persistent volume claims because disabled in configuration")
} }
return nil return nil
@ -286,33 +311,14 @@ func (c *Cluster) createService(role PostgresRole) (*v1.Service, error) {
} }
func (c *Cluster) updateService(role PostgresRole, oldService *v1.Service, newService *v1.Service) (*v1.Service, error) { func (c *Cluster) updateService(role PostgresRole, oldService *v1.Service, newService *v1.Service) (*v1.Service, error) {
var ( var err error
svc *v1.Service svc := oldService
err error
)
c.setProcessName("updating %v service", role)
serviceName := util.NameFromMeta(oldService.ObjectMeta) serviceName := util.NameFromMeta(oldService.ObjectMeta)
match, reason := c.compareServices(oldService, newService)
// update the service annotation in order to propagate ELB notation. if !match {
if len(newService.ObjectMeta.Annotations) > 0 { c.logServiceChanges(role, oldService, newService, false, reason)
if annotationsPatchData, err := metaAnnotationsPatch(newService.ObjectMeta.Annotations); err == nil { c.setProcessName("updating %v service", role)
_, err = c.KubeClient.Services(serviceName.Namespace).Patch(
context.TODO(),
serviceName.Name,
types.MergePatchType,
[]byte(annotationsPatchData),
metav1.PatchOptions{},
"")
if err != nil {
return nil, fmt.Errorf("could not replace annotations for the service %q: %v", serviceName, err)
}
} else {
return nil, fmt.Errorf("could not form patch for the service metadata: %v", err)
}
}
// now, patch the service spec, but when disabling LoadBalancers do update instead // now, patch the service spec, but when disabling LoadBalancers do update instead
// patch does not work because of LoadBalancerSourceRanges field (even if set to nil) // patch does not work because of LoadBalancerSourceRanges field (even if set to nil)
@ -321,20 +327,21 @@ func (c *Cluster) updateService(role PostgresRole, oldService *v1.Service, newSe
if newServiceType == "ClusterIP" && newServiceType != oldServiceType { if newServiceType == "ClusterIP" && newServiceType != oldServiceType {
newService.ResourceVersion = oldService.ResourceVersion newService.ResourceVersion = oldService.ResourceVersion
newService.Spec.ClusterIP = oldService.Spec.ClusterIP newService.Spec.ClusterIP = oldService.Spec.ClusterIP
}
svc, err = c.KubeClient.Services(serviceName.Namespace).Update(context.TODO(), newService, metav1.UpdateOptions{}) svc, err = c.KubeClient.Services(serviceName.Namespace).Update(context.TODO(), newService, metav1.UpdateOptions{})
if err != nil { if err != nil {
return nil, fmt.Errorf("could not update service %q: %v", serviceName, err) return nil, fmt.Errorf("could not update service %q: %v", serviceName, err)
} }
} else {
patchData, err := specPatch(newService.Spec)
if err != nil {
return nil, fmt.Errorf("could not form patch for the service %q: %v", serviceName, err)
} }
svc, err = c.KubeClient.Services(serviceName.Namespace).Patch( if changed, _ := c.compareAnnotations(oldService.Annotations, newService.Annotations, nil); changed {
context.TODO(), serviceName.Name, types.MergePatchType, patchData, metav1.PatchOptions{}, "") patchData, err := metaAnnotationsPatch(newService.Annotations)
if err != nil { if err != nil {
return nil, fmt.Errorf("could not patch service %q: %v", serviceName, err) return nil, fmt.Errorf("could not form patch for service %q annotations: %v", oldService.Name, err)
}
svc, err = c.KubeClient.Services(serviceName.Namespace).Patch(context.TODO(), newService.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
if err != nil {
return nil, fmt.Errorf("could not patch annotations for service %q: %v", oldService.Name, err)
} }
} }
@ -342,7 +349,8 @@ func (c *Cluster) updateService(role PostgresRole, oldService *v1.Service, newSe
} }
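For reference, the annotation update above is a metadata-only merge patch; the exact JSON produced by metaAnnotationsPatch is not shown in this diff, so the shape below is a hedged sketch (annotation key and value are invented for illustration):

// Assumed shape of the merge patch built by metaAnnotationsPatch (illustration only).
// Sent with types.MergePatchType, it updates annotations without touching the service spec.
patch := []byte(`{"metadata":{"annotations":{"acid.zalan.do/example":"value"}}}`)
_, err := c.KubeClient.Services(oldService.Namespace).Patch(
	context.TODO(), oldService.Name, types.MergePatchType, patch, metav1.PatchOptions{})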
func (c *Cluster) deleteService(role PostgresRole) error { func (c *Cluster) deleteService(role PostgresRole) error {
c.logger.Debugf("deleting service %s", role) c.setProcessName("deleting service")
c.logger.Debugf("deleting %s service", role)
if c.Services[role] == nil { if c.Services[role] == nil {
c.logger.Debugf("No service for %s role was found, nothing to delete", role) c.logger.Debugf("No service for %s role was found, nothing to delete", role)
@ -350,11 +358,10 @@ func (c *Cluster) deleteService(role PostgresRole) error {
} }
if err := c.KubeClient.Services(c.Services[role].Namespace).Delete(context.TODO(), c.Services[role].Name, c.deleteOptions); err != nil { if err := c.KubeClient.Services(c.Services[role].Namespace).Delete(context.TODO(), c.Services[role].Name, c.deleteOptions); err != nil {
if k8sutil.ResourceNotFound(err) { if !k8sutil.ResourceNotFound(err) {
c.logger.Debugf("%s service has already been deleted", role) return fmt.Errorf("could not delete %s service: %v", role, err)
} else if err != nil {
return err
} }
c.logger.Debugf("%s service has already been deleted", role)
} }
c.logger.Infof("%s service %q has been deleted", role, util.NameFromMeta(c.Services[role].ObjectMeta)) c.logger.Infof("%s service %q has been deleted", role, util.NameFromMeta(c.Services[role].ObjectMeta))
@ -415,59 +422,128 @@ func (c *Cluster) generateEndpointSubsets(role PostgresRole) []v1.EndpointSubset
return result return result
} }
func (c *Cluster) createPodDisruptionBudget() (*policyv1.PodDisruptionBudget, error) { func (c *Cluster) createPrimaryPodDisruptionBudget() error {
podDisruptionBudgetSpec := c.generatePodDisruptionBudget() c.logger.Debug("creating primary pod disruption budget")
if c.PrimaryPodDisruptionBudget != nil {
c.logger.Warning("primary pod disruption budget already exists in the cluster")
return nil
}
podDisruptionBudgetSpec := c.generatePrimaryPodDisruptionBudget()
podDisruptionBudget, err := c.KubeClient. podDisruptionBudget, err := c.KubeClient.
PodDisruptionBudgets(podDisruptionBudgetSpec.Namespace). PodDisruptionBudgets(podDisruptionBudgetSpec.Namespace).
Create(context.TODO(), podDisruptionBudgetSpec, metav1.CreateOptions{}) Create(context.TODO(), podDisruptionBudgetSpec, metav1.CreateOptions{})
if err != nil { if err != nil {
return nil, err return err
} }
c.PodDisruptionBudget = podDisruptionBudget c.logger.Infof("primary pod disruption budget %q has been successfully created", util.NameFromMeta(podDisruptionBudget.ObjectMeta))
c.PrimaryPodDisruptionBudget = podDisruptionBudget
return podDisruptionBudget, nil return nil
} }
func (c *Cluster) updatePodDisruptionBudget(pdb *policyv1.PodDisruptionBudget) error { func (c *Cluster) createCriticalOpPodDisruptionBudget() error {
if c.PodDisruptionBudget == nil { c.logger.Debug("creating pod disruption budget for critical operations")
return fmt.Errorf("there is no pod disruption budget in the cluster") if c.CriticalOpPodDisruptionBudget != nil {
c.logger.Warning("pod disruption budget for critical operations already exists in the cluster")
return nil
} }
if err := c.deletePodDisruptionBudget(); err != nil { podDisruptionBudgetSpec := c.generateCriticalOpPodDisruptionBudget()
return fmt.Errorf("could not delete pod disruption budget: %v", err) podDisruptionBudget, err := c.KubeClient.
PodDisruptionBudgets(podDisruptionBudgetSpec.Namespace).
Create(context.TODO(), podDisruptionBudgetSpec, metav1.CreateOptions{})
if err != nil {
return err
}
c.logger.Infof("pod disruption budget for critical operations %q has been successfully created", util.NameFromMeta(podDisruptionBudget.ObjectMeta))
c.CriticalOpPodDisruptionBudget = podDisruptionBudget
return nil
}
func (c *Cluster) createPodDisruptionBudgets() error {
errors := make([]string, 0)
err := c.createPrimaryPodDisruptionBudget()
if err != nil {
errors = append(errors, fmt.Sprintf("could not create primary pod disruption budget: %v", err))
}
err = c.createCriticalOpPodDisruptionBudget()
if err != nil {
errors = append(errors, fmt.Sprintf("could not create pod disruption budget for critical operations: %v", err))
}
if len(errors) > 0 {
return fmt.Errorf("%v", strings.Join(errors, `', '`))
}
return nil
}
func (c *Cluster) updatePrimaryPodDisruptionBudget(pdb *policyv1.PodDisruptionBudget) error {
c.logger.Debug("updating primary pod disruption budget")
if c.PrimaryPodDisruptionBudget == nil {
return fmt.Errorf("there is no primary pod disruption budget in the cluster")
}
if err := c.deletePrimaryPodDisruptionBudget(); err != nil {
return fmt.Errorf("could not delete primary pod disruption budget: %v", err)
} }
newPdb, err := c.KubeClient. newPdb, err := c.KubeClient.
PodDisruptionBudgets(pdb.Namespace). PodDisruptionBudgets(pdb.Namespace).
Create(context.TODO(), pdb, metav1.CreateOptions{}) Create(context.TODO(), pdb, metav1.CreateOptions{})
if err != nil { if err != nil {
return fmt.Errorf("could not create pod disruption budget: %v", err) return fmt.Errorf("could not create primary pod disruption budget: %v", err)
} }
c.PodDisruptionBudget = newPdb c.PrimaryPodDisruptionBudget = newPdb
return nil return nil
} }
func (c *Cluster) deletePodDisruptionBudget() error { func (c *Cluster) updateCriticalOpPodDisruptionBudget(pdb *policyv1.PodDisruptionBudget) error {
c.logger.Debug("deleting pod disruption budget") c.logger.Debug("updating pod disruption budget for critical operations")
if c.PodDisruptionBudget == nil { if c.CriticalOpPodDisruptionBudget == nil {
c.logger.Debug("there is no pod disruption budget in the cluster") return fmt.Errorf("there is no pod disruption budget for critical operations in the cluster")
}
if err := c.deleteCriticalOpPodDisruptionBudget(); err != nil {
return fmt.Errorf("could not delete pod disruption budget for critical operations: %v", err)
}
newPdb, err := c.KubeClient.
PodDisruptionBudgets(pdb.Namespace).
Create(context.TODO(), pdb, metav1.CreateOptions{})
if err != nil {
return fmt.Errorf("could not create pod disruption budget for critical operations: %v", err)
}
c.CriticalOpPodDisruptionBudget = newPdb
return nil return nil
} }
pdbName := util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta) func (c *Cluster) deletePrimaryPodDisruptionBudget() error {
c.logger.Debug("deleting primary pod disruption budget")
if c.PrimaryPodDisruptionBudget == nil {
c.logger.Debug("there is no primary pod disruption budget in the cluster")
return nil
}
pdbName := util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta)
err := c.KubeClient. err := c.KubeClient.
PodDisruptionBudgets(c.PodDisruptionBudget.Namespace). PodDisruptionBudgets(c.PrimaryPodDisruptionBudget.Namespace).
Delete(context.TODO(), c.PodDisruptionBudget.Name, c.deleteOptions) Delete(context.TODO(), c.PrimaryPodDisruptionBudget.Name, c.deleteOptions)
if k8sutil.ResourceNotFound(err) { if k8sutil.ResourceNotFound(err) {
c.logger.Debugf("PodDisruptionBudget %q has already been deleted", util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta)) c.logger.Debugf("PodDisruptionBudget %q has already been deleted", util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta))
} else if err != nil { } else if err != nil {
return fmt.Errorf("could not delete PodDisruptionBudget: %v", err) return fmt.Errorf("could not delete primary pod disruption budget: %v", err)
} }
c.logger.Infof("pod disruption budget %q has been deleted", util.NameFromMeta(c.PodDisruptionBudget.ObjectMeta)) c.logger.Infof("pod disruption budget %q has been deleted", util.NameFromMeta(c.PrimaryPodDisruptionBudget.ObjectMeta))
c.PodDisruptionBudget = nil c.PrimaryPodDisruptionBudget = nil
err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout, err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout,
func() (bool, error) { func() (bool, error) {
@ -481,26 +557,80 @@ func (c *Cluster) deletePodDisruptionBudget() error {
return false, err2 return false, err2
}) })
if err != nil { if err != nil {
return fmt.Errorf("could not delete pod disruption budget: %v", err) return fmt.Errorf("could not delete primary pod disruption budget: %v", err)
} }
return nil return nil
} }
func (c *Cluster) deleteCriticalOpPodDisruptionBudget() error {
c.logger.Debug("deleting pod disruption budget for critical operations")
if c.CriticalOpPodDisruptionBudget == nil {
c.logger.Debug("there is no pod disruption budget for critical operations in the cluster")
return nil
}
pdbName := util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta)
err := c.KubeClient.
PodDisruptionBudgets(c.CriticalOpPodDisruptionBudget.Namespace).
Delete(context.TODO(), c.CriticalOpPodDisruptionBudget.Name, c.deleteOptions)
if k8sutil.ResourceNotFound(err) {
c.logger.Debugf("PodDisruptionBudget %q has already been deleted", util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta))
} else if err != nil {
return fmt.Errorf("could not delete pod disruption budget for critical operations: %v", err)
}
c.logger.Infof("pod disruption budget %q has been deleted", util.NameFromMeta(c.CriticalOpPodDisruptionBudget.ObjectMeta))
c.CriticalOpPodDisruptionBudget = nil
err = retryutil.Retry(c.OpConfig.ResourceCheckInterval, c.OpConfig.ResourceCheckTimeout,
func() (bool, error) {
_, err2 := c.KubeClient.PodDisruptionBudgets(pdbName.Namespace).Get(context.TODO(), pdbName.Name, metav1.GetOptions{})
if err2 == nil {
return false, nil
}
if k8sutil.ResourceNotFound(err2) {
return true, nil
}
return false, err2
})
if err != nil {
return fmt.Errorf("could not delete pod disruption budget for critical operations: %v", err)
}
return nil
}
func (c *Cluster) deletePodDisruptionBudgets() error {
errors := make([]string, 0)
if err := c.deletePrimaryPodDisruptionBudget(); err != nil {
errors = append(errors, fmt.Sprintf("%v", err))
}
if err := c.deleteCriticalOpPodDisruptionBudget(); err != nil {
errors = append(errors, fmt.Sprintf("%v", err))
}
if len(errors) > 0 {
return fmt.Errorf("%v", strings.Join(errors, `', '`))
}
return nil
}
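The create and delete helpers above all follow the same aggregation pattern: attempt every step, collect failures, and return one joined error so a single failure does not abort the rest of the work. A minimal, self-contained sketch of that pattern; the deleteAll name and the steps are illustrative, not operator API:

package main

import (
	"fmt"
	"strings"
)

// deleteAll mirrors the aggregation used by deletePodDisruptionBudgets,
// deletePatroniResources and deleteSecrets: run every step, collect failures,
// and return a single joined error instead of stopping at the first one.
func deleteAll(steps ...func() error) error {
	errs := make([]string, 0)
	for _, step := range steps {
		if err := step(); err != nil {
			errs = append(errs, err.Error())
		}
	}
	if len(errs) > 0 {
		return fmt.Errorf("%s", strings.Join(errs, ", "))
	}
	return nil
}

func main() {
	err := deleteAll(
		func() error { return nil },
		func() error { return fmt.Errorf("could not delete primary pod disruption budget") },
	)
	fmt.Println(err) // prints the aggregated error from the failing step
}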
func (c *Cluster) deleteEndpoint(role PostgresRole) error { func (c *Cluster) deleteEndpoint(role PostgresRole) error {
c.setProcessName("deleting endpoint") c.setProcessName("deleting endpoint")
c.logger.Debugln("deleting endpoint") c.logger.Debugf("deleting %s endpoint", role)
if c.Endpoints[role] == nil { if c.Endpoints[role] == nil {
c.logger.Debugf("there is no %s endpoint in the cluster", role) c.logger.Debugf("there is no %s endpoint in the cluster", role)
return nil return nil
} }
if err := c.KubeClient.Endpoints(c.Endpoints[role].Namespace).Delete(context.TODO(), c.Endpoints[role].Name, c.deleteOptions); err != nil { if err := c.KubeClient.Endpoints(c.Endpoints[role].Namespace).Delete(context.TODO(), c.Endpoints[role].Name, c.deleteOptions); err != nil {
if k8sutil.ResourceNotFound(err) { if !k8sutil.ResourceNotFound(err) {
c.logger.Debugf("%s endpoint has already been deleted", role) return fmt.Errorf("could not delete %s endpoint: %v", role, err)
} else if err != nil {
return fmt.Errorf("could not delete endpoint: %v", err)
} }
c.logger.Debugf("%s endpoint has already been deleted", role)
} }
c.logger.Infof("%s endpoint %q has been deleted", role, util.NameFromMeta(c.Endpoints[role].ObjectMeta)) c.logger.Infof("%s endpoint %q has been deleted", role, util.NameFromMeta(c.Endpoints[role].ObjectMeta))
@ -509,12 +639,83 @@ func (c *Cluster) deleteEndpoint(role PostgresRole) error {
return nil return nil
} }
func (c *Cluster) deletePatroniResources() error {
c.setProcessName("deleting Patroni resources")
errors := make([]string, 0)
if err := c.deleteService(Patroni); err != nil {
errors = append(errors, fmt.Sprintf("%v", err))
}
for _, suffix := range patroniObjectSuffixes {
if c.patroniKubernetesUseConfigMaps() {
if err := c.deletePatroniConfigMap(suffix); err != nil {
errors = append(errors, fmt.Sprintf("%v", err))
}
} else {
if err := c.deletePatroniEndpoint(suffix); err != nil {
errors = append(errors, fmt.Sprintf("%v", err))
}
}
}
if len(errors) > 0 {
return fmt.Errorf("%v", strings.Join(errors, `', '`))
}
return nil
}
func (c *Cluster) deletePatroniConfigMap(suffix string) error {
c.setProcessName("deleting Patroni config map")
c.logger.Debugf("deleting %s Patroni config map", suffix)
cm := c.PatroniConfigMaps[suffix]
if cm == nil {
c.logger.Debugf("there is no %s Patroni config map in the cluster", suffix)
return nil
}
if err := c.KubeClient.ConfigMaps(cm.Namespace).Delete(context.TODO(), cm.Name, c.deleteOptions); err != nil {
if !k8sutil.ResourceNotFound(err) {
return fmt.Errorf("could not delete %s Patroni config map %q: %v", suffix, cm.Name, err)
}
c.logger.Debugf("%s Patroni config map has already been deleted", suffix)
}
c.logger.Infof("%s Patroni config map %q has been deleted", suffix, util.NameFromMeta(cm.ObjectMeta))
delete(c.PatroniConfigMaps, suffix)
return nil
}
func (c *Cluster) deletePatroniEndpoint(suffix string) error {
c.setProcessName("deleting Patroni endpoint")
c.logger.Debugf("deleting %s Patroni endpoint", suffix)
ep := c.PatroniEndpoints[suffix]
if ep == nil {
c.logger.Debugf("there is no %s Patroni endpoint in the cluster", suffix)
return nil
}
if err := c.KubeClient.Endpoints(ep.Namespace).Delete(context.TODO(), ep.Name, c.deleteOptions); err != nil {
if !k8sutil.ResourceNotFound(err) {
return fmt.Errorf("could not delete %s Patroni endpoint %q: %v", suffix, ep.Name, err)
}
c.logger.Debugf("%s Patroni endpoint has already been deleted", suffix)
}
c.logger.Infof("%s Patroni endpoint %q has been deleted", suffix, util.NameFromMeta(ep.ObjectMeta))
delete(c.PatroniEndpoints, suffix)
return nil
}
func (c *Cluster) deleteSecrets() error { func (c *Cluster) deleteSecrets() error {
c.setProcessName("deleting secrets") c.setProcessName("deleting secrets")
errors := make([]string, 0) errors := make([]string, 0)
for uid, secret := range c.Secrets { for uid := range c.Secrets {
err := c.deleteSecret(uid, *secret) err := c.deleteSecret(uid)
if err != nil { if err != nil {
errors = append(errors, fmt.Sprintf("%v", err)) errors = append(errors, fmt.Sprintf("%v", err))
} }
@ -527,8 +728,9 @@ func (c *Cluster) deleteSecrets() error {
return nil return nil
} }
func (c *Cluster) deleteSecret(uid types.UID, secret v1.Secret) error { func (c *Cluster) deleteSecret(uid types.UID) error {
c.setProcessName("deleting secret") c.setProcessName("deleting secret")
secret := c.Secrets[uid]
secretName := util.NameFromMeta(secret.ObjectMeta) secretName := util.NameFromMeta(secret.ObjectMeta)
c.logger.Debugf("deleting secret %q", secretName) c.logger.Debugf("deleting secret %q", secretName)
err := c.KubeClient.Secrets(secret.Namespace).Delete(context.TODO(), secret.Name, c.deleteOptions) err := c.KubeClient.Secrets(secret.Namespace).Delete(context.TODO(), secret.Name, c.deleteOptions)
@ -556,12 +758,12 @@ func (c *Cluster) createLogicalBackupJob() (err error) {
if err != nil { if err != nil {
return fmt.Errorf("could not generate k8s cron job spec: %v", err) return fmt.Errorf("could not generate k8s cron job spec: %v", err)
} }
c.logger.Debugf("Generated cronJobSpec: %v", logicalBackupJobSpec)
_, err = c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Create(context.TODO(), logicalBackupJobSpec, metav1.CreateOptions{}) cronJob, err := c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Create(context.TODO(), logicalBackupJobSpec, metav1.CreateOptions{})
if err != nil { if err != nil {
return fmt.Errorf("could not create k8s cron job: %v", err) return fmt.Errorf("could not create k8s cron job: %v", err)
} }
c.LogicalBackupJob = cronJob
return nil return nil
} }
@ -575,7 +777,7 @@ func (c *Cluster) patchLogicalBackupJob(newJob *batchv1.CronJob) error {
} }
// update the backup job spec // update the backup job spec
_, err = c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Patch( cronJob, err := c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Patch(
context.TODO(), context.TODO(),
c.getLogicalBackupJobName(), c.getLogicalBackupJobName(),
types.MergePatchType, types.MergePatchType,
@ -585,20 +787,24 @@ func (c *Cluster) patchLogicalBackupJob(newJob *batchv1.CronJob) error {
if err != nil { if err != nil {
return fmt.Errorf("could not patch logical backup job: %v", err) return fmt.Errorf("could not patch logical backup job: %v", err)
} }
c.LogicalBackupJob = cronJob
return nil return nil
} }
func (c *Cluster) deleteLogicalBackupJob() error { func (c *Cluster) deleteLogicalBackupJob() error {
if c.LogicalBackupJob == nil {
return nil
}
c.logger.Info("removing the logical backup job") c.logger.Info("removing the logical backup job")
err := c.KubeClient.CronJobsGetter.CronJobs(c.Namespace).Delete(context.TODO(), c.getLogicalBackupJobName(), c.deleteOptions) err := c.KubeClient.CronJobsGetter.CronJobs(c.LogicalBackupJob.Namespace).Delete(context.TODO(), c.getLogicalBackupJobName(), c.deleteOptions)
if k8sutil.ResourceNotFound(err) { if k8sutil.ResourceNotFound(err) {
c.logger.Debugf("logical backup cron job %q has already been deleted", c.getLogicalBackupJobName()) c.logger.Debugf("logical backup cron job %q has already been deleted", c.getLogicalBackupJobName())
} else if err != nil { } else if err != nil {
return err return err
} }
c.LogicalBackupJob = nil
return nil return nil
} }
@ -628,7 +834,12 @@ func (c *Cluster) GetStatefulSet() *appsv1.StatefulSet {
return c.Statefulset return c.Statefulset
} }
// GetPodDisruptionBudget returns cluster's kubernetes PodDisruptionBudget // GetPrimaryPodDisruptionBudget returns cluster's primary kubernetes PodDisruptionBudget
func (c *Cluster) GetPodDisruptionBudget() *policyv1.PodDisruptionBudget { func (c *Cluster) GetPrimaryPodDisruptionBudget() *policyv1.PodDisruptionBudget {
return c.PodDisruptionBudget return c.PrimaryPodDisruptionBudget
}
// GetCriticalOpPodDisruptionBudget returns cluster's kubernetes PodDisruptionBudget for critical operations
func (c *Cluster) GetCriticalOpPodDisruptionBudget() *policyv1.PodDisruptionBudget {
return c.CriticalOpPodDisruptionBudget
} }


@ -29,41 +29,48 @@ func (c *Cluster) createStreams(appId string) (*zalandov1.FabricEventStream, err
return streamCRD, nil return streamCRD, nil
} }
func (c *Cluster) updateStreams(newEventStreams *zalandov1.FabricEventStream) error { func (c *Cluster) updateStreams(newEventStreams *zalandov1.FabricEventStream) (patchedStream *zalandov1.FabricEventStream, err error) {
c.setProcessName("updating event streams") c.setProcessName("updating event streams")
patch, err := json.Marshal(newEventStreams) patch, err := json.Marshal(newEventStreams)
if err != nil { if err != nil {
return fmt.Errorf("could not marshal new event stream CRD %q: %v", newEventStreams.Name, err) return nil, fmt.Errorf("could not marshal new event stream CRD %q: %v", newEventStreams.Name, err)
} }
if _, err := c.KubeClient.FabricEventStreams(newEventStreams.Namespace).Patch( if patchedStream, err = c.KubeClient.FabricEventStreams(newEventStreams.Namespace).Patch(
context.TODO(), newEventStreams.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil { context.TODO(), newEventStreams.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
return err return nil, err
} }
return patchedStream, nil
}
func (c *Cluster) deleteStream(appId string) error {
c.setProcessName("deleting event stream")
c.logger.Debugf("deleting event stream with applicationId %s", appId)
err := c.KubeClient.FabricEventStreams(c.Streams[appId].Namespace).Delete(context.TODO(), c.Streams[appId].Name, metav1.DeleteOptions{})
if err != nil {
return fmt.Errorf("could not delete event stream %q with applicationId %s: %v", c.Streams[appId].Name, appId, err)
}
c.logger.Infof("event stream %q with applicationId %s has been successfully deleted", c.Streams[appId].Name, appId)
delete(c.Streams, appId)
return nil return nil
} }
func (c *Cluster) deleteStreams() error { func (c *Cluster) deleteStreams() error {
c.setProcessName("deleting event streams")
// check if stream CRD is installed before trying a delete // check if stream CRD is installed before trying a delete
_, err := c.KubeClient.CustomResourceDefinitions().Get(context.TODO(), constants.EventStreamCRDName, metav1.GetOptions{}) _, err := c.KubeClient.CustomResourceDefinitions().Get(context.TODO(), constants.EventStreamCRDName, metav1.GetOptions{})
if k8sutil.ResourceNotFound(err) { if k8sutil.ResourceNotFound(err) {
return nil return nil
} }
c.setProcessName("deleting event streams")
errors := make([]string, 0) errors := make([]string, 0)
listOptions := metav1.ListOptions{
LabelSelector: c.labelsSet(true).String(), for appId := range c.Streams {
} err := c.deleteStream(appId)
streams, err := c.KubeClient.FabricEventStreams(c.Namespace).List(context.TODO(), listOptions)
if err != nil { if err != nil {
return fmt.Errorf("could not list of FabricEventStreams: %v", err) errors = append(errors, fmt.Sprintf("%v", err))
}
for _, stream := range streams.Items {
err = c.KubeClient.FabricEventStreams(stream.Namespace).Delete(context.TODO(), stream.Name, metav1.DeleteOptions{})
if err != nil {
errors = append(errors, fmt.Sprintf("could not delete event stream %q: %v", stream.Name, err))
} }
} }
@ -74,7 +81,7 @@ func (c *Cluster) deleteStreams() error {
return nil return nil
} }
func gatherApplicationIds(streams []acidv1.Stream) []string { func getDistinctApplicationIds(streams []acidv1.Stream) []string {
appIds := make([]string, 0) appIds := make([]string, 0)
for _, stream := range streams { for _, stream := range streams {
if !util.SliceContains(appIds, stream.ApplicationId) { if !util.SliceContains(appIds, stream.ApplicationId) {
@ -85,9 +92,10 @@ func gatherApplicationIds(streams []acidv1.Stream) []string {
return appIds return appIds
} }
func (c *Cluster) syncPublication(publication, dbName string, tables map[string]acidv1.StreamTable) error { func (c *Cluster) syncPublication(dbName string, databaseSlotsList map[string]zalandov1.Slot, slotsToSync *map[string]map[string]string) error {
createPublications := make(map[string]string) createPublications := make(map[string]string)
alterPublications := make(map[string]string) alterPublications := make(map[string]string)
deletePublications := []string{}
defer func() { defer func() {
if err := c.closeDbConn(); err != nil { if err := c.closeDbConn(); err != nil {
@ -97,7 +105,7 @@ func (c *Cluster) syncPublication(publication, dbName string, tables map[string]
// check for existing publications // check for existing publications
if err := c.initDbConnWithName(dbName); err != nil { if err := c.initDbConnWithName(dbName); err != nil {
return fmt.Errorf("could not init database connection") return fmt.Errorf("could not init database connection: %v", err)
} }
currentPublications, err := c.getPublications() currentPublications, err := c.getPublications()
@ -105,9 +113,11 @@ func (c *Cluster) syncPublication(publication, dbName string, tables map[string]
return fmt.Errorf("could not get current publications: %v", err) return fmt.Errorf("could not get current publications: %v", err)
} }
tableNames := make([]string, len(tables)) for slotName, slotAndPublication := range databaseSlotsList {
newTables := slotAndPublication.Publication
tableNames := make([]string, len(newTables))
i := 0 i := 0
for t := range tables { for t := range newTables {
tableName, schemaName := getTableSchema(t) tableName, schemaName := getTableSchema(t)
tableNames[i] = fmt.Sprintf("%s.%s", schemaName, tableName) tableNames[i] = fmt.Sprintf("%s.%s", schemaName, tableName)
i++ i++
@ -115,26 +125,58 @@ func (c *Cluster) syncPublication(publication, dbName string, tables map[string]
sort.Strings(tableNames) sort.Strings(tableNames)
tableList := strings.Join(tableNames, ", ") tableList := strings.Join(tableNames, ", ")
currentTables, exists := currentPublications[publication] currentTables, exists := currentPublications[slotName]
// if newTables is empty it means that its definition was removed from the streams section
// but when a slot is defined in the manifest we should sync publications, too
// by reusing the current tables we make sure the publication is not dropped
if len(newTables) == 0 {
tableList = currentTables
}
if !exists { if !exists {
createPublications[publication] = tableList createPublications[slotName] = tableList
} else if currentTables != tableList { } else if currentTables != tableList {
alterPublications[publication] = tableList alterPublications[slotName] = tableList
} else {
(*slotsToSync)[slotName] = slotAndPublication.Slot
}
} }
if len(createPublications)+len(alterPublications) == 0 { // check if there is any deletion
for slotName := range currentPublications {
if _, exists := databaseSlotsList[slotName]; !exists {
deletePublications = append(deletePublications, slotName)
}
}
if len(createPublications)+len(alterPublications)+len(deletePublications) == 0 {
return nil return nil
} }
errors := make([]string, 0)
for publicationName, tables := range createPublications { for publicationName, tables := range createPublications {
if err = c.executeCreatePublication(publicationName, tables); err != nil { if err = c.executeCreatePublication(publicationName, tables); err != nil {
return fmt.Errorf("creation of publication %q failed: %v", publicationName, err) errors = append(errors, fmt.Sprintf("creation of publication %q failed: %v", publicationName, err))
continue
} }
(*slotsToSync)[publicationName] = databaseSlotsList[publicationName].Slot
} }
for publicationName, tables := range alterPublications { for publicationName, tables := range alterPublications {
if err = c.executeAlterPublication(publicationName, tables); err != nil { if err = c.executeAlterPublication(publicationName, tables); err != nil {
return fmt.Errorf("update of publication %q failed: %v", publicationName, err) errors = append(errors, fmt.Sprintf("update of publication %q failed: %v", publicationName, err))
continue
} }
(*slotsToSync)[publicationName] = databaseSlotsList[publicationName].Slot
}
for _, publicationName := range deletePublications {
if err = c.executeDropPublication(publicationName); err != nil {
errors = append(errors, fmt.Sprintf("deletion of publication %q failed: %v", publicationName, err))
continue
}
(*slotsToSync)[publicationName] = nil
}
if len(errors) > 0 {
return fmt.Errorf("%v", strings.Join(errors, `', '`))
} }
return nil return nil
@ -142,16 +184,25 @@ func (c *Cluster) syncPublication(publication, dbName string, tables map[string]
func (c *Cluster) generateFabricEventStream(appId string) *zalandov1.FabricEventStream { func (c *Cluster) generateFabricEventStream(appId string) *zalandov1.FabricEventStream {
eventStreams := make([]zalandov1.EventStream, 0) eventStreams := make([]zalandov1.EventStream, 0)
resourceAnnotations := map[string]string{}
var err, err2 error
for _, stream := range c.Spec.Streams { for _, stream := range c.Spec.Streams {
if stream.ApplicationId != appId { if stream.ApplicationId != appId {
continue continue
} }
err = setResourceAnnotation(&resourceAnnotations, stream.CPU, constants.EventStreamCpuAnnotationKey)
err2 = setResourceAnnotation(&resourceAnnotations, stream.Memory, constants.EventStreamMemoryAnnotationKey)
if err != nil || err2 != nil {
c.logger.Warningf("could not set resource annotation for event stream: %v", err)
}
for tableName, table := range stream.Tables { for tableName, table := range stream.Tables {
streamSource := c.getEventStreamSource(stream, tableName, table.IdColumn) streamSource := c.getEventStreamSource(stream, tableName, table.IdColumn)
streamFlow := getEventStreamFlow(stream, table.PayloadColumn) streamFlow := getEventStreamFlow(table.PayloadColumn)
streamSink := getEventStreamSink(stream, table.EventType) streamSink := getEventStreamSink(stream, table.EventType)
streamRecovery := getEventStreamRecovery(stream, table.RecoveryEventType, table.EventType) streamRecovery := getEventStreamRecovery(stream, table.RecoveryEventType, table.EventType, table.IgnoreRecovery)
eventStreams = append(eventStreams, zalandov1.EventStream{ eventStreams = append(eventStreams, zalandov1.EventStream{
EventStreamFlow: streamFlow, EventStreamFlow: streamFlow,
@ -171,8 +222,7 @@ func (c *Cluster) generateFabricEventStream(appId string) *zalandov1.FabricEvent
Name: fmt.Sprintf("%s-%s", c.Name, strings.ToLower(util.RandomPassword(5))), Name: fmt.Sprintf("%s-%s", c.Name, strings.ToLower(util.RandomPassword(5))),
Namespace: c.Namespace, Namespace: c.Namespace,
Labels: c.labelsSet(true), Labels: c.labelsSet(true),
Annotations: c.AnnotationsToPropagate(c.annotationsSet(nil)), Annotations: c.AnnotationsToPropagate(c.annotationsSet(resourceAnnotations)),
// make cluster StatefulSet the owner (like with connection pooler objects)
OwnerReferences: c.ownerReferences(), OwnerReferences: c.ownerReferences(),
}, },
Spec: zalandov1.FabricEventStreamSpec{ Spec: zalandov1.FabricEventStreamSpec{
@ -182,6 +232,27 @@ func (c *Cluster) generateFabricEventStream(appId string) *zalandov1.FabricEvent
} }
} }
func setResourceAnnotation(annotations *map[string]string, resource *string, key string) error {
var (
isSmaller bool
err error
)
if resource != nil {
currentValue, exists := (*annotations)[key]
if exists {
isSmaller, err = util.IsSmallerQuantity(currentValue, *resource)
if err != nil {
return fmt.Errorf("could not compare resource in %q annotation: %v", key, err)
}
}
if isSmaller || !exists {
(*annotations)[key] = *resource
}
}
return nil
}
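As written above, setResourceAnnotation only overwrites an existing annotation when the current value is smaller than the new one, so the largest value requested across streams sharing an applicationId wins. A hedged usage sketch (quantities are invented for illustration):

annotations := map[string]string{}
// first stream requests 250m CPU -> the annotation is set
_ = setResourceAnnotation(&annotations, k8sutil.StringToPointer("250m"), constants.EventStreamCpuAnnotationKey)
// second stream requests 100m CPU -> the existing 250m is not smaller, so it is kept
_ = setResourceAnnotation(&annotations, k8sutil.StringToPointer("100m"), constants.EventStreamCpuAnnotationKey)
// annotations[constants.EventStreamCpuAnnotationKey] == "250m"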
func (c *Cluster) getEventStreamSource(stream acidv1.Stream, tableName string, idColumn *string) zalandov1.EventStreamSource { func (c *Cluster) getEventStreamSource(stream acidv1.Stream, tableName string, idColumn *string) zalandov1.EventStreamSource {
table, schema := getTableSchema(tableName) table, schema := getTableSchema(tableName)
streamFilter := stream.Filter[tableName] streamFilter := stream.Filter[tableName]
@ -197,7 +268,7 @@ func (c *Cluster) getEventStreamSource(stream acidv1.Stream, tableName string, i
} }
} }
func getEventStreamFlow(stream acidv1.Stream, payloadColumn *string) zalandov1.EventStreamFlow { func getEventStreamFlow(payloadColumn *string) zalandov1.EventStreamFlow {
return zalandov1.EventStreamFlow{ return zalandov1.EventStreamFlow{
Type: constants.EventStreamFlowPgGenericType, Type: constants.EventStreamFlowPgGenericType,
PayloadColumn: payloadColumn, PayloadColumn: payloadColumn,
@ -212,7 +283,7 @@ func getEventStreamSink(stream acidv1.Stream, eventType string) zalandov1.EventS
} }
} }
func getEventStreamRecovery(stream acidv1.Stream, recoveryEventType, eventType string) zalandov1.EventStreamRecovery { func getEventStreamRecovery(stream acidv1.Stream, recoveryEventType, eventType string, ignoreRecovery *bool) zalandov1.EventStreamRecovery {
if (stream.EnableRecovery != nil && !*stream.EnableRecovery) || if (stream.EnableRecovery != nil && !*stream.EnableRecovery) ||
(stream.EnableRecovery == nil && recoveryEventType == "") { (stream.EnableRecovery == nil && recoveryEventType == "") {
return zalandov1.EventStreamRecovery{ return zalandov1.EventStreamRecovery{
@ -220,6 +291,12 @@ func getEventStreamRecovery(stream acidv1.Stream, recoveryEventType, eventType s
} }
} }
if ignoreRecovery != nil && *ignoreRecovery {
return zalandov1.EventStreamRecovery{
Type: constants.EventStreamRecoveryIgnoreType,
}
}
if stream.EnableRecovery != nil && *stream.EnableRecovery && recoveryEventType == "" { if stream.EnableRecovery != nil && *stream.EnableRecovery && recoveryEventType == "" {
recoveryEventType = fmt.Sprintf("%s-%s", eventType, constants.EventStreamRecoverySuffix) recoveryEventType = fmt.Sprintf("%s-%s", eventType, constants.EventStreamRecoverySuffix)
} }
@ -275,59 +352,84 @@ func (c *Cluster) syncStreams() error {
_, err := c.KubeClient.CustomResourceDefinitions().Get(context.TODO(), constants.EventStreamCRDName, metav1.GetOptions{}) _, err := c.KubeClient.CustomResourceDefinitions().Get(context.TODO(), constants.EventStreamCRDName, metav1.GetOptions{})
if k8sutil.ResourceNotFound(err) { if k8sutil.ResourceNotFound(err) {
c.logger.Debugf("event stream CRD not installed, skipping") c.logger.Debug("event stream CRD not installed, skipping")
return nil return nil
} }
slots := make(map[string]map[string]string) // create map with every database and empty slot definition
slotsToSync := make(map[string]map[string]string) // we need it to detect removal of streams from databases
publications := make(map[string]map[string]acidv1.StreamTable) if err := c.initDbConn(); err != nil {
requiredPatroniConfig := c.Spec.Patroni return fmt.Errorf("could not init database connection")
}
if len(requiredPatroniConfig.Slots) > 0 { defer func() {
slots = requiredPatroniConfig.Slots if err := c.closeDbConn(); err != nil {
c.logger.Errorf("could not close database connection: %v", err)
}
}()
listDatabases, err := c.getDatabases()
if err != nil {
return fmt.Errorf("could not get list of databases: %v", err)
}
databaseSlots := make(map[string]map[string]zalandov1.Slot)
for dbName := range listDatabases {
if dbName != "template0" && dbName != "template1" {
databaseSlots[dbName] = map[string]zalandov1.Slot{}
}
} }
// gather list of required slots and publications // need to take explicitly defined slots into account when syncing Patroni config
slotsToSync := make(map[string]map[string]string)
requiredPatroniConfig := c.Spec.Patroni
if len(requiredPatroniConfig.Slots) > 0 {
for slotName, slotConfig := range requiredPatroniConfig.Slots {
slotsToSync[slotName] = slotConfig
if _, exists := databaseSlots[slotConfig["database"]]; exists {
databaseSlots[slotConfig["database"]][slotName] = zalandov1.Slot{
Slot: slotConfig,
Publication: make(map[string]acidv1.StreamTable),
}
}
}
}
// get list of required slots and publications, group by database
for _, stream := range c.Spec.Streams { for _, stream := range c.Spec.Streams {
if _, exists := databaseSlots[stream.Database]; !exists {
c.logger.Warningf("database %q does not exist in the cluster", stream.Database)
continue
}
slot := map[string]string{ slot := map[string]string{
"database": stream.Database, "database": stream.Database,
"plugin": constants.EventStreamSourcePluginType, "plugin": constants.EventStreamSourcePluginType,
"type": "logical", "type": "logical",
} }
slotName := getSlotName(stream.Database, stream.ApplicationId) slotName := getSlotName(stream.Database, stream.ApplicationId)
if _, exists := slots[slotName]; !exists { slotAndPublication, exists := databaseSlots[stream.Database][slotName]
slots[slotName] = slot if !exists {
publications[slotName] = stream.Tables databaseSlots[stream.Database][slotName] = zalandov1.Slot{
Slot: slot,
Publication: stream.Tables,
}
} else { } else {
streamTables := publications[slotName] streamTables := slotAndPublication.Publication
for tableName, table := range stream.Tables { for tableName, table := range stream.Tables {
if _, exists := streamTables[tableName]; !exists { if _, exists := streamTables[tableName]; !exists {
streamTables[tableName] = table streamTables[tableName] = table
} }
} }
publications[slotName] = streamTables slotAndPublication.Publication = streamTables
databaseSlots[stream.Database][slotName] = slotAndPublication
} }
} }
// create publications to each created slot // sync publication in a database
c.logger.Debug("syncing database publications") c.logger.Debug("syncing database publications")
for publication, tables := range publications { for dbName, databaseSlotsList := range databaseSlots {
// but first check for existing publications err := c.syncPublication(dbName, databaseSlotsList, &slotsToSync)
dbName := slots[publication]["database"]
err = c.syncPublication(publication, dbName, tables)
if err != nil { if err != nil {
c.logger.Warningf("could not sync publication %q in database %q: %v", publication, dbName, err) c.logger.Warningf("could not sync all publications in database %q: %v", dbName, err)
continue continue
} }
slotsToSync[publication] = slots[publication]
}
// no slots to sync = no streams defined or publications created
if len(slotsToSync) > 0 {
requiredPatroniConfig.Slots = slotsToSync
} else {
return nil
} }
c.logger.Debug("syncing logical replication slots") c.logger.Debug("syncing logical replication slots")
@ -337,70 +439,145 @@ func (c *Cluster) syncStreams() error {
} }
// sync logical replication slots in Patroni config // sync logical replication slots in Patroni config
requiredPatroniConfig.Slots = slotsToSync
configPatched, _, _, err := c.syncPatroniConfig(pods, requiredPatroniConfig, nil) configPatched, _, _, err := c.syncPatroniConfig(pods, requiredPatroniConfig, nil)
if err != nil { if err != nil {
c.logger.Warningf("Patroni config updated? %v - errors during config sync: %v", configPatched, err) c.logger.Warningf("Patroni config updated? %v - errors during config sync: %v", configPatched, err)
} }
// finally sync stream CRDs // finally sync stream CRDs
err = c.createOrUpdateStreams() // get distinct application IDs from streams section
if err != nil { // there will be a separate event stream resource for each ID
return err appIds := getDistinctApplicationIds(c.Spec.Streams)
for _, appId := range appIds {
if hasSlotsInSync(appId, databaseSlots, slotsToSync) {
if err = c.syncStream(appId); err != nil {
c.logger.Warningf("could not sync event streams with applicationId %s: %v", appId, err)
}
} else {
c.logger.Warningf("database replication slots %#v for streams with applicationId %s not in sync, skipping event stream sync", slotsToSync, appId)
}
}
// check if there is any deletion
if err = c.cleanupRemovedStreams(appIds); err != nil {
return fmt.Errorf("%v", err)
} }
return nil return nil
} }
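For orientation, a hedged sketch of the intermediate structures syncStreams builds before calling syncPublication and syncPatroniConfig; the database, slot and table names below are invented for illustration:

// databaseSlots groups the required slot plus its publication tables per database.
databaseSlots := map[string]map[string]zalandov1.Slot{
	"foo": {
		"fes_foo_test_app": {
			Slot: map[string]string{
				"database": "foo",
				"plugin":   constants.EventStreamSourcePluginType,
				"type":     "logical",
			},
			Publication: map[string]acidv1.StreamTable{
				"data.bar": {EventType: "stream-type-a"},
			},
		},
	},
}
// slotsToSync is filled by syncPublication and ends up in the Patroni config.
slotsToSync := map[string]map[string]string{
	"fes_foo_test_app": databaseSlots["foo"]["fes_foo_test_app"].Slot,
}
_ = slotsToSync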
func (c *Cluster) createOrUpdateStreams() error { func hasSlotsInSync(appId string, databaseSlots map[string]map[string]zalandov1.Slot, slotsToSync map[string]map[string]string) bool {
allSlotsInSync := true
// fetch different application IDs from streams section for dbName, slots := range databaseSlots {
// there will be a separate event stream resource for each ID for slotName := range slots {
appIds := gatherApplicationIds(c.Spec.Streams) if slotName == getSlotName(dbName, appId) {
if slot, exists := slotsToSync[slotName]; !exists || slot == nil {
// list all existing stream CRDs allSlotsInSync = false
listOptions := metav1.ListOptions{
LabelSelector: c.labelsSet(true).String(),
}
streams, err := c.KubeClient.FabricEventStreams(c.Namespace).List(context.TODO(), listOptions)
if err != nil {
return fmt.Errorf("could not list of FabricEventStreams: %v", err)
}
for _, appId := range appIds {
streamExists := false
// update stream when it exists and EventStreams array differs
for _, stream := range streams.Items {
if appId == stream.Spec.ApplicationId {
streamExists = true
desiredStreams := c.generateFabricEventStream(appId)
if match, reason := sameStreams(stream.Spec.EventStreams, desiredStreams.Spec.EventStreams); !match {
c.logger.Debugf("updating event streams: %s", reason)
desiredStreams.ObjectMeta = stream.ObjectMeta
err = c.updateStreams(desiredStreams)
if err != nil {
return fmt.Errorf("failed updating event stream %s: %v", stream.Name, err)
}
c.logger.Infof("event stream %q has been successfully updated", stream.Name)
}
continue continue
} }
} }
}
}
return allSlotsInSync
}
func (c *Cluster) syncStream(appId string) error {
var (
streams *zalandov1.FabricEventStreamList
err error
)
c.setProcessName("syncing stream with applicationId %s", appId)
c.logger.Debugf("syncing stream with applicationId %s", appId)
listOptions := metav1.ListOptions{
LabelSelector: c.labelsSet(false).String(),
}
streams, err = c.KubeClient.FabricEventStreams(c.Namespace).List(context.TODO(), listOptions)
if err != nil {
return fmt.Errorf("could not list of FabricEventStreams for applicationId %s: %v", appId, err)
}
streamExists := false
for _, stream := range streams.Items {
if stream.Spec.ApplicationId != appId {
continue
}
streamExists = true
c.Streams[appId] = &stream
desiredStreams := c.generateFabricEventStream(appId)
if !reflect.DeepEqual(stream.ObjectMeta.OwnerReferences, desiredStreams.ObjectMeta.OwnerReferences) {
c.logger.Infof("owner references of event streams with applicationId %s do not match the current ones", appId)
stream.ObjectMeta.OwnerReferences = desiredStreams.ObjectMeta.OwnerReferences
c.setProcessName("updating event streams with applicationId %s", appId)
updatedStream, err := c.KubeClient.FabricEventStreams(stream.Namespace).Update(context.TODO(), &stream, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("could not update event streams with applicationId %s: %v", appId, err)
}
c.Streams[appId] = updatedStream
}
if match, reason := c.compareStreams(&stream, desiredStreams); !match {
c.logger.Infof("updating event streams with applicationId %s: %s", appId, reason)
// make sure to keep the old name with randomly generated suffix
desiredStreams.ObjectMeta.Name = stream.ObjectMeta.Name
updatedStream, err := c.updateStreams(desiredStreams)
if err != nil {
return fmt.Errorf("failed updating event streams %s with applicationId %s: %v", stream.Name, appId, err)
}
c.Streams[appId] = updatedStream
c.logger.Infof("event streams %q with applicationId %s have been successfully updated", updatedStream.Name, appId)
}
break
}
if !streamExists { if !streamExists {
c.logger.Infof("event streams with applicationId %s do not exist, create it", appId) c.logger.Infof("event streams with applicationId %s do not exist, create it", appId)
streamCRD, err := c.createStreams(appId) createdStream, err := c.createStreams(appId)
if err != nil { if err != nil {
return fmt.Errorf("failed creating event streams with applicationId %s: %v", appId, err) return fmt.Errorf("failed creating event streams with applicationId %s: %v", appId, err)
} }
c.logger.Infof("event streams %q have been successfully created", streamCRD.Name) c.logger.Infof("event streams %q have been successfully created", createdStream.Name)
} c.Streams[appId] = createdStream
} }
return nil return nil
} }
func sameStreams(curEventStreams, newEventStreams []zalandov1.EventStream) (match bool, reason string) { func (c *Cluster) compareStreams(curEventStreams, newEventStreams *zalandov1.FabricEventStream) (match bool, reason string) {
reasons := make([]string, 0)
desiredAnnotations := make(map[string]string)
match = true
// the stream operator can add extra annotations, so include current annotations in the desired annotations
for curKey, curValue := range curEventStreams.Annotations {
if _, exists := desiredAnnotations[curKey]; !exists {
desiredAnnotations[curKey] = curValue
}
}
// add or override annotations if cpu and memory values were changed
for newKey, newValue := range newEventStreams.Annotations {
desiredAnnotations[newKey] = newValue
}
if changed, reason := c.compareAnnotations(curEventStreams.ObjectMeta.Annotations, desiredAnnotations, nil); changed {
match = false
reasons = append(reasons, fmt.Sprintf("new streams annotations do not match: %s", reason))
}
if !reflect.DeepEqual(curEventStreams.ObjectMeta.Labels, newEventStreams.ObjectMeta.Labels) {
match = false
reasons = append(reasons, "new streams labels do not match the current ones")
}
if changed, reason := sameEventStreams(curEventStreams.Spec.EventStreams, newEventStreams.Spec.EventStreams); !changed {
match = false
reasons = append(reasons, fmt.Sprintf("new streams EventStreams array does not match : %s", reason))
}
return match, strings.Join(reasons, ", ")
}
func sameEventStreams(curEventStreams, newEventStreams []zalandov1.EventStream) (match bool, reason string) {
if len(newEventStreams) != len(curEventStreams) { if len(newEventStreams) != len(curEventStreams) {
return false, "number of defined streams is different" return false, "number of defined streams is different"
} }
@ -424,3 +601,22 @@ func sameStreams(curEventStreams, newEventStreams []zalandov1.EventStream) (matc
return true, "" return true, ""
} }
func (c *Cluster) cleanupRemovedStreams(appIds []string) error {
errors := make([]string, 0)
for appId := range c.Streams {
if !util.SliceContains(appIds, appId) {
c.logger.Infof("event streams with applicationId %s do not exist in the manifest, delete it", appId)
err := c.deleteStream(appId)
if err != nil {
errors = append(errors, fmt.Sprintf("failed deleting event streams with applicationId %s: %v", appId, err))
}
}
}
if len(errors) > 0 {
return fmt.Errorf("could not delete all removed event streams: %v", strings.Join(errors, `', '`))
}
return nil
}


@ -2,6 +2,7 @@ package cluster
import ( import (
"fmt" "fmt"
"reflect"
"strings" "strings"
"context" "context"
@ -18,29 +19,25 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes/fake"
) )
func newFakeK8sStreamClient() (k8sutil.KubernetesClient, *fake.Clientset) {
zalandoClientSet := fakezalandov1.NewSimpleClientset()
clientSet := fake.NewSimpleClientset()
return k8sutil.KubernetesClient{
FabricEventStreamsGetter: zalandoClientSet.ZalandoV1(),
PostgresqlsGetter: zalandoClientSet.AcidV1(),
PodsGetter: clientSet.CoreV1(),
StatefulSetsGetter: clientSet.AppsV1(),
}, clientSet
}
var ( var (
clusterName string = "acid-test-cluster" clusterName string = "acid-stream-cluster"
namespace string = "default" namespace string = "default"
appId string = "test-app" appId string = "test-app"
dbName string = "foo" dbName string = "foo"
fesUser string = fmt.Sprintf("%s%s", constants.EventStreamSourceSlotPrefix, constants.UserRoleNameSuffix) fesUser string = fmt.Sprintf("%s%s", constants.EventStreamSourceSlotPrefix, constants.UserRoleNameSuffix)
slotName string = fmt.Sprintf("%s_%s_%s", constants.EventStreamSourceSlotPrefix, dbName, strings.Replace(appId, "-", "_", -1)) slotName string = fmt.Sprintf("%s_%s_%s", constants.EventStreamSourceSlotPrefix, dbName, strings.Replace(appId, "-", "_", -1))
zalandoClientSet = fakezalandov1.NewSimpleClientset()
client = k8sutil.KubernetesClient{
FabricEventStreamsGetter: zalandoClientSet.ZalandoV1(),
PostgresqlsGetter: zalandoClientSet.AcidV1(),
PodsGetter: clientSet.CoreV1(),
StatefulSetsGetter: clientSet.AppsV1(),
}
pg = acidv1.Postgresql{ pg = acidv1.Postgresql{
TypeMeta: metav1.TypeMeta{ TypeMeta: metav1.TypeMeta{
Kind: "Postgresql", Kind: "Postgresql",
@ -59,21 +56,26 @@ var (
ApplicationId: appId, ApplicationId: appId,
Database: "foo", Database: "foo",
Tables: map[string]acidv1.StreamTable{ Tables: map[string]acidv1.StreamTable{
"data.bar": acidv1.StreamTable{ "data.bar": {
EventType: "stream-type-a", EventType: "stream-type-a",
IdColumn: k8sutil.StringToPointer("b_id"), IdColumn: k8sutil.StringToPointer("b_id"),
PayloadColumn: k8sutil.StringToPointer("b_payload"), PayloadColumn: k8sutil.StringToPointer("b_payload"),
}, },
"data.foobar": acidv1.StreamTable{ "data.foobar": {
EventType: "stream-type-b", EventType: "stream-type-b",
RecoveryEventType: "stream-type-b-dlq", RecoveryEventType: "stream-type-b-dlq",
}, },
"data.foofoobar": {
EventType: "stream-type-c",
IgnoreRecovery: util.True(),
},
}, },
EnableRecovery: util.True(), EnableRecovery: util.True(),
Filter: map[string]*string{ Filter: map[string]*string{
"data.bar": k8sutil.StringToPointer("[?(@.source.txId > 500 && @.source.lsn > 123456)]"), "data.bar": k8sutil.StringToPointer("[?(@.source.txId > 500 && @.source.lsn > 123456)]"),
}, },
BatchSize: k8sutil.UInt32ToPointer(uint32(100)), BatchSize: k8sutil.UInt32ToPointer(uint32(100)),
CPU: k8sutil.StringToPointer("250m"),
}, },
}, },
TeamID: "acid", TeamID: "acid",
@ -91,8 +93,16 @@ var (
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-12345", clusterName), Name: fmt.Sprintf("%s-12345", clusterName),
Namespace: namespace, Namespace: namespace,
Annotations: map[string]string{
constants.EventStreamCpuAnnotationKey: "250m",
},
Labels: map[string]string{
"application": "spilo",
"cluster-name": clusterName,
"team": "acid",
},
OwnerReferences: []metav1.OwnerReference{ OwnerReferences: []metav1.OwnerReference{
metav1.OwnerReference{ {
APIVersion: "apps/v1", APIVersion: "apps/v1",
Kind: "StatefulSet", Kind: "StatefulSet",
Name: "acid-test-cluster", Name: "acid-test-cluster",
@ -103,7 +113,7 @@ var (
Spec: zalandov1.FabricEventStreamSpec{ Spec: zalandov1.FabricEventStreamSpec{
ApplicationId: appId, ApplicationId: appId,
EventStreams: []zalandov1.EventStream{ EventStreams: []zalandov1.EventStream{
zalandov1.EventStream{ {
EventStreamFlow: zalandov1.EventStreamFlow{ EventStreamFlow: zalandov1.EventStreamFlow{
PayloadColumn: k8sutil.StringToPointer("b_payload"), PayloadColumn: k8sutil.StringToPointer("b_payload"),
Type: constants.EventStreamFlowPgGenericType, Type: constants.EventStreamFlowPgGenericType,
@ -142,7 +152,7 @@ var (
Type: constants.EventStreamSourcePGType, Type: constants.EventStreamSourcePGType,
}, },
}, },
zalandov1.EventStream{ {
EventStreamFlow: zalandov1.EventStreamFlow{ EventStreamFlow: zalandov1.EventStreamFlow{
Type: constants.EventStreamFlowPgGenericType, Type: constants.EventStreamFlowPgGenericType,
}, },
@ -178,24 +188,42 @@ var (
Type: constants.EventStreamSourcePGType, Type: constants.EventStreamSourcePGType,
}, },
}, },
{
EventStreamFlow: zalandov1.EventStreamFlow{
Type: constants.EventStreamFlowPgGenericType,
},
EventStreamRecovery: zalandov1.EventStreamRecovery{
Type: constants.EventStreamRecoveryIgnoreType,
},
EventStreamSink: zalandov1.EventStreamSink{
EventType: "stream-type-c",
MaxBatchSize: k8sutil.UInt32ToPointer(uint32(100)),
Type: constants.EventStreamSinkNakadiType,
},
EventStreamSource: zalandov1.EventStreamSource{
Connection: zalandov1.Connection{
DBAuth: zalandov1.DBAuth{
Name: fmt.Sprintf("fes-user.%s.credentials.postgresql.acid.zalan.do", clusterName),
PasswordKey: "password",
Type: constants.EventStreamSourceAuthType,
UserKey: "username",
},
Url: fmt.Sprintf("jdbc:postgresql://%s.%s/foo?user=%s&ssl=true&sslmode=require", clusterName, namespace, fesUser),
SlotName: slotName,
PluginType: constants.EventStreamSourcePluginType,
},
Schema: "data",
EventStreamTable: zalandov1.EventStreamTable{
Name: "foofoobar",
},
Type: constants.EventStreamSourcePGType,
},
},
}, },
}, },
} }
)
func TestGatherApplicationIds(t *testing.T) { cluster = New(
testAppIds := []string{appId}
appIds := gatherApplicationIds(pg.Spec.Streams)
if !util.IsEqualIgnoreOrder(testAppIds, appIds) {
t.Errorf("gathered applicationIds do not match, expected %#v, got %#v", testAppIds, appIds)
}
}
func TestGenerateFabricEventStream(t *testing.T) {
client, _ := newFakeK8sStreamClient()
var cluster = New(
Config{ Config{
OpConfig: config.Config{ OpConfig: config.Config{
Auth: config.Auth{ Auth: config.Auth{
@ -213,60 +241,335 @@ func TestGenerateFabricEventStream(t *testing.T) {
}, },
}, },
}, client, pg, logger, eventRecorder) }, client, pg, logger, eventRecorder)
)
func TestGatherApplicationIds(t *testing.T) {
testAppIds := []string{appId}
appIds := getDistinctApplicationIds(pg.Spec.Streams)
if !util.IsEqualIgnoreOrder(testAppIds, appIds) {
t.Errorf("list of applicationIds does not match, expected %#v, got %#v", testAppIds, appIds)
}
}
func TestHasSlotsInSync(t *testing.T) {
cluster.Name = clusterName cluster.Name = clusterName
cluster.Namespace = namespace cluster.Namespace = namespace
// create statefulset to have ownerReference for streams appId2 := fmt.Sprintf("%s-2", appId)
_, err := cluster.createStatefulSet() dbNotExists := "dbnotexists"
assert.NoError(t, err) slotNotExists := fmt.Sprintf("%s_%s_%s", constants.EventStreamSourceSlotPrefix, dbNotExists, strings.Replace(appId, "-", "_", -1))
slotNotExistsAppId2 := fmt.Sprintf("%s_%s_%s", constants.EventStreamSourceSlotPrefix, dbNotExists, strings.Replace(appId2, "-", "_", -1))
tests := []struct {
subTest string
applicationId string
expectedSlots map[string]map[string]zalandov1.Slot
actualSlots map[string]map[string]string
slotsInSync bool
}{
{
subTest: fmt.Sprintf("slots in sync for applicationId %s", appId),
applicationId: appId,
expectedSlots: map[string]map[string]zalandov1.Slot{
dbName: {
slotName: zalandov1.Slot{
Slot: map[string]string{
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test1": {
EventType: "stream-type-a",
},
},
},
},
},
actualSlots: map[string]map[string]string{
slotName: {
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
},
slotsInSync: true,
}, {
subTest: fmt.Sprintf("slots empty for applicationId %s after create or update of publication failed", appId),
applicationId: appId,
expectedSlots: map[string]map[string]zalandov1.Slot{
dbNotExists: {
slotNotExists: zalandov1.Slot{
Slot: map[string]string{
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test1": {
EventType: "stream-type-a",
},
},
},
},
},
actualSlots: map[string]map[string]string{},
slotsInSync: false,
}, {
subTest: fmt.Sprintf("slot with empty definition for applicationId %s after publication git deleted", appId),
applicationId: appId,
expectedSlots: map[string]map[string]zalandov1.Slot{
dbNotExists: {
slotNotExists: zalandov1.Slot{
Slot: map[string]string{
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test1": {
EventType: "stream-type-a",
},
},
},
},
},
actualSlots: map[string]map[string]string{
slotName: nil,
},
slotsInSync: false,
}, {
subTest: fmt.Sprintf("one slot not in sync for applicationId %s because database does not exist", appId),
applicationId: appId,
expectedSlots: map[string]map[string]zalandov1.Slot{
dbName: {
slotName: zalandov1.Slot{
Slot: map[string]string{
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test1": {
EventType: "stream-type-a",
},
},
},
},
dbNotExists: {
slotNotExists: zalandov1.Slot{
Slot: map[string]string{
"databases": "dbnotexists",
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test2": {
EventType: "stream-type-b",
},
},
},
},
},
actualSlots: map[string]map[string]string{
slotName: {
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
},
slotsInSync: false,
}, {
subTest: fmt.Sprintf("slots in sync for applicationId %s, but not for %s - checking %s should return true", appId, appId2, appId),
applicationId: appId,
expectedSlots: map[string]map[string]zalandov1.Slot{
dbName: {
slotName: zalandov1.Slot{
Slot: map[string]string{
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test1": {
EventType: "stream-type-a",
},
},
},
},
dbNotExists: {
slotNotExistsAppId2: zalandov1.Slot{
Slot: map[string]string{
"databases": "dbnotexists",
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test2": {
EventType: "stream-type-b",
},
},
},
},
},
actualSlots: map[string]map[string]string{
slotName: {
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
},
slotsInSync: true,
}, {
subTest: fmt.Sprintf("slots in sync for applicationId %s, but not for %s - checking %s should return false", appId, appId2, appId2),
applicationId: appId2,
expectedSlots: map[string]map[string]zalandov1.Slot{
dbName: {
slotName: zalandov1.Slot{
Slot: map[string]string{
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test1": {
EventType: "stream-type-a",
},
},
},
},
dbNotExists: {
slotNotExistsAppId2: zalandov1.Slot{
Slot: map[string]string{
"databases": "dbnotexists",
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
Publication: map[string]acidv1.StreamTable{
"test2": {
EventType: "stream-type-b",
},
},
},
},
},
actualSlots: map[string]map[string]string{
slotName: {
"databases": dbName,
"plugin": constants.EventStreamSourcePluginType,
"type": "logical",
},
},
slotsInSync: false,
},
}
for _, tt := range tests {
result := hasSlotsInSync(tt.applicationId, tt.expectedSlots, tt.actualSlots)
if result != tt.slotsInSync {
t.Errorf("%s: unexpected result for slot test of applicationId: %v, expected slots %#v, actual slots %#v", tt.subTest, tt.applicationId, tt.expectedSlots, tt.actualSlots)
}
}
}
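
For orientation, a sketch that is consistent with the cases above, assuming hasSlotsInSync only considers expected slots whose name ends in the underscored applicationId and treats a missing or nil actual slot as out of sync (illustration only, not the operator's code):
func hasSlotsInSyncSketch(appId string, expectedSlots map[string]map[string]zalandov1.Slot, actualSlots map[string]map[string]string) bool {
	suffix := "_" + strings.Replace(appId, "-", "_", -1)
	for _, slots := range expectedSlots {
		for slotName, expected := range slots {
			if !strings.HasSuffix(slotName, suffix) {
				continue // slot belongs to a different application id
			}
			actual, exists := actualSlots[slotName]
			if !exists || actual == nil || !reflect.DeepEqual(expected.Slot, actual) {
				return false
			}
		}
	}
	return true
}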
func TestGenerateFabricEventStream(t *testing.T) {
cluster.Name = clusterName
cluster.Namespace = namespace
// create the streams
- err = cluster.createOrUpdateStreams()
+ err := cluster.syncStream(appId)
assert.NoError(t, err)
// compare generated stream with expected stream
result := cluster.generateFabricEventStream(appId)
- if match, _ := sameStreams(result.Spec.EventStreams, fes.Spec.EventStreams); !match {
+ if match, _ := cluster.compareStreams(result, fes); !match {
t.Errorf("malformed FabricEventStream, expected %#v, got %#v", fes, result)
}
listOptions := metav1.ListOptions{
- LabelSelector: cluster.labelsSet(true).String(),
+ LabelSelector: cluster.labelsSet(false).String(),
}
streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
assert.NoError(t, err)
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only one", len(streams.Items))
- // check if there is only one stream
- if len(streams.Items) > 1 {
- t.Errorf("too many stream CRDs found: got %d, but expected only one", len(streams.Items))
- }
// compare stream returned from API with expected stream
- if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, fes.Spec.EventStreams); !match {
+ if match, _ := cluster.compareStreams(&streams.Items[0], fes); !match {
t.Errorf("malformed FabricEventStream returned from API, expected %#v, got %#v", fes, streams.Items[0])
}
// sync streams once again
- err = cluster.createOrUpdateStreams()
+ err = cluster.syncStream(appId)
assert.NoError(t, err)
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
assert.NoError(t, err)
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only one", len(streams.Items))
- // check if there is still only one stream
- if len(streams.Items) > 1 {
- t.Errorf("too many stream CRDs found after sync: got %d, but expected only one", len(streams.Items))
- }
// compare stream returned from API with generated stream
- if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, result.Spec.EventStreams); !match {
+ if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
t.Errorf("returned FabricEventStream differs from generated one, expected %#v, got %#v", result, streams.Items[0])
}
}
func newFabricEventStream(streams []zalandov1.EventStream, annotations map[string]string) *zalandov1.FabricEventStream {
return &zalandov1.FabricEventStream{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-12345", clusterName),
Annotations: annotations,
},
Spec: zalandov1.FabricEventStreamSpec{
ApplicationId: appId,
EventStreams: streams,
},
}
}
func TestSyncStreams(t *testing.T) {
newClusterName := fmt.Sprintf("%s-2", pg.Name)
pg.Name = newClusterName
var cluster = New(
Config{
OpConfig: config.Config{
PodManagementPolicy: "ordered_ready",
Resources: config.Resources{
ClusterLabels: map[string]string{"application": "spilo"},
ClusterNameLabel: "cluster-name",
DefaultCPURequest: "300m",
DefaultCPULimit: "300m",
DefaultMemoryRequest: "300Mi",
DefaultMemoryLimit: "300Mi",
PodRoleLabel: "spilo-role",
},
},
}, client, pg, logger, eventRecorder)
_, err := cluster.KubeClient.Postgresqls(namespace).Create(
context.TODO(), &pg, metav1.CreateOptions{})
assert.NoError(t, err)
// create the stream
err = cluster.syncStream(appId)
assert.NoError(t, err)
// sync the stream again
err = cluster.syncStream(appId)
assert.NoError(t, err)
// check that only one stream remains after sync
listOptions := metav1.ListOptions{
LabelSelector: cluster.labelsSet(false).String(),
}
streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
assert.NoError(t, err)
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only 1", len(streams.Items))
}
func TestSameStreams(t *testing.T) {
testName := "TestSameStreams"
annotationsA := map[string]string{constants.EventStreamMemoryAnnotationKey: "500Mi"}
annotationsB := map[string]string{constants.EventStreamMemoryAnnotationKey: "1Gi"}
stream1 := zalandov1.EventStream{
EventStreamFlow: zalandov1.EventStreamFlow{},
@ -311,67 +614,179 @@ func TestSameStreams(t *testing.T) {
tests := []struct {
subTest string
- streamsA []zalandov1.EventStream
- streamsB []zalandov1.EventStream
+ streamsA *zalandov1.FabricEventStream
+ streamsB *zalandov1.FabricEventStream
match bool
reason string
}{
{
subTest: "identical streams",
- streamsA: []zalandov1.EventStream{stream1, stream2},
- streamsB: []zalandov1.EventStream{stream1, stream2},
+ streamsA: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, annotationsA),
+ streamsB: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, annotationsA),
match: true,
reason: "",
},
{
subTest: "same streams different order",
- streamsA: []zalandov1.EventStream{stream1, stream2},
- streamsB: []zalandov1.EventStream{stream2, stream1},
+ streamsA: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
+ streamsB: newFabricEventStream([]zalandov1.EventStream{stream2, stream1}, nil),
match: true,
reason: "",
},
{
subTest: "same streams different order",
- streamsA: []zalandov1.EventStream{stream1},
- streamsB: []zalandov1.EventStream{stream1, stream2},
+ streamsA: newFabricEventStream([]zalandov1.EventStream{stream1}, nil),
+ streamsB: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
match: false,
- reason: "number of defined streams is different",
+ reason: "new streams EventStreams array does not match : number of defined streams is different",
},
{
subTest: "different number of streams",
- streamsA: []zalandov1.EventStream{stream1},
- streamsB: []zalandov1.EventStream{stream1, stream2},
+ streamsA: newFabricEventStream([]zalandov1.EventStream{stream1}, nil),
+ streamsB: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
match: false,
- reason: "number of defined streams is different",
+ reason: "new streams EventStreams array does not match : number of defined streams is different",
},
{
subTest: "event stream specs differ",
- streamsA: []zalandov1.EventStream{stream1, stream2},
- streamsB: fes.Spec.EventStreams,
+ streamsA: newFabricEventStream([]zalandov1.EventStream{stream1, stream2}, nil),
+ streamsB: fes,
match: false,
- reason: "number of defined streams is different",
+ reason: "new streams annotations do not match: Added \"fes.zalando.org/FES_CPU\" with value \"250m\"., new streams labels do not match the current ones, new streams EventStreams array does not match : number of defined streams is different",
},
{
subTest: "event stream recovery specs differ",
- streamsA: []zalandov1.EventStream{stream2},
- streamsB: []zalandov1.EventStream{stream3},
+ streamsA: newFabricEventStream([]zalandov1.EventStream{stream2}, nil),
+ streamsB: newFabricEventStream([]zalandov1.EventStream{stream3}, nil),
match: false,
- reason: "event stream specs differ",
+ reason: "new streams EventStreams array does not match : event stream specs differ",
},
{
subTest: "event stream with new annotations",
streamsA: newFabricEventStream([]zalandov1.EventStream{stream2}, nil),
streamsB: newFabricEventStream([]zalandov1.EventStream{stream2}, annotationsA),
match: false,
reason: "new streams annotations do not match: Added \"fes.zalando.org/FES_MEMORY\" with value \"500Mi\".",
},
{
subTest: "event stream annotations differ",
streamsA: newFabricEventStream([]zalandov1.EventStream{stream3}, annotationsA),
streamsB: newFabricEventStream([]zalandov1.EventStream{stream3}, annotationsB),
match: false,
reason: "new streams annotations do not match: \"fes.zalando.org/FES_MEMORY\" changed from \"500Mi\" to \"1Gi\".",
}, },
}
for _, tt := range tests {
- streamsMatch, matchReason := sameStreams(tt.streamsA, tt.streamsB)
- if streamsMatch != tt.match {
- t.Errorf("%s %s: unexpected match result when comparing streams: got %s, epxected %s",
+ streamsMatch, matchReason := cluster.compareStreams(tt.streamsA, tt.streamsB)
+ if streamsMatch != tt.match || matchReason != tt.reason {
+ t.Errorf("%s %s: unexpected match result when comparing streams: got %s, expected %s",
testName, tt.subTest, matchReason, tt.reason)
}
}
}
- func TestUpdateFabricEventStream(t *testing.T) {
- client, _ := newFakeK8sStreamClient()
+ func TestUpdateStreams(t *testing.T) {
+ pg.Name = fmt.Sprintf("%s-3", pg.Name)
var cluster = New(
Config{
OpConfig: config.Config{
PodManagementPolicy: "ordered_ready",
Resources: config.Resources{
ClusterLabels: map[string]string{"application": "spilo"},
ClusterNameLabel: "cluster-name",
DefaultCPURequest: "300m",
DefaultCPULimit: "300m",
DefaultMemoryRequest: "300Mi",
DefaultMemoryLimit: "300Mi",
EnableOwnerReferences: util.True(),
PodRoleLabel: "spilo-role",
},
},
}, client, pg, logger, eventRecorder)
_, err := cluster.KubeClient.Postgresqls(namespace).Create(
context.TODO(), &pg, metav1.CreateOptions{})
assert.NoError(t, err)
// create stream with different owner reference
fes.ObjectMeta.Name = fmt.Sprintf("%s-12345", pg.Name)
fes.ObjectMeta.Labels["cluster-name"] = pg.Name
createdStream, err := cluster.KubeClient.FabricEventStreams(namespace).Create(
context.TODO(), fes, metav1.CreateOptions{})
assert.NoError(t, err)
assert.Equal(t, createdStream.Spec.ApplicationId, appId)
// sync the stream which should update the owner reference
err = cluster.syncStream(appId)
assert.NoError(t, err)
// check that only one stream exists after sync
listOptions := metav1.ListOptions{
LabelSelector: cluster.labelsSet(true).String(),
}
streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
assert.NoError(t, err)
assert.Equalf(t, 1, len(streams.Items), "unexpected number of streams found: got %d, but expected only 1", len(streams.Items))
// compare owner references
if !reflect.DeepEqual(streams.Items[0].OwnerReferences, cluster.ownerReferences()) {
t.Errorf("unexpected owner references, expected %#v, got %#v", cluster.ownerReferences(), streams.Items[0].OwnerReferences)
}
// change specs of streams and patch CRD
for i, stream := range pg.Spec.Streams {
if stream.ApplicationId == appId {
streamTable := stream.Tables["data.bar"]
streamTable.EventType = "stream-type-c"
stream.Tables["data.bar"] = streamTable
stream.BatchSize = k8sutil.UInt32ToPointer(uint32(250))
pg.Spec.Streams[i] = stream
}
}
// compare stream returned from API with expected stream
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
result := cluster.generateFabricEventStream(appId)
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
t.Errorf("Malformed FabricEventStream after updating manifest, expected %#v, got %#v", streams.Items[0], result)
}
// disable recovery
for idx, stream := range pg.Spec.Streams {
if stream.ApplicationId == appId {
stream.EnableRecovery = util.False()
pg.Spec.Streams[idx] = stream
}
}
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
result = cluster.generateFabricEventStream(appId)
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
t.Errorf("Malformed FabricEventStream after disabling event recovery, expected %#v, got %#v", streams.Items[0], result)
}
}
func patchPostgresqlStreams(t *testing.T, cluster *Cluster, pgSpec *acidv1.PostgresSpec, listOptions metav1.ListOptions) (streams *zalandov1.FabricEventStreamList) {
patchData, err := specPatch(pgSpec)
assert.NoError(t, err)
pgPatched, err := cluster.KubeClient.Postgresqls(namespace).Patch(
context.TODO(), cluster.Name, types.MergePatchType, patchData, metav1.PatchOptions{}, "spec")
assert.NoError(t, err)
cluster.Postgresql.Spec = pgPatched.Spec
err = cluster.syncStream(appId)
assert.NoError(t, err)
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
assert.NoError(t, err)
return streams
}
func TestDeleteStreams(t *testing.T) {
pg.Name = fmt.Sprintf("%s-4", pg.Name)
var cluster = New(
Config{
OpConfig: config.Config{
@ -392,12 +807,8 @@ func TestUpdateFabricEventStream(t *testing.T) {
context.TODO(), &pg, metav1.CreateOptions{})
assert.NoError(t, err)
- // create statefulset to have ownerReference for streams
- _, err = cluster.createStatefulSet()
- assert.NoError(t, err)
- // now create the stream
- err = cluster.createOrUpdateStreams()
+ // create the stream
+ err = cluster.syncStream(appId)
assert.NoError(t, err)
// change specs of streams and patch CRD
@ -411,48 +822,70 @@ func TestUpdateFabricEventStream(t *testing.T) {
}
}
- patchData, err := specPatch(pg.Spec)
- assert.NoError(t, err)
- pgPatched, err := cluster.KubeClient.Postgresqls(namespace).Patch(
- context.TODO(), cluster.Name, types.MergePatchType, patchData, metav1.PatchOptions{}, "spec")
- assert.NoError(t, err)
- cluster.Postgresql.Spec = pgPatched.Spec
- err = cluster.createOrUpdateStreams()
- assert.NoError(t, err)
// compare stream returned from API with expected stream
listOptions := metav1.ListOptions{
- LabelSelector: cluster.labelsSet(true).String(),
+ LabelSelector: cluster.labelsSet(false).String(),
}
- streams, err := cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
- assert.NoError(t, err)
+ streams := patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
result := cluster.generateFabricEventStream(appId)
- if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, result.Spec.EventStreams); !match {
+ if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
t.Errorf("Malformed FabricEventStream after updating manifest, expected %#v, got %#v", streams.Items[0], result)
}
// change teamId and check that stream is updated
pg.Spec.TeamID = "new-team"
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
result = cluster.generateFabricEventStream(appId)
if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
t.Errorf("Malformed FabricEventStream after updating teamId, expected %#v, got %#v", streams.Items[0].ObjectMeta.Labels, result.ObjectMeta.Labels)
}
// disable recovery
- for _, stream := range pg.Spec.Streams {
+ for idx, stream := range pg.Spec.Streams {
if stream.ApplicationId == appId {
stream.EnableRecovery = util.False()
pg.Spec.Streams[idx] = stream
}
}
- patchData, err = specPatch(pg.Spec)
- assert.NoError(t, err)
- pgPatched, err = cluster.KubeClient.Postgresqls(namespace).Patch(
- context.TODO(), cluster.Name, types.MergePatchType, patchData, metav1.PatchOptions{}, "spec")
- assert.NoError(t, err)
- cluster.Postgresql.Spec = pgPatched.Spec
- err = cluster.createOrUpdateStreams()
- assert.NoError(t, err)
streams = patchPostgresqlStreams(t, cluster, &pg.Spec, listOptions)
result = cluster.generateFabricEventStream(appId)
- if match, _ := sameStreams(streams.Items[0].Spec.EventStreams, result.Spec.EventStreams); !match {
+ if match, _ := cluster.compareStreams(&streams.Items[0], result); !match {
t.Errorf("Malformed FabricEventStream after disabling event recovery, expected %#v, got %#v", streams.Items[0], result)
}
// remove streams from manifest
pg.Spec.Streams = nil
pgUpdated, err := cluster.KubeClient.Postgresqls(namespace).Update(
context.TODO(), &pg, metav1.UpdateOptions{})
assert.NoError(t, err)
appIds := getDistinctApplicationIds(pgUpdated.Spec.Streams)
cluster.cleanupRemovedStreams(appIds)
// check that streams have been deleted
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
assert.NoError(t, err)
assert.Equalf(t, 0, len(streams.Items), "unexpected number of streams found: got %d, but expected none", len(streams.Items))
// create stream to test deleteStreams code
fes.ObjectMeta.Name = fmt.Sprintf("%s-12345", pg.Name)
fes.ObjectMeta.Labels["cluster-name"] = pg.Name
_, err = cluster.KubeClient.FabricEventStreams(namespace).Create(
context.TODO(), fes, metav1.CreateOptions{})
assert.NoError(t, err)
// sync it once to cluster struct
err = cluster.syncStream(appId)
assert.NoError(t, err)
// we need a mock client because deleteStreams checks for CRD existence
mockClient := k8sutil.NewMockKubernetesClient()
cluster.KubeClient.CustomResourceDefinitionsGetter = mockClient.CustomResourceDefinitionsGetter
cluster.deleteStreams()
// check that streams have been deleted
streams, err = cluster.KubeClient.FabricEventStreams(namespace).List(context.TODO(), listOptions)
assert.NoError(t, err)
assert.Equalf(t, 0, len(streams.Items), "unexpected number of streams found: got %d, but expected none", len(streams.Items))
}


@ -4,8 +4,10 @@ import (
"context" "context"
"encoding/json" "encoding/json"
"fmt" "fmt"
"maps"
"reflect" "reflect"
"regexp" "regexp"
"slices"
"strconv" "strconv"
"strings" "strings"
"time" "time"
@ -15,11 +17,11 @@ import (
"github.com/zalando/postgres-operator/pkg/util" "github.com/zalando/postgres-operator/pkg/util"
"github.com/zalando/postgres-operator/pkg/util/constants" "github.com/zalando/postgres-operator/pkg/util/constants"
"github.com/zalando/postgres-operator/pkg/util/k8sutil" "github.com/zalando/postgres-operator/pkg/util/k8sutil"
"golang.org/x/exp/slices"
batchv1 "k8s.io/api/batch/v1" batchv1 "k8s.io/api/batch/v1"
v1 "k8s.io/api/core/v1" v1 "k8s.io/api/core/v1"
policyv1 "k8s.io/api/policy/v1" policyv1 "k8s.io/api/policy/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
) )
var requirePrimaryRestartWhenDecreased = []string{
@ -79,6 +81,10 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
return err
}
if err = c.syncPatroniResources(); err != nil {
c.logger.Errorf("could not sync Patroni resources: %v", err)
}
// sync volume may already transition volumes to gp3, if iops/throughput or type is specified
if err = c.syncVolumes(); err != nil {
return err
@ -91,7 +97,11 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
}
}
- c.logger.Debug("syncing statefulsets")
+ if !isInMaintenanceWindow(newSpec.Spec.MaintenanceWindows) {
// do not apply any major version related changes yet
newSpec.Spec.PostgresqlParam.PgVersion = oldSpec.Spec.PostgresqlParam.PgVersion
}
if err = c.syncStatefulSet(); err != nil {
if !k8sutil.ResourceAlreadyExists(err) {
err = fmt.Errorf("could not sync statefulsets: %v", err)
@ -107,8 +117,8 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
}
c.logger.Debug("syncing pod disruption budgets")
- if err = c.syncPodDisruptionBudget(false); err != nil {
- err = fmt.Errorf("could not sync pod disruption budget: %v", err)
+ if err = c.syncPodDisruptionBudgets(false); err != nil {
+ err = fmt.Errorf("could not sync pod disruption budgets: %v", err)
return err
}
@ -143,7 +153,10 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
return fmt.Errorf("could not sync connection pooler: %v", err) return fmt.Errorf("could not sync connection pooler: %v", err)
} }
if len(c.Spec.Streams) > 0 { // sync if manifest stream count is different from stream CR count
// it can be that they are always different due to grouping of manifest streams
// but we would catch missed removals on update
if len(c.Spec.Streams) != len(c.Streams) {
c.logger.Debug("syncing streams") c.logger.Debug("syncing streams")
if err = c.syncStreams(); err != nil { if err = c.syncStreams(); err != nil {
err = fmt.Errorf("could not sync streams: %v", err) err = fmt.Errorf("could not sync streams: %v", err)
@ -173,6 +186,167 @@ func (c *Cluster) syncFinalizer() error {
return nil
}
func (c *Cluster) syncPatroniResources() error {
errors := make([]string, 0)
if err := c.syncPatroniService(); err != nil {
errors = append(errors, fmt.Sprintf("could not sync %s service: %v", Patroni, err))
}
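// Patroni keeps its DCS objects (leader, config, etc.) either in config maps or in
// endpoints, depending on kubernetes_use_configmaps, so the matching kind is synced per suffix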
for _, suffix := range patroniObjectSuffixes {
if c.patroniKubernetesUseConfigMaps() {
if err := c.syncPatroniConfigMap(suffix); err != nil {
errors = append(errors, fmt.Sprintf("could not sync %s Patroni config map: %v", suffix, err))
}
} else {
if err := c.syncPatroniEndpoint(suffix); err != nil {
errors = append(errors, fmt.Sprintf("could not sync %s Patroni endpoint: %v", suffix, err))
}
}
}
if len(errors) > 0 {
return fmt.Errorf("%v", strings.Join(errors, `', '`))
}
return nil
}
func (c *Cluster) syncPatroniConfigMap(suffix string) error {
var (
cm *v1.ConfigMap
err error
)
configMapName := fmt.Sprintf("%s-%s", c.Name, suffix)
c.logger.Debugf("syncing %s config map", configMapName)
c.setProcessName("syncing %s config map", configMapName)
if cm, err = c.KubeClient.ConfigMaps(c.Namespace).Get(context.TODO(), configMapName, metav1.GetOptions{}); err == nil {
c.PatroniConfigMaps[suffix] = cm
desiredOwnerRefs := c.ownerReferences()
if !reflect.DeepEqual(cm.ObjectMeta.OwnerReferences, desiredOwnerRefs) {
c.logger.Infof("new %s config map's owner references do not match the current ones", configMapName)
cm.ObjectMeta.OwnerReferences = desiredOwnerRefs
c.setProcessName("updating %s config map", configMapName)
cm, err = c.KubeClient.ConfigMaps(c.Namespace).Update(context.TODO(), cm, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("could not update %s config map: %v", configMapName, err)
}
c.PatroniConfigMaps[suffix] = cm
}
annotations := make(map[string]string)
maps.Copy(annotations, cm.Annotations)
// Patroni can add extra annotations so incl. current annotations in desired annotations
desiredAnnotations := c.annotationsSet(cm.Annotations)
if changed, _ := c.compareAnnotations(annotations, desiredAnnotations, nil); changed {
patchData, err := metaAnnotationsPatch(desiredAnnotations)
if err != nil {
return fmt.Errorf("could not form patch for %s config map: %v", configMapName, err)
}
cm, err = c.KubeClient.ConfigMaps(c.Namespace).Patch(context.TODO(), configMapName, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
if err != nil {
return fmt.Errorf("could not patch annotations of %s config map: %v", configMapName, err)
}
c.PatroniConfigMaps[suffix] = cm
}
} else if !k8sutil.ResourceNotFound(err) {
// if config map does not exist yet, Patroni should create it
return fmt.Errorf("could not get %s config map: %v", configMapName, err)
}
return nil
}
func (c *Cluster) syncPatroniEndpoint(suffix string) error {
var (
ep *v1.Endpoints
err error
)
endpointName := fmt.Sprintf("%s-%s", c.Name, suffix)
c.logger.Debugf("syncing %s endpoint", endpointName)
c.setProcessName("syncing %s endpoint", endpointName)
if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), endpointName, metav1.GetOptions{}); err == nil {
c.PatroniEndpoints[suffix] = ep
desiredOwnerRefs := c.ownerReferences()
if !reflect.DeepEqual(ep.ObjectMeta.OwnerReferences, desiredOwnerRefs) {
c.logger.Infof("new %s endpoints's owner references do not match the current ones", endpointName)
ep.ObjectMeta.OwnerReferences = desiredOwnerRefs
c.setProcessName("updating %s endpoint", endpointName)
ep, err = c.KubeClient.Endpoints(c.Namespace).Update(context.TODO(), ep, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("could not update %s endpoint: %v", endpointName, err)
}
c.PatroniEndpoints[suffix] = ep
}
annotations := make(map[string]string)
maps.Copy(annotations, ep.Annotations)
// Patroni can add extra annotations so incl. current annotations in desired annotations
desiredAnnotations := c.annotationsSet(ep.Annotations)
if changed, _ := c.compareAnnotations(annotations, desiredAnnotations, nil); changed {
patchData, err := metaAnnotationsPatch(desiredAnnotations)
if err != nil {
return fmt.Errorf("could not form patch for %s endpoint: %v", endpointName, err)
}
ep, err = c.KubeClient.Endpoints(c.Namespace).Patch(context.TODO(), endpointName, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
if err != nil {
return fmt.Errorf("could not patch annotations of %s endpoint: %v", endpointName, err)
}
c.PatroniEndpoints[suffix] = ep
}
} else if !k8sutil.ResourceNotFound(err) {
// if endpoint does not exist yet, Patroni should create it
return fmt.Errorf("could not get %s endpoint: %v", endpointName, err)
}
return nil
}
func (c *Cluster) syncPatroniService() error {
var (
svc *v1.Service
err error
)
serviceName := fmt.Sprintf("%s-%s", c.Name, Patroni)
c.logger.Debugf("syncing %s service", serviceName)
c.setProcessName("syncing %s service", serviceName)
if svc, err = c.KubeClient.Services(c.Namespace).Get(context.TODO(), serviceName, metav1.GetOptions{}); err == nil {
c.Services[Patroni] = svc
desiredOwnerRefs := c.ownerReferences()
if !reflect.DeepEqual(svc.ObjectMeta.OwnerReferences, desiredOwnerRefs) {
c.logger.Infof("new %s service's owner references do not match the current ones", serviceName)
svc.ObjectMeta.OwnerReferences = desiredOwnerRefs
c.setProcessName("updating %v service", serviceName)
svc, err = c.KubeClient.Services(c.Namespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("could not update %s service: %v", serviceName, err)
}
c.Services[Patroni] = svc
}
annotations := make(map[string]string)
maps.Copy(annotations, svc.Annotations)
// Patroni can add extra annotations so incl. current annotations in desired annotations
desiredAnnotations := c.annotationsSet(svc.Annotations)
if changed, _ := c.compareAnnotations(annotations, desiredAnnotations, nil); changed {
patchData, err := metaAnnotationsPatch(desiredAnnotations)
if err != nil {
return fmt.Errorf("could not form patch for %s service: %v", serviceName, err)
}
svc, err = c.KubeClient.Services(c.Namespace).Patch(context.TODO(), serviceName, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
if err != nil {
return fmt.Errorf("could not patch annotations of %s service: %v", serviceName, err)
}
c.Services[Patroni] = svc
}
} else if !k8sutil.ResourceNotFound(err) {
// if config service does not exist yet, Patroni should create it
return fmt.Errorf("could not get %s service: %v", serviceName, err)
}
return nil
}
func (c *Cluster) syncServices() error {
for _, role := range []PostgresRole{Master, Replica} {
c.logger.Debugf("syncing %s service", role)
@ -200,22 +374,17 @@ func (c *Cluster) syncService(role PostgresRole) error {
if svc, err = c.KubeClient.Services(c.Namespace).Get(context.TODO(), c.serviceName(role), metav1.GetOptions{}); err == nil {
c.Services[role] = svc
desiredSvc := c.generateService(role, &c.Spec)
- if match, reason := c.compareServices(svc, desiredSvc); !match {
- c.logServiceChanges(role, svc, desiredSvc, false, reason)
updatedSvc, err := c.updateService(role, svc, desiredSvc)
if err != nil {
return fmt.Errorf("could not update %s service to match desired state: %v", role, err)
}
c.Services[role] = updatedSvc
- c.logger.Infof("%s service %q is in the desired state now", role, util.NameFromMeta(desiredSvc.ObjectMeta))
- }
return nil
}
if !k8sutil.ResourceNotFound(err) {
return fmt.Errorf("could not get %s service: %v", role, err)
}
// no existing service, create new one
- c.Services[role] = nil
c.logger.Infof("could not find the cluster's %s service", role)
if svc, err = c.createService(role); err == nil {
@ -240,8 +409,28 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
)
c.setProcessName("syncing %s endpoint", role)
- if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.endpointName(role), metav1.GetOptions{}); err == nil {
- // TODO: No syncing of endpoints here, is this covered completely by updateService?
+ if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.serviceName(role), metav1.GetOptions{}); err == nil {
+ desiredEp := c.generateEndpoint(role, ep.Subsets)
// if owner references differ we update which would also change annotations
if !reflect.DeepEqual(ep.ObjectMeta.OwnerReferences, desiredEp.ObjectMeta.OwnerReferences) {
c.logger.Infof("new %s endpoints's owner references do not match the current ones", role)
c.setProcessName("updating %v endpoint", role)
ep, err = c.KubeClient.Endpoints(c.Namespace).Update(context.TODO(), desiredEp, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("could not update %s endpoint: %v", role, err)
}
} else {
if changed, _ := c.compareAnnotations(ep.Annotations, desiredEp.Annotations, nil); changed {
patchData, err := metaAnnotationsPatch(desiredEp.Annotations)
if err != nil {
return fmt.Errorf("could not form patch for %s endpoint: %v", role, err)
}
ep, err = c.KubeClient.Endpoints(c.Namespace).Patch(context.TODO(), c.serviceName(role), types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
if err != nil {
return fmt.Errorf("could not patch annotations of %s endpoint: %v", role, err)
}
}
}
c.Endpoints[role] = ep
return nil
}
@ -249,7 +438,6 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
return fmt.Errorf("could not get %s endpoint: %v", role, err) return fmt.Errorf("could not get %s endpoint: %v", role, err)
} }
// no existing endpoint, create new one // no existing endpoint, create new one
c.Endpoints[role] = nil
c.logger.Infof("could not find the cluster's %s endpoint", role) c.logger.Infof("could not find the cluster's %s endpoint", role)
if ep, err = c.createEndpoint(role); err == nil { if ep, err = c.createEndpoint(role); err == nil {
@ -259,7 +447,7 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
return fmt.Errorf("could not create missing %s endpoint: %v", role, err) return fmt.Errorf("could not create missing %s endpoint: %v", role, err)
} }
c.logger.Infof("%s endpoint %q already exists", role, util.NameFromMeta(ep.ObjectMeta)) c.logger.Infof("%s endpoint %q already exists", role, util.NameFromMeta(ep.ObjectMeta))
if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.endpointName(role), metav1.GetOptions{}); err != nil { if ep, err = c.KubeClient.Endpoints(c.Namespace).Get(context.TODO(), c.serviceName(role), metav1.GetOptions{}); err != nil {
return fmt.Errorf("could not fetch existing %s endpoint: %v", role, err) return fmt.Errorf("could not fetch existing %s endpoint: %v", role, err)
} }
} }
@ -267,21 +455,22 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
return nil
}
- func (c *Cluster) syncPodDisruptionBudget(isUpdate bool) error {
+ func (c *Cluster) syncPrimaryPodDisruptionBudget(isUpdate bool) error {
var (
pdb *policyv1.PodDisruptionBudget
err error
)
- if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.podDisruptionBudgetName(), metav1.GetOptions{}); err == nil {
- c.PodDisruptionBudget = pdb
- newPDB := c.generatePodDisruptionBudget()
- if match, reason := k8sutil.SamePDB(pdb, newPDB); !match {
+ if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.PrimaryPodDisruptionBudgetName(), metav1.GetOptions{}); err == nil {
+ c.PrimaryPodDisruptionBudget = pdb
+ newPDB := c.generatePrimaryPodDisruptionBudget()
+ match, reason := c.comparePodDisruptionBudget(pdb, newPDB)
if !match {
c.logPDBChanges(pdb, newPDB, isUpdate, reason)
- if err = c.updatePodDisruptionBudget(newPDB); err != nil {
+ if err = c.updatePrimaryPodDisruptionBudget(newPDB); err != nil {
return err
}
} else {
- c.PodDisruptionBudget = pdb
+ c.PrimaryPodDisruptionBudget = pdb
}
return nil
@ -290,22 +479,74 @@ func (c *Cluster) syncPodDisruptionBudget(isUpdate bool) error {
return fmt.Errorf("could not get pod disruption budget: %v", err) return fmt.Errorf("could not get pod disruption budget: %v", err)
} }
// no existing pod disruption budget, create new one // no existing pod disruption budget, create new one
c.PodDisruptionBudget = nil c.logger.Infof("could not find the primary pod disruption budget")
c.logger.Infof("could not find the cluster's pod disruption budget")
if pdb, err = c.createPodDisruptionBudget(); err != nil { if err = c.createPrimaryPodDisruptionBudget(); err != nil {
if !k8sutil.ResourceAlreadyExists(err) { if !k8sutil.ResourceAlreadyExists(err) {
return fmt.Errorf("could not create pod disruption budget: %v", err) return fmt.Errorf("could not create primary pod disruption budget: %v", err)
} }
c.logger.Infof("pod disruption budget %q already exists", util.NameFromMeta(pdb.ObjectMeta)) c.logger.Infof("pod disruption budget %q already exists", util.NameFromMeta(pdb.ObjectMeta))
if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.podDisruptionBudgetName(), metav1.GetOptions{}); err != nil { if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.PrimaryPodDisruptionBudgetName(), metav1.GetOptions{}); err != nil {
return fmt.Errorf("could not fetch existing %q pod disruption budget", util.NameFromMeta(pdb.ObjectMeta)) return fmt.Errorf("could not fetch existing %q pod disruption budget", util.NameFromMeta(pdb.ObjectMeta))
} }
} }
c.logger.Infof("created missing pod disruption budget %q", util.NameFromMeta(pdb.ObjectMeta)) return nil
c.PodDisruptionBudget = pdb }
func (c *Cluster) syncCriticalOpPodDisruptionBudget(isUpdate bool) error {
var (
pdb *policyv1.PodDisruptionBudget
err error
)
if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.criticalOpPodDisruptionBudgetName(), metav1.GetOptions{}); err == nil {
c.CriticalOpPodDisruptionBudget = pdb
newPDB := c.generateCriticalOpPodDisruptionBudget()
match, reason := c.comparePodDisruptionBudget(pdb, newPDB)
if !match {
c.logPDBChanges(pdb, newPDB, isUpdate, reason)
if err = c.updateCriticalOpPodDisruptionBudget(newPDB); err != nil {
return err
}
} else {
c.CriticalOpPodDisruptionBudget = pdb
}
return nil
}
if !k8sutil.ResourceNotFound(err) {
return fmt.Errorf("could not get pod disruption budget: %v", err)
}
// no existing pod disruption budget, create new one
c.logger.Infof("could not find pod disruption budget for critical operations")
if err = c.createCriticalOpPodDisruptionBudget(); err != nil {
if !k8sutil.ResourceAlreadyExists(err) {
return fmt.Errorf("could not create pod disruption budget for critical operations: %v", err)
}
c.logger.Infof("pod disruption budget %q already exists", util.NameFromMeta(pdb.ObjectMeta))
if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(context.TODO(), c.criticalOpPodDisruptionBudgetName(), metav1.GetOptions{}); err != nil {
return fmt.Errorf("could not fetch existing %q pod disruption budget", util.NameFromMeta(pdb.ObjectMeta))
}
}
return nil
}
func (c *Cluster) syncPodDisruptionBudgets(isUpdate bool) error {
errors := make([]string, 0)
if err := c.syncPrimaryPodDisruptionBudget(isUpdate); err != nil {
errors = append(errors, fmt.Sprintf("%v", err))
}
if err := c.syncCriticalOpPodDisruptionBudget(isUpdate); err != nil {
errors = append(errors, fmt.Sprintf("%v", err))
}
if len(errors) > 0 {
return fmt.Errorf("%v", strings.Join(errors, `', '`))
}
return nil
}
@ -317,6 +558,7 @@ func (c *Cluster) syncStatefulSet() error {
)
podsToRecreate := make([]v1.Pod, 0)
isSafeToRecreatePods := true
postponeReasons := make([]string, 0)
switchoverCandidates := make([]spec.NamespacedName, 0)
pods, err := c.listPods()
@ -326,12 +568,12 @@ func (c *Cluster) syncStatefulSet() error {
// NB: Be careful to consider the codepath that acts on podsRollingUpdateRequired before returning early.
sset, err := c.KubeClient.StatefulSets(c.Namespace).Get(context.TODO(), c.statefulSetName(), metav1.GetOptions{})
- if err != nil {
- if !k8sutil.ResourceNotFound(err) {
+ if err != nil && !k8sutil.ResourceNotFound(err) {
return fmt.Errorf("error during reading of statefulset: %v", err)
}
if err != nil {
// statefulset does not exist, try to re-create it
- c.Statefulset = nil
c.logger.Infof("cluster's statefulset does not exist")
sset, err = c.createStatefulSet()
@ -354,6 +596,11 @@ func (c *Cluster) syncStatefulSet() error {
c.logger.Infof("created missing statefulset %q", util.NameFromMeta(sset.ObjectMeta)) c.logger.Infof("created missing statefulset %q", util.NameFromMeta(sset.ObjectMeta))
} else { } else {
desiredSts, err := c.generateStatefulSet(&c.Spec)
if err != nil {
return fmt.Errorf("could not generate statefulset: %v", err)
}
c.logger.Debug("syncing statefulsets")
// check if there are still pods with a rolling update flag // check if there are still pods with a rolling update flag
for _, pod := range pods { for _, pod := range pods {
if c.getRollingUpdateFlagFromPod(&pod) { if c.getRollingUpdateFlagFromPod(&pod) {
@ -368,18 +615,36 @@ func (c *Cluster) syncStatefulSet() error {
}
if len(podsToRecreate) > 0 {
- c.logger.Debugf("%d / %d pod(s) still need to be rotated", len(podsToRecreate), len(pods))
+ c.logger.Infof("%d / %d pod(s) still need to be rotated", len(podsToRecreate), len(pods))
}
// statefulset is already there, make sure we use its definition in order to compare with the spec.
c.Statefulset = sset
- desiredSts, err := c.generateStatefulSet(&c.Spec)
+ cmp := c.compareStatefulSetWith(desiredSts)
if !cmp.rollingUpdate {
updatedPodAnnotations := map[string]*string{}
for _, anno := range cmp.deletedPodAnnotations {
updatedPodAnnotations[anno] = nil
}
for anno, val := range desiredSts.Spec.Template.Annotations {
updatedPodAnnotations[anno] = &val
}
metadataReq := map[string]map[string]map[string]*string{"metadata": {"annotations": updatedPodAnnotations}}
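// note: annotation keys mapped to nil end up as JSON null in this patch, which removes
// them from the pod, while the remaining keys are added or updated in place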
patch, err := json.Marshal(metadataReq)
if err != nil {
- return fmt.Errorf("could not generate statefulset: %v", err)
+ return fmt.Errorf("could not form patch for pod annotations: %v", err)
}
- cmp := c.compareStatefulSetWith(desiredSts)
+ for _, pod := range pods {
if changed, _ := c.compareAnnotations(pod.Annotations, desiredSts.Spec.Template.Annotations, nil); changed {
_, err = c.KubeClient.Pods(c.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
if err != nil {
return fmt.Errorf("could not patch annotations for pod %q: %v", pod.Name, err)
}
}
}
}
if !cmp.match {
if cmp.rollingUpdate {
podsToRecreate = make([]v1.Pod, 0)
@ -452,12 +717,14 @@ func (c *Cluster) syncStatefulSet() error {
c.logger.Debug("syncing Patroni config") c.logger.Debug("syncing Patroni config")
if configPatched, restartPrimaryFirst, restartWait, err = c.syncPatroniConfig(pods, c.Spec.Patroni, requiredPgParameters); err != nil { if configPatched, restartPrimaryFirst, restartWait, err = c.syncPatroniConfig(pods, c.Spec.Patroni, requiredPgParameters); err != nil {
c.logger.Warningf("Patroni config updated? %v - errors during config sync: %v", configPatched, err) c.logger.Warningf("Patroni config updated? %v - errors during config sync: %v", configPatched, err)
postponeReasons = append(postponeReasons, "errors during Patroni config sync")
isSafeToRecreatePods = false isSafeToRecreatePods = false
} }
// restart Postgres where it is still pending // restart Postgres where it is still pending
if err = c.restartInstances(pods, restartWait, restartPrimaryFirst); err != nil { if err = c.restartInstances(pods, restartWait, restartPrimaryFirst); err != nil {
c.logger.Errorf("errors while restarting Postgres in pods via Patroni API: %v", err) c.logger.Errorf("errors while restarting Postgres in pods via Patroni API: %v", err)
postponeReasons = append(postponeReasons, "errors while restarting Postgres via Patroni API")
isSafeToRecreatePods = false isSafeToRecreatePods = false
} }
@ -465,14 +732,14 @@ func (c *Cluster) syncStatefulSet() error {
// statefulset or those that got their configuration from the outdated statefulset)
if len(podsToRecreate) > 0 {
if isSafeToRecreatePods {
- c.logger.Debugln("performing rolling update")
+ c.logger.Info("performing rolling update")
c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "Update", "Performing rolling update")
if err := c.recreatePods(podsToRecreate, switchoverCandidates); err != nil {
return fmt.Errorf("could not recreate pods: %v", err)
}
c.eventRecorder.Event(c.GetReference(), v1.EventTypeNormal, "Update", "Rolling update done - pods have been recreated")
} else {
- c.logger.Warningf("postpone pod recreation until next sync because of errors during config sync")
+ c.logger.Warningf("postpone pod recreation until next sync - reason: %s", strings.Join(postponeReasons, `', '`))
}
}
@ -682,7 +949,7 @@ func (c *Cluster) checkAndSetGlobalPostgreSQLConfiguration(pod *v1.Pod, effectiv
// check if specified slots exist in config and if they differ
for slotName, desiredSlot := range desiredPatroniConfig.Slots {
// only add slots specified in manifest to c.replicationSlots
- for manifestSlotName, _ := range c.Spec.Patroni.Slots {
+ for manifestSlotName := range c.Spec.Patroni.Slots {
if manifestSlotName == slotName {
c.replicationSlots[slotName] = desiredSlot
}
@ -778,7 +1045,7 @@ func (c *Cluster) syncStandbyClusterConfiguration() error {
// carries the request to change configuration through
for _, pod := range pods {
podName := util.NameFromMeta(pod.ObjectMeta)
- c.logger.Debugf("patching Postgres config via Patroni API on pod %s with following options: %s",
+ c.logger.Infof("patching Postgres config via Patroni API on pod %s with following options: %s",
podName, standbyOptionsToSet)
if err = c.patroni.SetStandbyClusterParameters(&pod, standbyOptionsToSet); err == nil {
return nil
@ -790,7 +1057,7 @@ func (c *Cluster) syncStandbyClusterConfiguration() error {
}
func (c *Cluster) syncSecrets() error {
- c.logger.Info("syncing secrets")
+ c.logger.Debug("syncing secrets")
c.setProcessName("syncing secrets")
generatedSecrets := c.generateUserSecrets()
retentionUsers := make([]string, 0)
@ -800,7 +1067,7 @@ func (c *Cluster) syncSecrets() error {
secret, err := c.KubeClient.Secrets(generatedSecret.Namespace).Create(context.TODO(), generatedSecret, metav1.CreateOptions{})
if err == nil {
c.Secrets[secret.UID] = secret
- c.logger.Debugf("created new secret %s, namespace: %s, uid: %s", util.NameFromMeta(secret.ObjectMeta), generatedSecret.Namespace, secret.UID)
+ c.logger.Infof("created new secret %s, namespace: %s, uid: %s", util.NameFromMeta(secret.ObjectMeta), generatedSecret.Namespace, secret.UID)
continue
}
if k8sutil.ResourceAlreadyExists(err) {
@ -855,13 +1122,14 @@ func (c *Cluster) updateSecret(
// fetch user map to update later
var userMap map[string]spec.PgUser
var userKey string
- if secretUsername == c.systemUsers[constants.SuperuserKeyName].Name {
+ switch secretUsername {
case c.systemUsers[constants.SuperuserKeyName].Name:
userKey = constants.SuperuserKeyName
userMap = c.systemUsers
- } else if secretUsername == c.systemUsers[constants.ReplicationUserKeyName].Name {
+ case c.systemUsers[constants.ReplicationUserKeyName].Name:
userKey = constants.ReplicationUserKeyName
userMap = c.systemUsers
- } else {
+ default:
userKey = secretUsername
userMap = c.pgUsers
}
@ -934,14 +1202,32 @@ func (c *Cluster) updateSecret(
userMap[userKey] = pwdUser
}
if !reflect.DeepEqual(secret.ObjectMeta.OwnerReferences, generatedSecret.ObjectMeta.OwnerReferences) {
updateSecret = true
updateSecretMsg = fmt.Sprintf("secret %s owner references do not match the current ones", secretName)
secret.ObjectMeta.OwnerReferences = generatedSecret.ObjectMeta.OwnerReferences
}
if updateSecret {
- c.logger.Debugln(updateSecretMsg)
- if _, err = c.KubeClient.Secrets(secret.Namespace).Update(context.TODO(), secret, metav1.UpdateOptions{}); err != nil {
+ c.logger.Infof("%s", updateSecretMsg)
+ if secret, err = c.KubeClient.Secrets(secret.Namespace).Update(context.TODO(), secret, metav1.UpdateOptions{}); err != nil {
return fmt.Errorf("could not update secret %s: %v", secretName, err)
}
c.Secrets[secret.UID] = secret
}
if changed, _ := c.compareAnnotations(secret.Annotations, generatedSecret.Annotations, nil); changed {
patchData, err := metaAnnotationsPatch(generatedSecret.Annotations)
if err != nil {
return fmt.Errorf("could not form patch for secret %q annotations: %v", secret.Name, err)
}
secret, err = c.KubeClient.Secrets(secret.Namespace).Patch(context.TODO(), secret.Name, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
if err != nil {
return fmt.Errorf("could not patch annotations for secret %q: %v", secret.Name, err)
}
c.Secrets[secret.UID] = secret
}
return nil
}
@ -1367,18 +1653,56 @@ func (c *Cluster) syncLogicalBackupJob() error {
if err != nil {
return fmt.Errorf("could not generate the desired logical backup job state: %v", err)
}
- if match, reason := k8sutil.SameLogicalBackupJob(job, desiredJob); !match {
+ if !reflect.DeepEqual(job.ObjectMeta.OwnerReferences, desiredJob.ObjectMeta.OwnerReferences) {
c.logger.Info("new logical backup job's owner references do not match the current ones")
job, err = c.KubeClient.CronJobs(job.Namespace).Update(context.TODO(), desiredJob, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("could not update owner references for logical backup job %q: %v", job.Name, err)
}
c.logger.Infof("logical backup job %s updated", c.getLogicalBackupJobName())
}
if cmp := c.compareLogicalBackupJob(job, desiredJob); !cmp.match {
c.logger.Infof("logical job %s is not in the desired state and needs to be updated", c.logger.Infof("logical job %s is not in the desired state and needs to be updated",
c.getLogicalBackupJobName(), c.getLogicalBackupJobName(),
) )
if reason != "" { if len(cmp.reasons) != 0 {
for _, reason := range cmp.reasons {
c.logger.Infof("reason: %s", reason) c.logger.Infof("reason: %s", reason)
} }
}
if len(cmp.deletedPodAnnotations) != 0 {
templateMetadataReq := map[string]map[string]map[string]map[string]map[string]map[string]map[string]*string{
"spec": {"jobTemplate": {"spec": {"template": {"metadata": {"annotations": {}}}}}}}
for _, anno := range cmp.deletedPodAnnotations {
templateMetadataReq["spec"]["jobTemplate"]["spec"]["template"]["metadata"]["annotations"][anno] = nil
}
patch, err := json.Marshal(templateMetadataReq)
if err != nil {
return fmt.Errorf("could not marshal ObjectMeta for logical backup job %q pod template: %v", jobName, err)
}
job, err = c.KubeClient.CronJobs(c.Namespace).Patch(context.TODO(), jobName, types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "")
if err != nil {
c.logger.Errorf("failed to remove annotations from the logical backup job %q pod template: %v", jobName, err)
return err
}
}
if err = c.patchLogicalBackupJob(desiredJob); err != nil {
return fmt.Errorf("could not update logical backup job to match desired state: %v", err)
}
c.logger.Info("the logical backup job is synced")
}
if changed, _ := c.compareAnnotations(job.Annotations, desiredJob.Annotations, nil); changed {
patchData, err := metaAnnotationsPatch(desiredJob.Annotations)
if err != nil {
return fmt.Errorf("could not form patch for the logical backup job %q: %v", jobName, err)
}
_, err = c.KubeClient.CronJobs(c.Namespace).Patch(context.TODO(), jobName, types.MergePatchType, []byte(patchData), metav1.PatchOptions{})
if err != nil {
return fmt.Errorf("could not patch annotations of the logical backup job %q: %v", jobName, err)
}
}
c.LogicalBackupJob = desiredJob
return nil
}
if !k8sutil.ResourceNotFound(err) {
