* restart master first in some edge cases
* the edge case is when the desired value is lower than the effective one (see the sketch after this list)
* wait after config patch and restart on sync whenever we see pending_restart
* convert options to int to check decrease and add unit test
* minor update to e2e tests
* wait only after restart not every sync
* using spilo 14 e2e images
* improve Patroni config sync
* collect new and updated slots to patch patroni
* refactor httpGet in Patroni and extend unit tests
* GetMemberData should call the patroni endpoint
* add PATCH test
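A minimal, self-contained sketch of the comparison the bullets above describe: read the effective configuration from the Patroni REST API, convert the option value to an integer, and decide whether the desired value is a decrease (in which case the master has to be restarted first). The endpoint, port, and helper names are illustrative assumptions, not the operator's actual code.

```
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
)

// patroniConfig captures only the part of Patroni's /config response we need.
type patroniConfig struct {
	PostgreSQL struct {
		Parameters map[string]interface{} `json:"parameters"`
	} `json:"postgresql"`
}

// effectiveIntOption reads one numeric option from the effective configuration
// exposed by Patroni's REST API (default port 8008).
func effectiveIntOption(host, name string) (int, error) {
	resp, err := http.Get(fmt.Sprintf("http://%s:8008/config", host))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var cfg patroniConfig
	if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
		return 0, err
	}
	raw, ok := cfg.PostgreSQL.Parameters[name]
	if !ok {
		return 0, fmt.Errorf("option %q not found in effective config", name)
	}
	// Patroni may deliver the value as a string or a number, so normalise to int.
	switch v := raw.(type) {
	case string:
		return strconv.Atoi(v)
	case float64:
		return int(v), nil
	}
	return 0, fmt.Errorf("option %q has an unexpected type", name)
}

func main() {
	desired := 100
	effective, err := effectiveIntOption("localhost", "max_connections")
	if err != nil {
		fmt.Println("could not read effective config:", err)
		return
	}
	if desired < effective {
		fmt.Println("decrease detected: restart the master first")
	}
}
```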
The ports section should be a list. Without this fix you'll trigger the following error:
```
Warning Create 2m38s postgres-operator json: cannot unmarshal object into Go struct field Sidecar.spec.sidecars.ports of type []v1.ContainerPort
```
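For context, here is a small runnable sketch of why the list form matters: the operator unmarshals the manifest into a Go slice, so a single object under `ports` cannot be decoded. The local `ContainerPort` type below stands in for `k8s.io/api/core/v1.ContainerPort` so the example compiles on its own.

```
package main

import (
	"encoding/json"
	"fmt"
)

// ContainerPort stands in for k8s.io/api/core/v1.ContainerPort in this sketch.
type ContainerPort struct {
	Name          string `json:"name,omitempty"`
	ContainerPort int32  `json:"containerPort"`
}

// Sidecar mirrors the field the error message refers to: ports is a slice,
// so the manifest has to provide a list, not a single object.
type Sidecar struct {
	Name  string          `json:"name"`
	Ports []ContainerPort `json:"ports,omitempty"`
}

func main() {
	ok := []byte(`{"name": "exporter", "ports": [{"containerPort": 9187}]}`)
	bad := []byte(`{"name": "exporter", "ports": {"containerPort": 9187}}`)

	var s Sidecar
	fmt.Println(json.Unmarshal(ok, &s))  // <nil>
	fmt.Println(json.Unmarshal(bad, &s)) // cannot unmarshal object into Go struct field ...
}
```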
* remove role from installLookupFunction and run it on database sync, too
* fix the condition to decide on syncing the pooler (see the sketch after this list)
* trigger lookup from database sync only if pooler is set
* use an empty spec everywhere and do not sync if one lookup function was passed
* do not sync pooler after being disabled
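A rough sketch of the condition these bullets refer to: the lookup function is only installed, and the pooler only synced, when a connection pooler is actually requested in the spec. The struct and helper names are illustrative assumptions, not the operator's exact API.

```
package main

import "fmt"

// PostgresSpec holds just the pooler-related flags needed for this sketch.
type PostgresSpec struct {
	EnableConnectionPooler        *bool
	EnableReplicaConnectionPooler *bool
}

// needsConnectionPooler decides whether pooler objects (and the lookup
// function) should be synced for this cluster at all.
func needsConnectionPooler(spec *PostgresSpec) bool {
	enabled := func(b *bool) bool { return b != nil && *b }
	return enabled(spec.EnableConnectionPooler) || enabled(spec.EnableReplicaConnectionPooler)
}

func main() {
	yes := true
	fmt.Println(needsConnectionPooler(&PostgresSpec{}))                             // false: skip lookup and pooler sync
	fmt.Println(needsConnectionPooler(&PostgresSpec{EnableConnectionPooler: &yes})) // true: trigger lookup on database sync
}
```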
* enhance docs on clone and restore
* add chapter about upgrading the operator
* add section for standby clusters
* Update docs/administrator.md
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
This commit adds support for using an Azure storage account as a backup
location.
It uses the existing GCS functionality as a reference for what to do,
and follows the example set by GCS as closely as possible.
The decision to name the cloud provider key "aws_or_gcp" becomes unfortunate
when adding support for Azure, but I have left it alone to keep this changeset
backwards compatible.
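A hedged sketch of that approach: the WAL environment for the database pods gets Azure settings next to the existing GCS ones, following the same pattern. The option and variable names below are assumptions modelled on the GCS path, not a verbatim copy of the operator's code.

```
package main

import "fmt"

// envVar is a minimal stand-in for the Kubernetes EnvVar type.
type envVar struct{ Name, Value string }

// walConfig holds the two options relevant to this sketch.
type walConfig struct {
	WALGSBucket         string // existing GCS option
	WALAZStorageAccount string // assumed Azure option, following the same pattern
}

// walEnv builds the WAL-related environment for the database pods.
func walEnv(cfg walConfig) []envVar {
	env := []envVar{}
	if cfg.WALGSBucket != "" {
		env = append(env, envVar{"WAL_GS_BUCKET", cfg.WALGSBucket})
	}
	if cfg.WALAZStorageAccount != "" {
		// the variable name is an assumption; WAL-G reads the storage account
		// from AZURE_STORAGE_ACCOUNT
		env = append(env, envVar{"AZURE_STORAGE_ACCOUNT", cfg.WALAZStorageAccount})
	}
	return env
}

func main() {
	fmt.Println(walEnv(walConfig{WALAZStorageAccount: "mybackupaccount"}))
}
```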
* refactor restarting instances and reduce listPods calls
* only add a parameter to the set if it differs from the effective config
* update e2e test for updating Postgres config
* patch config only once
* reorder e2e tests to follow alphabetical sorting
* e2e: finish waiting for pod failover only if all pods were replaced
* wait for sync in rolling update timeout test
* restart instances via the Patroni REST API instead of recreating pods (see the sketch after this list)
* Ignore differences in bootstrap.dcs when comparing SPILO_CONFIGURATION
* isBootstrapOnlyParameter is rewritten to use a blacklist instead of a whitelist
* added e2e test for max_connections decreasing
* documentation updated
* pending_restart flag added to the restart API call, wait for ttl seconds after restart
* refactoring: /restart returns an error if pending_restart is set to true but Patroni has no restart pending
* restart postgresql instances within pods only if a pod restart is not required
* patroni might need to restart postgresql after pods were recreated if values like max_connections decreased
* instancesRestart is not critical; try to restart pods if it is not successful
* cleanup
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
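A compact sketch of the central idea in the list above: instead of recreating the pod, ask Patroni to restart the PostgreSQL instance through its REST API, and only when a restart is actually pending. The port, path, and payload follow Patroni's documented /restart endpoint; the helper name and error handling are simplified for illustration.

```
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// restartIfPending asks the Patroni member running at podIP to restart
// PostgreSQL, but only if that member has a restart pending.
func restartIfPending(podIP string) error {
	body := bytes.NewBufferString(`{"restart_pending": true}`)
	url := fmt.Sprintf("http://%s:8008/restart", podIP)

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Post(url, "application/json", body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Patroni answers with a non-2xx status if the restart conditions are not
	// met, e.g. when no restart is pending on this member.
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("restart of %s declined: %s", podIP, resp.Status)
	}
	return nil
}

func main() {
	if err := restartIfPending("10.2.0.15"); err != nil {
		// Restarting via the API is not critical; the caller can still fall
		// back to recreating the pod.
		fmt.Println(err)
	}
}
```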
* Create cross-namespace secrets (see the sketch after this list)
* add test cases
* fixes
* Fixes
- include namespace in secret name only when namespace is provided
- use username.namespace as key to pgUsers only when namespace is
provided
- avoid conflicts during role creation in the database by checking the namespace
along with the username
* Update unit tests
* Fix test case
* Fixes
- update regular expression for usernames
- add a test to check for valid usernames
- create pg roles with the namespace (if any) appended to the role name
* add more test cases for valid usernames
* update docs
* fixes as per review comments
* update e2e
* fixes
* Add toggle to allow namespaced secrets
* update docs
* comment update
* Update e2e/tests/test_e2e.py
* few minor fixes
* fix unit tests
* fix e2e
* fix e2e attempt 2
* fix e2e
Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
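An illustrative sketch (the helper name and exact conventions are assumptions) of the naming scheme described in the list above: a username carrying a namespace prefix maps to a secret in that namespace and to a role that keeps the prefix, while a plain username stays in the cluster's own namespace.

```
package main

import (
	"fmt"
	"strings"
)

// secretNamespaceAndRole derives the namespace in which to create the secret
// and the role name to use, based on whether the username carries a
// namespace prefix.
func secretNamespaceAndRole(clusterNamespace, username string) (string, string) {
	if idx := strings.Index(username, "."); idx > 0 {
		// a namespace is provided as a prefix of the username
		return username[:idx], username
	}
	return clusterNamespace, username
}

func main() {
	fmt.Println(secretNamespaceAndRole("default", "appns.foo_user")) // appns appns.foo_user
	fmt.Println(secretNamespaceAndRole("default", "foo_user"))       // default foo_user
}
```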
default_pool_size, min_pool_size and reserve_pool_size are per-pool values,
that is, per (database, user) pair. This means it is wrong to calculate them
based on the global maxDBConn alone: when there are too many users, the
resulting configuration would make maxDBConn and min_pool_size conflict.
Adjust those values by the number of known pools.
This is not necessarily a perfect solution, because load could be distributed
unevenly between databases and users, in which case some of these parameters
will make rarely used pools the same size as the active ones.
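A small sketch of the adjustment described above: instead of deriving the per-pool defaults directly from the global connection limit, divide that limit by the number of known (database, user) pools. The function and parameter names are illustrative.

```
package main

import "fmt"

// perPoolSize spreads the global connection budget across the known pools.
func perPoolSize(maxDBConnections, knownPools int) int {
	if knownPools < 1 {
		knownPools = 1
	}
	size := maxDBConnections / knownPools
	if size < 1 {
		size = 1 // never configure an empty pool
	}
	return size
}

func main() {
	// With 60 allowed connections and 4 pools, each pool gets 15 connections,
	// so the per-pool values no longer conflict with the global limit.
	fmt.Println(perPoolSize(60, 4))
}
```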