* add the possibility to create a standby cluster that streams from a remote primary
* extending unit tests
* add more docs and e2e test
Co-authored-by: machine424 <ayoubmrini424@gmail.com>
* include tolerations in statefulset comparison
* provide alternative merge behavior of nodeSelectorTerms for node readiness label
* add config option to change affinity merge behavior
* reworked e2e tests around node affinity
* Add support for manual gs_wal_path in standby
* Remove separate standby version configuration
* Remove setting standby path via cluster/uid/version
Picking up the version doesn't work reliably without making changes to
Spilo. It's clearer to just specify the full S3/GS bucket path.
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
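For illustration, a minimal standby cluster manifest could look roughly like this, assuming the documented standby section with a full WAL bucket path (cluster name and bucket URL are placeholders):

    apiVersion: "acid.zalan.do/v1"
    kind: postgresql
    metadata:
      name: acid-standby-cluster
    spec:
      teamId: "acid"
      numberOfInstances: 1
      volume:
        size: 1Gi
      postgresql:
        version: "13"
      standby:
        # full S3 (or GS) bucket path pointing at the remote primary's WAL location
        s3_wal_path: "s3://your-wal-bucket/spilo/acid-primary-cluster/wal"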
* enhance docs on clone and restore
* add chapter about upgrading the operator
* add section for standby clusters
* Update docs/administrator.md
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
* Create cross namespace secrets
* add test cases
* fixes
* Fixes
- include namespace in secret name only when a namespace is provided
- use username.namespace as key to pgUsers only when a namespace is provided
- avoid conflicts in database role creation by checking the namespace along with the username
* Update unit tests
* Fix test case
* Fixes
- update regular expression for usernames
- add tests to check for valid usernames
- create pg roles with the namespace (if any) appended to the role name
* add more test cases for valid usernames
* update docs
* fixes as per review comments
* update e2e
* fixes
* Add toggle to allow namespaced secrets
* update docs
* comment update
* Update e2e/tests/test_e2e.py
* few minor fixes
* fix unit tests
* fix e2e
* fix e2e attempt 2
* fix e2e
Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
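A rough sketch of how cross-namespace secrets are meant to be used, assuming the toggle is exposed as enable_cross_namespace_secret in the operator configuration and that users are prefixed with a namespace in the cluster manifest (names are illustrative):

    # operator configuration (excerpt)
    configuration:
      kubernetes:
        enable_cross_namespace_secret: true

    # cluster manifest (excerpt): the secret for app_user is created in namespace "appspace"
    spec:
      users:
        appspace.app_user:
        - createdb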
* rename db roles that are removed from manifests
* extend PostgresTeam e2e test
* make suffix configurable and add deprecated field to pgUser struct
* deny LOGIN from deprecated roles
* update feature documentation
* Initial commit for new 1.6 release with Postgres 13 support.
* Updating maintainers, Go version, Codeowners.
* Use lazy upgrade image that contains pg13.
* fix typo for ownerReference
* fix clusterrole in helm chart
* reflect GCP logical backup in validation
* improve PostgresTeam docs
* change defaults for enable_pgversion_env_var and storage_resize_mode
* explain manual part of in-place upgrade
* remove gsoc docs
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* Adding node affinity support alongside node_readiness_label
* add documentation for node affinity
* add node affinity e2e test
* add unit test for node affinity
Co-authored-by: Steffen Pøhner Henriksen <str3sses@gmail.com>
Co-authored-by: Adrian Astley <adrian.astley@activision.com>
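As a sketch, the new nodeAffinity section of the cluster manifest is expected to carry a standard Kubernetes node affinity term (label key and values below are illustrative):

    spec:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: environment
                  operator: In
                  values:
                    - pci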
* Enable connection pooler for replica
* Refactor code for connection pooler
- Move all the relevant code to a separate file
- Move all the related tests to a separate file
- Avoid using cluster where not required
- Simplify the logic in sync and other methods
- Cleanup of duplicated or unused code
* Fix labels for the replica pods
* Update deleteConnectionPooler to include role
* Adding test cases and other changes
- Fix unit test and delete secret when required only
- Make sure we use empty fresh cluster for every test case.
* enhance e2e test
* Disable pooler in the complete manifest, as it is also the source for e2e tests and creates unnecessary pooler setups.
Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Jan Mussler <janm81@gmail.com>
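A minimal sketch of enabling the pooler for the replica service in the cluster manifest, assuming the flag is named enableReplicaConnectionPooler next to the existing enableConnectionPooler:

    spec:
      enableConnectionPooler: true          # pooler in front of the master service
      enableReplicaConnectionPooler: true   # pooler in front of the replica service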
* PostgresTeamCRD for advanced team management
* rework internal structure to be closer to CRD
* superusers instead of admin
* add more util functions and unit tests
* fix initHumanUsers
* check for superusers when creating normal teams
* polishing and fixes
* adding the essential missing pieces
* add documentation and update rbac
* reflect some feedback
* reflect more feedback
* fixing debug logs and raise QueueResyncPeriodTPR
* add two more flags to disable CRD and its superuser support
* fix chart
* update go modules
* move to client 1.19.3 and update codegen
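For illustration, a PostgresTeam resource could look roughly like this, assuming the documented spec with additionalSuperuserTeams, additionalTeams and additionalMembers maps (team and member names are placeholders):

    apiVersion: "acid.zalan.do/v1"
    kind: PostgresTeam
    metadata:
      name: custom-team-membership
    spec:
      additionalSuperuserTeams:
        acid:
        - "postgres_superusers"
      additionalTeams:
        acid: []
      additionalMembers:
        acid:
        - "elephant"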
Extend infrastructure roles handling
Postgres Operator uses infrastructure roles to provide access to a database for
external users, e.g. for monitoring purposes. Such infrastructure roles are
expected to be present in the form of k8s secrets with the following content:

    inrole1: some_encrypted_role
    password1: some_encrypted_password
    user1: some_encrypted_name
    inrole2: some_encrypted_role
    password2: some_encrypted_password
    user2: some_encrypted_name

The format of this content is only implied and not flexible enough. If the
format of a secret we want to use in the Operator cannot be changed, we have to
recreate the secret in exactly this format.
To address this, let's make the format of the secret content explicit. The idea
is to introduce a new configuration option for the Operator:

    infrastructure_roles_secrets:
    - secretname: k8s_secret_name
      userkey: some_encrypted_name
      passwordkey: some_encrypted_password
      rolekey: some_encrypted_role
    - secretname: k8s_secret_name
      userkey: some_encrypted_name
      passwordkey: some_encrypted_password
      rolekey: some_encrypted_role

This allows the Operator to use any available secrets to prepare infrastructure
roles. To keep it backward compatible, the old behaviour is simulated if the
new option is not present.
The new configuration option is intended to be used mainly from the CRD, but it
is also available via the Operator ConfigMap in a limited fashion. In the
ConfigMap only a single secret definition can be provided, as a string in the
following format:

    infrastructure_roles_secrets: |
        secretname: k8s_secret_name,
        userkey: some_encrypted_name,
        passwordkey: some_encrypted_password,
        rolekey: some_encrypted_role

Note that only one secret can be specified this way; multiple secrets are not
allowed.
Eventually the resulting list of infrastructure roles is the union of all
supported ways to describe them, namely the legacy
infrastructure_roles_secret_name and the new infrastructure_roles_secrets from
both the ConfigMap and the CRD (an illustrative secret is sketched below).
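For illustration, a secret referenced via the new option could look like the
following (name and keys are placeholders); the configured userkey, passwordkey
and rolekey simply point at arbitrary keys inside that secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: postgresql-infrastructure-roles-new
    data:
      monitoring_user: <base64-encoded name>
      monitoring_password: <base64-encoded password>
      monitoring_inrole: <base64-encoded role>

with the matching configuration entry:

    infrastructure_roles_secrets:
    - secretname: postgresql-infrastructure-roles-new
      userkey: monitoring_user
      passwordkey: monitoring_password
      rolekey: monitoring_inrole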
* PreparedDatabases with default role setup
* merge changes from master
* include preparedDatabases spec check when syncing databases
* create a default preparedDB if not specified
* add more default privileges for schemas
* use empty brackets block for undefined objects
* cover more default privilege scenarios and always define admin role
* add DefaultUsers flag
* support extensions and defaultUsers for preparedDatabases
* remove exact version in deployment manifest
* enable CRD validation for new field
* update generated code
* reflect code review
* fix typo in SQL command
* add documentation for preparedDatabases feature + minor changes
* some datname should stay
* add unit tests
* reflect some feedback
* init users for preparedDatabases also on update
* only change DB default privileges on creation
* add one more section in user docs
* one more sentence
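A short sketch of the preparedDatabases section in the cluster manifest, assuming the spec described in the feature docs (database, schema and extension names are illustrative):

    spec:
      preparedDatabases:
        foo:
          defaultUsers: true
          extensions:
            pg_partman: data
          schemas:
            data: {}
            history:
              defaultRoles: false
              defaultUsers: false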
* missing quotes in pooler configmap in values.yaml
* missing quotes in pooler configmap in values-crd.yaml
* docs clarifications
* helm3 --skip-crds
* Update docs/user.md
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* details moved to docs
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
Connection pooler support
Add support for a connection pooler. The idea is to make it generic enough to
be able to switch between different implementations (e.g. pgbouncer or
odyssey). The Operator needs to create a deployment with the pooler and a
service to access it.

For the connection pooler to work properly, the database needs to be prepared
by the operator, namely a separate user has to be created with access to an
installed lookup function (to fetch credentials for other users).

This setup is supposed to be used only by robot/application users. A connection
pooler implementation is usually CPU-bound, so it makes sense to create several
pooler pods with more emphasis on CPU resources. At the moment no special
affinity or tolerations are assigned to bring those pods closer to the
database. For availability purposes the minimal number of pooler pods is 2;
ideally they should be distributed across different nodes/AZs, but this is not
enforced by the operator itself. The available configuration is supposed to be
ergonomic and in the normal case requires only minimal changes to a manifest to
enable the connection pooler (see the sketch below). To get more control over
the configuration and functionality on the pooler side, one can customize the
corresponding Docker image.
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
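A minimal sketch of enabling the pooler from a cluster manifest, assuming the documented enableConnectionPooler flag and optional connectionPooler section (values are illustrative; defaults come from the operator configuration):

    spec:
      enableConnectionPooler: true
      connectionPooler:
        numberOfInstances: 2
        mode: "transaction"
        resources:
          requests:
            cpu: 500m
            memory: 100Mi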
* bump version to 1.4.0 + some polishing
* align version for UI chart
* update user docs to warn for standby replicas
* minor log message changes for RBAC resources
* add validation for PG resources and volume size
* check resource requests also on UPDATE and SYNC + update docs
* if cluster was running don't error on sync