This commit adds support for a not-yet-released Patroni feature that allows Postgres to keep running as primary in case of a failed leader lock update.
* Add Patroni 'failsafe_mode' local parameter (to enable it for a single PG cluster)
* Allow configuring Patroni 'failsafe_mode' parameter globally
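A minimal sketch of how the new parameter could surface in the operator's Go API types; the field name and json tag are assumptions based on the commit titles, not the released code:

```go
package v1

// Patroni holds the Patroni section of the cluster manifest (other fields
// omitted). A *bool makes the parameter tri-state, so an unset cluster-level
// value can fall back to the global operator setting.
type Patroni struct {
	// ... existing Patroni options ...
	FailsafeMode *bool `json:"failsafe_mode,omitempty"` // assumed name/tag
}
```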
* create streams only after postgres instances have been restarted
* checkAndSetGlobalPostgreSQLConfiguration returns if config has been patched
* restart can be pending even without a config patch
* provide event stream API
* check manifest settings for logical decoding before creating streams
* operator updates Postgres config and creates replication user
* name the FES resource after the Postgres cluster
* add delete case and fix updating streams + update unit test
* check if the FES CRD exists before syncing (see the sketch after this list)
* existing slot must use the same plugin
* make id and payload columns configurable
* sync streams only when they are defined in manifest
* introduce applicationId for separate stream CRDs
* add FES to RBAC in chart
* disable streams in chart
* switch to pgoutput plugin and let operator create publications
* reflect code review and additional refactoring
Co-authored-by: Paŭlo Ebermann <paul.ebermann@zalando.de>
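The CRD existence check above could look roughly like this; the CRD name "fabriceventstreams.zalando.org" is an assumption for illustration:

```go
package controller

import (
	"context"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fesCRDExists reports whether the FabricEventStream CRD is installed, so
// stream syncing can be skipped on clusters without it.
func fesCRDExists(ctx context.Context, client apiextclient.Interface) (bool, error) {
	_, err := client.ApiextensionsV1().CustomResourceDefinitions().
		Get(ctx, "fabriceventstreams.zalando.org", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```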
* password rotation in K8s secrets (sketched after this list)
* add db connection to syncSecrets
* add user retention
* add e2e test
* cleanup on username mismatch if rotation was switched off
* add unit test for syncSecrets + new updateSecret func
* init error arrays correctly
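The updateSecret rotation step, condensed into a sketch; the annotation key, date-suffix scheme, and password helper are assumptions for illustration:

```go
package cluster

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// generatePassword returns a random credential (illustrative helper).
func generatePassword() string {
	b := make([]byte, 16)
	_, _ = rand.Read(b)
	return hex.EncodeToString(b)
}

// updateSecret rotates the login user once the interval has elapsed. The new
// username carries a date suffix so the old credentials stay valid until
// user retention cleans them up.
func updateSecret(ctx context.Context, client kubernetes.Interface, secret *v1.Secret,
	baseUsername string, interval time.Duration, now time.Time) error {

	last, _ := time.Parse(time.RFC3339, secret.Annotations["zalando.org/last-rotation"])
	if now.Sub(last) < interval {
		return nil // rotation not due yet
	}
	if secret.Annotations == nil {
		secret.Annotations = map[string]string{}
	}
	secret.Data["username"] = []byte(baseUsername + now.Format("060102"))
	secret.Data["password"] = []byte(generatePassword())
	secret.Annotations["zalando.org/last-rotation"] = now.Format(time.RFC3339)

	_, err := client.CoreV1().Secrets(secret.Namespace).Update(ctx, secret, metav1.UpdateOptions{})
	return err
}
```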
* avoid nilPointer when syncing connectionPooler
* getInfrastructureRoles should return error
* fix unit tests and return type for getInfrastructureRoles
* restart master first in some edge cases
* the edge case is when the desired value is lower than the effective one (see the sketch after this list)
* wait after config patch and restart on sync whenever we see pending_restart
* convert options to int to check decrease and add unit test
* minor update to e2e tests
* wait only after a restart, not on every sync
* using spilo 14 e2e images
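The decrease check behind these commits, sketched; function and parameter names are assumptions:

```go
package cluster

import "strconv"

// restartMasterFirst reports whether the primary must restart before the
// replicas: for bounded settings such as max_connections a replica cannot
// run with a lower value than the primary, so a decrease flips the usual
// replicas-first order.
func restartMasterFirst(effectiveValue, desiredValue string) bool {
	effective, err1 := strconv.Atoi(effectiveValue)
	desired, err2 := strconv.Atoi(desiredValue)
	if err1 != nil || err2 != nil {
		return false // non-numeric options keep the default order
	}
	return desired < effective
}
```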
* replace statefulset on annotation diff (sketched after this list)
* remove update annotation function for statefulset
* add unit test for syncing annotations
* add inherited annotation to unit test
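A sketch of the annotation comparison driving the replacement; the one-way, desired-contained-in-live semantics is an assumption:

```go
package cluster

// statefulSetAnnotationsChanged reports whether any desired annotation
// (including inherited ones merged in beforehand) is missing from or differs
// in the live statefulset; on a diff the statefulset is replaced rather than
// patched in place.
func statefulSetAnnotationsChanged(live, desired map[string]string) bool {
	for k, v := range desired {
		if live[k] != v {
			return true
		}
	}
	return false
}
```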
* Enable connection pooler for replica
* Refactor code for connection pooler
- Move all the relevant code to a separate file
- Move all the related tests to a separate file
- Avoid using cluster where not required
- Simplify the logic in sync and other methods
- Cleanup of duplicated or unused code
* Fix labels for the replica pods
* Update deleteConnectionPooler to include role (sketched after this commit)
* Adding test cases and other changes
- Fix unit test and delete secret only when required
- Make sure we use a fresh, empty cluster for every test case.
* enhance e2e test
* Disable pooler in the complete manifest, as it is the source for e2e tests too and creates unnecessary pooler setups.
Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Jan Mussler <janm81@gmail.com>
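The role-aware deletion, sketched; the "-pooler"/"-pooler-repl" naming is an assumption:

```go
package cluster

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PostgresRole distinguishes the master pooler from the replica pooler.
type PostgresRole string

const (
	Master  PostgresRole = "master"
	Replica PostgresRole = "replica"
)

// poolerName derives the per-role object name from the cluster name.
func poolerName(clusterName string, role PostgresRole) string {
	if role == Replica {
		return clusterName + "-pooler-repl"
	}
	return clusterName + "-pooler"
}

// deleteConnectionPooler takes the role so master and replica pooler
// objects can be removed independently.
func deleteConnectionPooler(ctx context.Context, client kubernetes.Interface,
	namespace, clusterName string, role PostgresRole) error {

	name := poolerName(clusterName, role)
	if err := client.AppsV1().Deployments(namespace).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		return fmt.Errorf("could not delete pooler deployment %q: %v", name, err)
	}
	if err := client.CoreV1().Services(namespace).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		return fmt.Errorf("could not delete pooler service %q: %v", name, err)
	}
	return nil
}
```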
* change Clone attribute of PostgresSpec to *ConnectionPooler
* update go.mod from master
* fix TestConnectionPoolerSynchronization()
* Update pkg/apis/acid.zalan.do/v1/postgresql_type.go
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
Co-authored-by: Pavlo Golub <pavlo.golub@gmail.com>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* Add EventsGetter to KubeClient to enable sending K8s events
* Add eventRecorder to the controller, initialize it and hand it down to the cluster via its constructor so that it can emit events (see the sketch below)
* Add a first set of events which go to the postgresql custom resource the user interacts with, to provide some feedback
* Add the right to "create" events to the operator cluster role
* Adapt cluster tests to the new function signature with eventRecorder (via NewFakeRecorder)
* Get a proper reference before sending events to a resource
Co-authored-by: Christian Rohmann <christian.rohmann@inovex.de>
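In client-go terms the wiring looks roughly like this; the component name and the example message are illustrative:

```go
package controller

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

// newEventRecorder builds a recorder that writes events through the
// clientset's EventsGetter; the cluster receives it via its constructor.
func newEventRecorder(client kubernetes.Interface) record.EventRecorder {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(v1.NamespaceAll),
	})
	return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "postgres-operator"})
}

// With a proper reference to the postgresql resource, events are emitted as:
//   recorder.Event(ref, v1.EventTypeNormal, "Create", "Started creation of new cluster resources")
```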
It is possible to pass nil as one of the specs and an empty spec into
syncConnectionPooler. In this case it will perform a synchronization
because nil != empty struct. Avoid such cases and make it testable by
returning the list of synchronization reasons together with the final
error.
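A sketch of that fix, with the spec reduced to one field for illustration:

```go
package cluster

import "reflect"

// ConnectionPooler is trimmed to a single field here.
type ConnectionPooler struct {
	NumberOfInstances *int32
}

// needSyncConnectionPooler normalizes nil to the empty struct so the two no
// longer trigger a spurious sync, and returns the reasons so tests can
// assert on them in addition to the final error.
func needSyncConnectionPooler(oldSpec, newSpec *ConnectionPooler) (bool, []string) {
	if oldSpec == nil {
		oldSpec = &ConnectionPooler{}
	}
	if newSpec == nil {
		newSpec = &ConnectionPooler{}
	}
	var reasons []string
	if !reflect.DeepEqual(oldSpec, newSpec) {
		reasons = append(reasons, "connection pooler spec changed")
	}
	return len(reasons) > 0, reasons
}
```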
Connection pooler support
Add support for a connection pooler. The idea is to make it generic enough to
be able to switch between different implementations (e.g. pgbouncer or
odyssey). The operator needs to create a deployment with the pooler and a
service to access it.

For the connection pooler to work properly, a database needs to be prepared by
the operator, namely a separate user has to be created with access to an
installed lookup function (to fetch credentials for other users).

This setup is supposed to be used only by robot/application users. A
connection pooler implementation is usually more CPU bound, so it makes sense
to create several pods for the pooler with more emphasis on CPU resources. At
the moment no special affinity or toleration rules are assigned to bring those
pods closer to the database. For availability purposes the minimal number of
pooler pods is 2; ideally they should be distributed between different
nodes/AZs, but this is not enforced by the operator itself. The available
configuration is supposed to be ergonomic and in the normal case requires
minimal changes to a manifest to enable the pooler. To have more control over
the configuration and functionality on the pooler side, one can customize the
corresponding docker image. A sketch of the resulting deployment shape follows
below.
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
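A sketch of the deployment shape described above; the replica count, labels,
and resource numbers are illustrative, not the operator's actual defaults:

```go
package cluster

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// poolerDeployment builds a pooler deployment: at least two replicas for
// availability, CPU-leaning requests because poolers are typically CPU
// bound. Spreading pods across nodes/AZs is not enforced here.
func poolerDeployment(name, namespace, image string) *appsv1.Deployment {
	replicas := int32(2)
	labels := map[string]string{"connection-pooler": name}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "connection-pooler",
						Image: image, // e.g. a customized pgbouncer image
						Resources: v1.ResourceRequirements{
							Requests: v1.ResourceList{
								v1.ResourceCPU:    resource.MustParse("500m"),
								v1.ResourceMemory: resource.MustParse("100Mi"),
							},
						},
					}},
				},
			},
		},
	}
}
```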