* Enable connection pooler for the replica
* Refactor code for connection pooler
- Move all the relevant code to a separate file
- Move all the related tests to a separate file
- Avoid using cluster where not required
- Simplify the logic in sync and other methods
- Clean up duplicated or unused code
* Fix labels for the replica pods
* Update deleteConnectionPooler to include the role
* Add test cases and other changes
- Fix unit tests and delete the secret only when required
- Make sure we use a fresh, empty cluster for every test case
* Enhance e2e tests
* Disable pooler in the complete manifest, as it is also the source for e2e tests and creates unnecessary pooler setups
Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Jan Mussler <janm81@gmail.com>
* Check resize mode on update events
* Add unit test for PVC resizing
* Set resize mode to pvc in charts and manifests
* Add test for quantityToGigabyte (see the sketch after this list)
* Emit just one debug line when syncing volumes
* Extend test and update the log message
* Clean up after the test_multi_namespace test
* See the PR description for the complete list of changes
Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
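For reference, a minimal sketch of the tested helper and its test; quantityToGigabyte is a real helper in the operator, but the bodies below are an assumption, not the repo's verbatim code:

    package cluster

    import (
        "testing"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    // quantityToGigabyte converts a k8s resource quantity to whole gigabytes.
    func quantityToGigabyte(q resource.Quantity) int64 {
        return q.ScaledValue(0) / (1 << 30) // bytes -> GiB
    }

    func TestQuantityToGigabyte(t *testing.T) {
        if got := quantityToGigabyte(resource.MustParse("5Gi")); got != 5 {
            t.Errorf("expected 5, got %d", got)
        }
    }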
* Fix clone from GCS
* Pass the Google credentials env var when using a GCS bucket (see the sketch after this list)
* Remove the timezone requirement, as GCS returns timestamps in the local time of the region it is in
* Revert "Remove the timezone requirement, as GCS returns timestamps in the local time of the region it is in"
This reverts commit ac4eb350d9.
* Update GCS documentation
* Remove the sentence about logical backups
* Reword the pod environment ConfigMap section
* Fix documentation
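A minimal sketch of the credentials pass-through; GOOGLE_APPLICATION_CREDENTIALS is the standard Google SDK variable, but the function and parameter names here are illustrative assumptions, not the operator's exact wiring:

    package cluster

    import (
        "strings"

        corev1 "k8s.io/api/core/v1"
    )

    // appendGCSCredentials adds the standard Google SDK credentials variable
    // to the container env when the WAL bucket is a gs:// URL.
    func appendGCSCredentials(env []corev1.EnvVar, walBucket, credentialsPath string) []corev1.EnvVar {
        if strings.HasPrefix(walBucket, "gs://") && credentialsPath != "" {
            env = append(env, corev1.EnvVar{
                Name:  "GOOGLE_APPLICATION_CREDENTIALS",
                Value: credentialsPath, // path to the mounted service-account key
            })
        }
        return env
    }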
* PostgresTeamCRD for advanced team management
* Rework internal structure to be closer to the CRD
* Use superusers instead of admin
* Add more util functions and unit tests
* Fix initHumanUsers
* Check for superusers when creating normal teams
* Polishing and fixes
* Add the essential missing pieces
* Add documentation and update RBAC
* Reflect some feedback
* Reflect more feedback
* Fix debug logs and raise QueueResyncPeriodTPR
* Add two more flags to disable the CRD and its superuser support
* Fix chart
* Update Go modules
* Move to client-go 1.19.3 and update codegen
* Improve end-to-end tests, especially execution speed and error reporting, by implementing proper eventual asserts and timeouts (see the sketch after this list)
* Add documentation for running individual tests
* Fix string encoding in the Patroni state check and error case
* Print the config as a multi-line log entity, making it readable and grepable on startup
* Cosmetic changes to logs: remove quotes from diffs, move all object diffs to text diffs, enable padding for the log level
* Mount a script with tools for easy log access and watching objects
* Set a proper update strategy for the Postgres Operator deployment
* Move the long-running test to the end; move pooler tests to new functions
* Remove quotes from valid K8s identifiers
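The operator's e2e suite itself is written in Python; as an illustration of the "eventual assert" pattern, here is a minimal Go sketch (all names hypothetical) — poll a condition with a deadline instead of asserting once:

    package e2e

    import (
        "testing"
        "time"
    )

    // eventuallyTrue polls cond until it succeeds or the timeout elapses,
    // failing the test with the last observed error instead of hanging.
    func eventuallyTrue(t *testing.T, timeout, interval time.Duration, cond func() (bool, error)) {
        t.Helper()
        deadline := time.Now().Add(timeout)
        for {
            ok, err := cond()
            if ok {
                return
            }
            if time.Now().After(deadline) {
                t.Fatalf("condition not met within %v (last error: %v)", timeout, err)
            }
            time.Sleep(interval)
        }
    }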
* Lookup function installation
Due to reusing a previous database connection without closing it, the lookup
function installation process was skipping the first database in the list and
installing twice into the postgres db instead. To prevent that, make the
internal initDbConnWithName overwrite the connection object, and return that
object only from initDbConn, which is the public interface, so to speak.
Another solution would be to modify initDbConnWithName to return a connection
object and then create one temporary connection for each db. That sounds
feasible, but after one attempt it seems to require a few more changes around
it (init, close connections) without bringing anything significantly better to
the table. If some future change proves this wrong, do not hesitate to
refactor.
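A minimal sketch of the fix described above; the method and field names mirror the description, but the bodies are assumptions rather than the repo's verbatim code:

    package cluster

    import (
        "database/sql"
        "fmt"

        _ "github.com/lib/pq" // postgres driver
    )

    type Cluster struct {
        pgDb *sql.DB
    }

    // initDbConnWithName (re)opens a connection to the given database and
    // overwrites the cached connection instead of silently reusing it.
    func (c *Cluster) initDbConnWithName(dbname string) error {
        conn, err := sql.Open("postgres", fmt.Sprintf("dbname=%s", dbname))
        if err != nil {
            return err
        }
        if c.pgDb != nil {
            c.pgDb.Close() // do not leak the previous connection
        }
        c.pgDb = conn
        return nil
    }

    // initDbConn is the "public" entry point; it is the only place that
    // hands the connection object out to callers.
    func (c *Cluster) initDbConn() (*sql.DB, error) {
        if c.pgDb == nil {
            if err := c.initDbConnWithName("postgres"); err != nil {
                return nil, err
            }
        }
        return c.pgDb, nil
    }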
Change the retry strategy to a more insistent one, namely:
* retry on the next sync even if we failed to process one database and
install the pooler appliance.
* perform the whole installation unconditionally on update, since the
list of target databases could have changed.
And for the sake of making it even more robust, also log the case when the
operator decides to skip the installation.
Extend the connection pooler e2e test with verification that all dbs have the
required schema installed.
Right now there are no readiness probes defined for the connection pooler,
which means that after a pod restart there is a short time window (between
container start and the connection pooler listening on its socket) when the
service can send queries to the new pod but the connection will be refused.
The pooler container is rather lightweight and starts listening immediately,
so the window is small, but still. To fix this, add a readiness probe for the
TCP socket opened by the connection pooler.
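A minimal sketch of such a probe, assuming the client-go v0.19-era API (later releases renamed Handler to ProbeHandler); the port and timings are placeholders, the real values come from the operator's configuration:

    package cluster

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // poolerReadinessProbe marks the pooler container ready only once its
    // TCP port actually accepts connections.
    func poolerReadinessProbe(port int32) *corev1.Probe {
        return &corev1.Probe{
            Handler: corev1.Handler{
                TCPSocket: &corev1.TCPSocketAction{
                    Port: intstr.FromInt(int(port)),
                },
            },
            PeriodSeconds:    3, // probe every few seconds
            FailureThreshold: 3, // a few refusals in a row -> unready
        }
    }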
* Post-polishing for the latest PRs
* Update Travis and Go modules
* Make deprecation comments in structs less confusing
* Have separate pod priority classes for operator and database pods
* Increase VM size
* Cache dependencies
* Switch to the absolute cache path, as CDP does not support shell expansion
* Do not pull a non-existing image
* Manually install kind
* Add an alias for kind
* Use the full kind name
* One more name change
* Install kind with the other tools
* Add bind mounts instead of copying files
* Test fetching the runner image
* Build the image for Pierone
* Bump the client-go version to match master
* Bump the Go version
* Install a pinned version of kind before any test run
* Do not overwrite local ./manifests during test runs
* Update the docs
* Fix the kind name
* Update go.* files
* Fix dependencies
* Avoid unnecessary image uploads
* Properly install kind
* Change the network to host to make it reachable within the e2e runner; may not be the right solution, though
* Small changes; also use ENTRYPOINT instead of CMD
* Bump Spilo; load it before tests
* Undo an incorrect merge from master
Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
Co-authored-by: Jan Mußler <janm81@gmail.com>
* Update kind and use it with the old storage class
* Specify the standard storage class in the minimal manifest
* Remove the existing local storage class in kind
* Fix the pod distribution test
* Exclude the k8s master from the nodes of interest
* Allow using both infrastructure_roles options
* New default values for user and role definitions
* Use robot_zmon as the parent role
* Add operator log for debugging
* Use the right name for the old secret
* Only extract if rolesDefs is empty
* Set password1 in the old infrastructure role
* Fix the new infra role secret
* Choose a different role key for the new secret
* Set memberof everywhere
* Re-enable all tests
* Reflect feedback
* Remove the condition for rolesDefs
Extend infrastructure roles handling
The Postgres Operator uses infrastructure roles to provide access to a
database for external users, e.g. for monitoring purposes. Such infrastructure
roles are expected to be present in the form of k8s secrets with the following
content:

    inrole1: some_encrypted_role
    password1: some_encrypted_password
    user1: some_encrypted_name
    inrole2: some_encrypted_role
    password2: some_encrypted_password
    user2: some_encrypted_name

The format of this content is implied and not flexible enough. If we have no
possibility to change the format of a secret we want to use in the Operator,
we have to recreate it in this format.
To address this, let's make the format of the secret content explicit. The
idea is to introduce a new configuration option for the Operator:

    infrastructure_roles_secrets:
    - secretname: k8s_secret_name
      userkey: some_encrypted_name
      passwordkey: some_encrypted_password
      rolekey: some_encrypted_role
    - secretname: k8s_secret_name
      userkey: some_encrypted_name
      passwordkey: some_encrypted_password
      rolekey: some_encrypted_role

This allows the Operator to use any available secrets to prepare
infrastructure roles. To keep it backward compatible, the old behaviour is
simulated when the new option is not present.
The new configuration option is intended to be used mainly from the CRD, but
it is also available via the Operator ConfigMap in a limited fashion. In the
ConfigMap one can put only a single secret definition, as one string in the
following format:

    infrastructure_roles_secrets: |
        secretname: k8s_secret_name,
        userkey: some_encrypted_name,
        passwordkey: some_encrypted_password,
        rolekey: some_encrypted_role

Note that only one secret can be specified this way; multiple secrets are not
allowed.
Eventually the resulting list of infrastructure roles is the total sum of all
supported ways to describe it, namely the legacy
infrastructure_roles_secret_name plus infrastructure_roles_secrets from both
ConfigMap and CRD.
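A minimal sketch of how the merged list could be modeled; the type and function names are assumptions for illustration, not the repo's exact API:

    package config

    // InfrastructureRole points at one secret holding role credentials,
    // with explicit keys instead of the implied user1/password1/inrole1 set.
    type InfrastructureRole struct {
        SecretName  string `json:"secretname"`
        UserKey     string `json:"userkey"`
        PasswordKey string `json:"passwordkey"`
        RoleKey     string `json:"rolekey"`
    }

    // mergeInfrastructureRoles produces the total sum of all supported
    // definitions: the legacy secret name (old implied format), if set,
    // plus every explicit definition from ConfigMap and CRD.
    func mergeInfrastructureRoles(legacySecretName string, defs []InfrastructureRole) []InfrastructureRole {
        roles := []InfrastructureRole{}
        if legacySecretName != "" {
            // Simulate the old behaviour: keys are implied by convention.
            roles = append(roles, InfrastructureRole{SecretName: legacySecretName})
        }
        return append(roles, defs...)
    }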
* Change Clone attribute of PostgresSpec to *ConnectionPooler
* Update go.mod from master
* Fix TestConnectionPoolerSynchronization()
* Update pkg/apis/acid.zalan.do/v1/postgresql_type.go
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
Co-authored-by: Pavlo Golub <pavlo.golub@gmail.com>