* allow using both infrastructure_roles_options
* new default values for user and role definition
* use robot_zmon as parent role
* add operator log to debug
* right name for old secret
* only extract if rolesDefs is empty
* set password1 in old infrastructure role
* fix new infra role secret
* choose different role key for new secret
* set memberof everywhere
* reenable all tests
* reflect feedback
* remove condition for rolesDefs
Extend infrastructure roles handling
Postgres Operator uses infrastructure roles to provide access to a database for
external users e.g. for monitoring purposes. Such infrastructure roles are
expected to be present in the form of k8s secrets with the following content:
inrole1: some_encrypted_role
password1: some_encrypted_password
user1: some_encrypted_name
inrole2: some_encrypted_role
password2: some_encrypted_password
user2: some_encrypted_name
The format of this content is only implied and not flexible enough. If we cannot
change the format of an existing secret we want to use in the Operator, we have
to recreate it in this format.
To address this, let's make the format of the secret content explicit. The idea
is to introduce a new configuration option for the Operator:
infrastructure_roles_secrets:
- secretname: k8s_secret_name
userkey: some_encrypted_name
passwordkey: some_encrypted_password
rolekey: some_encrypted_role
- secretname: k8s_secret_name
userkey: some_encrypted_name
passwordkey: some_encrypted_password
rolekey: some_encrypted_role
This would allow the Operator to use any available secrets to prepare
infrastructure roles. To keep it backward compatible, the old behaviour is
simulated when the new option is not present.
The new configuration option is intended to be used mainly from the CRD, but
it's also available via the Operator ConfigMap in a limited fashion. In the
ConfigMap only a single secret definition can be provided, as a string in the
following format:
infrastructure_roles_secrets: |
secretname: k8s_secret_name,
userkey: some_encrypted_name,
passwordkey: some_encrypted_password,
rolekey: some_encrypted_role
Note that only one secret can be specified this way; multiple secrets are not
allowed.
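A minimal Go sketch of how such a one-line ConfigMap definition could be parsed into key/value pairs (the function name and the exact separator handling are assumptions for illustration, not the Operator's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// parseInfrastructureRolesSecret splits a single-secret definition of the form
// "secretname: v, userkey: v, passwordkey: v, rolekey: v" into a map.
// Illustrative sketch only, not the Operator's real parsing code.
func parseInfrastructureRolesSecret(def string) map[string]string {
	result := map[string]string{}
	for _, pair := range strings.Split(def, ",") {
		kv := strings.SplitN(pair, ":", 2)
		if len(kv) != 2 {
			continue // skip malformed fragments
		}
		result[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
	}
	return result
}

func main() {
	def := "secretname: k8s_secret_name, userkey: user, passwordkey: password, rolekey: inrole"
	fmt.Println(parseInfrastructureRolesSecret(def)["secretname"]) // k8s_secret_name
}
```

Since commas separate the entries, a format like this can only ever carry one secret definition per string, which is why the CRD remains the way to configure several secrets.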
Eventually the resulting list of infrastructure roles is the union of all
supported ways to describe them: the legacy infrastructure_roles_secret_name
option and infrastructure_roles_secrets from both the ConfigMap and the CRD.
* change Clone attribute of PostgresSpec to *ConnectionPooler
* update go.mod from master
* fix TestConnectionPoolerSynchronization()
* Update pkg/apis/acid.zalan.do/v1/postgresql_type.go
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
Co-authored-by: Pavlo Golub <pavlo.golub@gmail.com>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* Extend operator configuration to allow for a pod_environment_secret just like pod_environment_configmap
* Add all keys from PodEnvironmentSecrets as ENV vars (using SecretKeyRef to protect the value)
* Apply envVars from pod_environment_configmap and pod_environment_secrets before applying the global settings from the operator config. This allows them to be overridden by the user (via configmap / secret)
* Add ability to use a Secret for custom pod envVars (via pod_environment_secret) to admin documentation
* Add pod_environment_secret to Helm chart values.yaml
* Add unit tests for PodEnvironmentConfigMap and PodEnvironmentSecret - highly inspired by @kupson and his very similar PR #481
* Added new parameter pod_environment_secret to operatorconfig CRD and configmap examples
* Add pod_environment_secret to the operationconfiguration CRD
Co-authored-by: Christian Rohmann <christian.rohmann@inovex.de>
* delete secrets the right way
* make it one function
* continue deleting secrets even if one delete fails
Co-authored-by: Felix Kunde <felix.kunde@zalando.de>
* Try to resize pvc if resizing pv has failed
* added config option to switch between storage resize strategies
* changes according to requests
* Update pkg/controller/operator_config.go
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* enable_storage_resize documented
added examples to the default configuration and helm value files
* enable_storage_resize renamed to volume_resize_mode, off by default
* volume_resize_mode renamed to storage_resize_mode
* Update pkg/apis/acid.zalan.do/v1/crds.go
* pkg/cluster/volumes.go updated
* Update docs/reference/operator_parameters.md
* Update manifests/postgresql-operator-default-configuration.yaml
* Update pkg/controller/operator_config.go
* Update pkg/util/config/config.go
* Update charts/postgres-operator/values-crd.yaml
* Update charts/postgres-operator/values.yaml
* Update docs/reference/operator_parameters.md
* added logging if no changes required
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* try to emit error for missing team name in cluster name
* skip creation after new cluster object
* move SetStatus to k8sclient and emit event when skipping creation and rename to SetPostgresCRDStatus
Co-authored-by: Felix Kunde <felix.kunde@zalando.de>
* do not block rolling updates with lazy spilo update enabled
* treat initContainers like Spilo image
Co-authored-by: Felix Kunde <felix.kunde@zalando.de>
Currently, the deployment manifest specifies two labels: `name` and `team`.
This fixes the service not matching the deployed pods by choosing a correct selector.
* Support for WAL_GS_BUCKET and GOOGLE_APPLICATION_CREDENTIALS environment variables
* Fixed merge issue but also removed all changes to support macos.
* Updated test to new format
* Missed macos specific changes
* Added documentation and addressed comments
* Update docs/administrator.md
* Update docs/administrator.md
* Update e2e/run.sh
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
The deleteConnectionPooler function incorrectly checks that the delete API response is ResourceNotFound. The only consequence seems to be a confusing log message, but it is obviously wrong. Remove the negation, since getting ResourceNotFound as the error is the good case.
Co-authored-by: Christian Rohmann <christian.rohmann@inovex.de>
* Initial commit
* Corrections
- set the type of the new configuration parameter to be an array of strings
- propagate the annotations to statefulset at sync
* Enable regular expression matching
* Improvements
- handle rollingUpdate flag
- modularize code
- rename config parameter name
* fix merge error
* Pass annotations to connection pooler deployment
* update code-gen
* Add documentation and update manifests
* add e2e test and introduce option in configmap
* fix service annotations test
* Add unit test
* fix e2e tests
* better key lookup of annotations tests
* add debug message for annotation tests
* Fix typos
* minor fix for looping
* Handle update path and renaming
- handle the update path to update sts and connection pooler deployment.
This way there is no need to wait for sync
- rename the parameter to downscaler_annotations
- handle other review comments
* another try to fix python loops
* Avoid unnecessary update events
* Update manifests
* some final polishing
* fix cluster_test after polishing
Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* PreparedDatabases with default role setup
* merge changes from master
* include preparedDatabases spec check when syncing databases
* create a default preparedDB if not specified
* add more default privileges for schemas
* use empty brackets block for undefined objects
* cover more default privilege scenarios and always define admin role
* add DefaultUsers flag
* support extensions and defaultUsers for preparedDatabases
* remove exact version in deployment manifest
* enable CRD validation for new field
* update generated code
* reflect code review
* fix typo in SQL command
* add documentation for preparedDatabases feature + minor changes
* some datname should stay
* add unit tests
* reflect some feedback
* init users for preparedDatabases also on update
* only change DB default privileges on creation
* add one more section in user docs
* one more sentence
* initial implementation
* describe forcing the rolling upgrade
* make parameter name more descriptive
* add missing pieces
* address review
* address review
* fix bug in e2e tests
* fix cluster name label in e2e test
* raise test timeout
* load spilo test image
* use available spilo image
* delete replica pod for lazy update test
* fix e2e
* fix e2e with a vengeance
* lets wait for another 30m
* print pod name in error msg
* print pod name in error msg 2
* raise timeout, comment other tests
* subsequent updates of config
* add comma
* fix e2e test
* run unit tests before e2e
* remove conflicting dependency
* Revert "remove conflicting dependency"
This reverts commit 65fc09054b.
* improve cdp build
* dont run unit before e2e tests
* Revert "improve cdp build"
This reverts commit e2a8fa12aa.
Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* Added support for specifying a nodePort value for the postgres-operator-ui service when the type is NodePort
Co-authored-by: Marcus Portmann <marcusp@discovery.co.za>
* Add EventsGetter to KubeClient to enable sending K8s events
* Add eventRecorder to the controller, initialize it and hand it down to cluster via its constructor to enable it to emit events this way
* Add first set of events which then go to the postgresql custom resource the user interacts with to provide some feedback
* Add right to "create" events to operator cluster role
* Adapt cluster tests to new function signature with eventRecorder (via NewFakeRecorder)
* Get a proper reference before sending events to a resource
Co-authored-by: Christian Rohmann <christian.rohmann@inovex.de>
* adds a Get call to Patroni interface to fetch state of a Patroni member
* postpones re-creating pods if at least one replica is currently being created
Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* further compatibility with k8sUseConfigMaps - skip further endpoints related actions
* Update pkg/cluster/cluster.go
thanks!
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update pkg/cluster/cluster.go
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update pkg/cluster/cluster.go
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
It is possible to pass nil as one of the specs and an empty spec into
syncConnectionPooler. In this case it will perform a synchronization, because
nil != empty struct. Avoid such cases, and make this testable by returning the
list of synchronization reasons together with the final error.
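The pitfall is easy to reproduce: a nil pointer and a pointer to an empty struct are not equal under reflect.DeepEqual, so a naive comparison reports a needed sync. A hedged sketch of the normalization idea, with the spec struct simplified to one field (the real ConnectionPooler spec has more):

```go
package main

import (
	"fmt"
	"reflect"
)

// ConnectionPooler is a simplified stand-in for the real spec struct.
type ConnectionPooler struct {
	NumberOfInstances *int32
}

// needSync normalizes nil to an empty spec before comparing, so that
// nil vs. &ConnectionPooler{} no longer triggers a spurious sync.
func needSync(oldSpec, newSpec *ConnectionPooler) bool {
	if oldSpec == nil {
		oldSpec = &ConnectionPooler{}
	}
	if newSpec == nil {
		newSpec = &ConnectionPooler{}
	}
	return !reflect.DeepEqual(oldSpec, newSpec)
}

func main() {
	// Without normalization: nil pointer != pointer to empty struct.
	fmt.Println(reflect.DeepEqual((*ConnectionPooler)(nil), &ConnectionPooler{})) // false
	// With normalization: no sync is reported for nil vs. empty.
	fmt.Println(needSync(nil, &ConnectionPooler{})) // false
}
```

Returning the list of sync reasons from the top-level function then lets a unit test assert not just that no error occurred, but that no synchronization was attempted at all.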
* Allow additional Volumes to be mounted
* added TargetContainers option to determine whether an additional volume needs to be mounted or not
* fixed dependencies
* updated manifest additional volume example
* More validation
Check that there are no volume mount path clashes or "all" vs ["a", "b"]
mixtures. Also change the default behaviour to mount to "postgres"
container.
* More documentation / example about additional volumes
* Revert go.sum and go.mod from origin/master
* Declare additionalVolume specs in CRDs
* fixed k8sres after rebase
* resolve conflict
Co-authored-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Co-authored-by: Thierry <thierry@malt.com>
* Protected and system users can't be connection pooler users
It's not supported, nor is it a best practice. Also fix a potential nil
pointer access. For protected users this makes sense by the very intent of
protecting these users (e.g. from being overridden or used for something other
than intended). For system users the reason is the same as for the superuser:
it concerns the replication user, which is under Patroni control.
This is implemented on both levels, operator config and postgresql manifest.
For the latter we just use the default name in this case, assuming the operator
config is always correct. For the former, since it's a serious
misconfiguration, the operator panics.
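A sketch of the two-level rule described above; the function name, the default user name, and the map-based lookups are assumptions for illustration, not the Operator's actual API:

```go
package main

import "fmt"

// defaultPoolerUser is an assumed default name for illustration.
const defaultPoolerUser = "pooler"

// poolerUserForManifest implements the manifest-level behaviour sketched
// above: if the requested pooler user is protected or a system user, fall
// back to the default name. (On the operator-config level the real code
// treats this as a serious misconfiguration and panics instead.)
func poolerUserForManifest(user string, protected, system map[string]bool) string {
	if protected[user] || system[user] {
		return defaultPoolerUser
	}
	return user
}

func main() {
	system := map[string]bool{"postgres": true, "standby": true}
	protected := map[string]bool{"admin": true}
	fmt.Println(poolerUserForManifest("standby", protected, system)) // pooler
	fmt.Println(poolerUserForManifest("app_pooler", protected, system))
}
```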
* missing quotes in pooler configmap in values.yaml
* missing quotes in pooler configmap in values-crd.yaml
* docs clarifications
* helm3 --skip-crds
* Update docs/user.md
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* details moved in docs
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>