Commit Graph

20 Commits

Felix Kunde 43163cf83b
allow using both infrastructure_roles_options (#1090)
* allow using both infrastructure_roles_options

* new default values for user and role definition

* use robot_zmon as parent role

* add operator log to debug

* right name for old secret

* only extract if rolesDefs is empty

* set password1 in old infrastructure role

* fix new infra role secret

* choose different role key for new secret

* set memberof everywhere

* reenable all tests

* reflect feedback

* remove condition for rolesDefs
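
As a hedged illustration of using both options at once (secret and key names
here are hypothetical, not taken from the PR):

    # operator configuration (illustrative): legacy option and new option side by side
    infrastructure_roles_secret_name: postgresql-infrastructure-roles
    infrastructure_roles_secrets:
    - secretname: postgresql-infrastructure-roles-new
      userkey: user
      passwordkey: password
      rolekey: inrole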
2020-08-10 15:08:03 +02:00
Dmitry Dolgov 7cf2fae6df
[WIP] Extend infrastructure roles handling (#1064)
Extend infrastructure roles handling

Postgres Operator uses infrastructure roles to provide access to a database for
external users e.g. for monitoring purposes. Such infrastructure roles are
expected to be present in the form of k8s secrets with the following content:

    inrole1: some_encrypted_role
    password1: some_encrypted_password
    user1: some_encrypted_name

    inrole2: some_encrypted_role
    password2: some_encrypted_password
    user2: some_encrypted_name

The format of this content is only implied and not flexible enough. If we
cannot change the format of a secret we want to use in the Operator, we have
to recreate it in this format.

To address this, let's make the format of the secret content explicit. The
idea is to introduce a new configuration option for the Operator:

    infrastructure_roles_secrets:
    - secretname: k8s_secret_name
      userkey: some_encrypted_name
      passwordkey: some_encrypted_password
      rolekey: some_encrypted_role

    - secretname: k8s_secret_name
      userkey: some_encrypted_name
      passwordkey: some_encrypted_password
      rolekey: some_encrypted_role

This would allow the Operator to use any available secrets to prepare
infrastructure roles. To keep it backward compatible, the old behaviour is
simulated if the new option is not present.

The new configuration option is intended to be used mainly from the CRD, but
it's also available via the Operator ConfigMap in a limited fashion. In the
ConfigMap one can only put a single secret definition as a string in the
following format:

    infrastructure_roles_secrets: |
        secretname: k8s_secret_name,
        userkey: some_encrypted_name,
        passwordkey: some_encrypted_password,
        rolekey: some_encrypted_role

Note that only one secret can be specified this way; multiple secrets are not
allowed.

Eventually the resulting list of infrastructure roles is the union of all
supported ways to describe them: the legacy infrastructure_roles_secret_name
plus infrastructure_roles_secrets from both the ConfigMap and the CRD.
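
As an illustration, a minimal sketch of a Secret the new option could point
to, together with the matching configuration entry (all names here are
hypothetical, not taken from the PR):

    # hypothetical Secret holding one infrastructure role
    apiVersion: v1
    kind: Secret
    metadata:
      name: postgresql-infrastructure-roles-new
    type: Opaque
    stringData:
      monitoring_user: zmon_monitoring
      monitoring_password: changeme
      monitoring_inrole: robot_zmon

    # matching operator configuration entry (illustrative)
    infrastructure_roles_secrets:
    - secretname: postgresql-infrastructure-roles-new
      userkey: monitoring_user
      passwordkey: monitoring_password
      rolekey: monitoring_inrole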
2020-08-05 14:18:56 +02:00
Rafia Sabih d52296c323
Propagate annotations to the StatefulSet (#932)
* Initial commit

* Corrections

- set the type of the new configuration parameter to be an array of strings
- propagate the annotations to the statefulset at sync

* Enable regular expression matching

* Improvements

- handle rollingUpdate flag
- modularize code
- rename the config parameter

* fix merge error

* Pass annotations to connection pooler deployment

* update code-gen

* Add documentation and update manifests

* add e2e test and introduce option in configmap

* fix service annotations test

* Add unit test

* fix e2e tests

* better key lookup of annotations tests

* add debug message for annotation tests

* Fix typos

* minor fix for looping

* Handle update path and renaming

- handle the update path to update the sts and the connection pooler
  deployment; this way there is no need to wait for sync
- rename the parameter to downscaler_annotations (see the config sketch below)
- handle other review comments

* another try to fix python loops

* Avoid unnecessary update events

* Update manifests

* some final polishing

* fix cluster_test after polishing
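
A hedged sketch of how the downscaler_annotations option might be used; the
keys and values below are illustrative, not taken from the PR:

    # operator ConfigMap (illustrative): annotation keys/patterns to propagate
    downscaler_annotations: "deployment-time,downscaler/*"

    # postgresql manifest (illustrative): matching annotations are propagated
    # to the StatefulSet and connection pooler deployment
    apiVersion: acid.zalan.do/v1
    kind: postgresql
    metadata:
      name: acid-minimal-cluster
      annotations:
        downscaler/downtime-replicas: "0"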

Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2020-05-04 14:46:56 +02:00
Sergey Dudoladov cc635a02e3
Lazy upgrade of the Spilo image (#859)
* initial implementation

* describe forcing the rolling upgrade

* make parameter name more descriptive

* add missing pieces

* address review

* address review

* fix bug in e2e tests

* fix cluster name label in e2e test

* raise test timeout

* load spilo test image

* use available spilo image

* delete replica pod for lazy update test

* fix e2e

* fix e2e with a vengeance

* let's wait for another 30m

* print pod name in error msg

* print pod name in error msg 2

* raise timeout, comment other tests

* subsequent updates of config

* add comma

* fix e2e test

* run unit tests before e2e

* remove conflicting dependency

* Revert "remove conflicting dependency"

This reverts commit 65fc09054b.

* improve cdp build

* don't run unit tests before e2e tests

* Revert "improve cdp build"

This reverts commit e2a8fa12aa.
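
A hedged sketch of the lazy-upgrade toggle as it might appear in the operator
ConfigMap (option name assumed from the operator docs, image tag illustrative):

    # operator ConfigMap (illustrative): roll out a new Spilo image lazily,
    # i.e. only when pods are restarted for other reasons
    docker_image: registry.opensource.zalan.do/acid/spilo-12:1.6-p3
    enable_lazy_spilo_upgrade: "true"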

Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2020-04-29 10:07:14 +02:00
Sergey Dudoladov 3c91bdeffa
Re-create pods only if all replicas are running (#903)
* adds a Get call to Patroni interface to fetch state of a Patroni member
* postpones re-creating pods if at least one replica is currently being created  

Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2020-04-20 15:14:11 +02:00
Felix Kunde b43b22dfcc
Call me pooler, not pool (#883)
* rename pooler parts and add example to manifest
* update codegen
* fix manifest and add more details to docs
* reflect renaming also in e2e tests
2020-04-01 10:34:03 +02:00
Felix Kunde 64d816c556
add short sleep before redistributing pods (#891) 2020-03-31 11:47:39 +02:00
Dmitry Dolgov 9dfa433363
Connection pooler (#799)
Connection pooler support

Add support for a connection pooler. The idea is to make it generic enough to
be able to switch between different implementations (e.g. pgbouncer or
odyssey). The Operator needs to create a deployment with the pooler and a
service to access it.

For the connection pooler to work properly, the database needs to be prepared
by the operator: a separate user has to be created with access to an installed
lookup function (used to fetch credentials for other users).

This setup is supposed to be used only by robot/application users. Usually a
connection pooler implementation is more CPU bound, so it makes sense to
create several pooler pods with more emphasis on CPU resources. At the moment
no special affinity or tolerations are assigned to bring those pods closer to
the database. For availability purposes the minimal number of connection
pooler pods is 2; ideally they should be distributed across different
nodes/AZs, but this is not enforced by the operator itself. The available
configuration is supposed to be ergonomic and in the normal case requires
minimal changes to a manifest to enable the connection pooler. To have more
control over the configuration and functionality on the pooler side, one can
customize the corresponding docker image.
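
A minimal sketch of how enabling the pooler in a cluster manifest might look;
the field names follow the operator's later documentation and should be
treated as illustrative rather than as this PR's exact API:

    apiVersion: acid.zalan.do/v1
    kind: postgresql
    metadata:
      name: acid-minimal-cluster
    spec:
      enableConnectionPooler: true
      connectionPooler:
        numberOfInstances: 2   # minimum of 2 pods for availability
        mode: transaction      # pooling mode passed to the pooler image
        resources:
          requests:
            cpu: 500m
            memory: 100Mi
          limits:
            cpu: "1"
            memory: 100Mi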

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2020-03-25 12:57:26 +01:00
Felix Kunde 07c5da35e3
fix minor issues in docs and manifests (#866)
* fix minor issues in docs and manifests
* double retry_timeout_sec
2020-03-18 15:02:13 +01:00
Felix Kunde cde61f3f0b
e2e: wait for pods after disabling anti affinity (#862) 2020-03-11 14:08:54 +01:00
Felix Kunde ae2a38d62a
add e2e test for node readiness label (#846)
* add e2e test for node readiness label
* refactoring and order tests alphabetically
* always wait for replica after failover
2020-03-06 12:55:34 +01:00
Felix Kunde 742d7334a1
use cluster-name as default label everywhere (#782)
* use cluster-name as default label everywhere
* fix e2e test
2020-02-19 15:01:01 +01:00
Felix Kunde 3b10dc645d
patch/update services on type change (#824)
* use Update when disabling LoadBalancer + added e2e test
2020-02-13 16:24:15 +01:00
Jonathan Juares Beber 744c71d16b
Allow services update when changing annotations (#818)
The current implementation of `pkg.util.k8sutil.SameService` considers only
changes to the default service annotations created by the operator. Custom
annotations are not compared and consequently not applied after the first
service creation.

This commit introduces a complete annotation comparison between the current
service created by the operator and the new one generated from the configs.
It also adds tests for the above-mentioned function.
2020-02-13 10:55:30 +01:00
Felix Kunde be6c8cd573
specify cluster in e2e taint test (#823) 2020-02-10 16:41:51 +01:00
Jonathan Juares Beber ba60e15d07
Add ServiceAnnotations cluster config (#803)
The [operator parameters][1] already support the `custom_service_annotations`
config. With this parameter it is possible to define custom annotations that
are applied to the services created by the operator. However,
`custom_service_annotations`, like all the other [operator parameters][1], is
defined on the operator level and does not allow customization on the cluster
level. A cluster may require different service annotations, for example to
set different cloud load balancer timeouts, different ingress annotations,
and/or enable a more customizable environment.

This commit introduces a new cluster-level parameter called
`serviceAnnotations`, responsible for defining custom annotations just for
the services the operator creates for that specific cluster. It allows mixing
`custom_service_annotations` and `serviceAnnotations`, where the latter takes
priority. In order to allow custom service annotations to be used on services
without LoadBalancers (for example, service mesh annotations), both
`custom_service_annotations` and `serviceAnnotations` are applied
independently of the load-balancing configuration. For backward
compatibility, `custom_service_annotations` is still documented under
[Load balancer related options][2]. The two default annotations used with
LoadBalancer services, `external-dns.alpha.kubernetes.io/hostname` and
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`, are
still defined by the operator.
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` can be
overridden by `custom_service_annotations` or `serviceAnnotations`, allowing
a more customizable environment.
`external-dns.alpha.kubernetes.io/hostname` cannot be overridden, since there
is no differentiation between custom service annotations for replicas and
masters.

It also updates the documentation and adds the necessary unit and e2e tests
for the above-described feature.

[1]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md
[2]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md#load-balancer-related-options
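
A hedged sketch of how the two levels might be combined; annotation keys and
values are purely illustrative:

    # operator ConfigMap (illustrative): operator-wide defaults
    custom_service_annotations: "service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:600"

    # postgresql manifest (illustrative): cluster-level annotations win on conflict
    apiVersion: acid.zalan.do/v1
    kind: postgresql
    metadata:
      name: acid-minimal-cluster
    spec:
      serviceAnnotations:
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
        traffic.sidecar.istio.io/excludeInboundPorts: "5432"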
2020-02-10 12:03:25 +01:00
Felix Kunde 1f0312a014
make minimum limits boundaries configurable (#808)
* make minimum limits boundaries configurable
* add e2e test
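
A hedged sketch of what the configurable minimum boundaries might look like
in the operator ConfigMap (option names assumed, values illustrative):

    # resource limits requested below these boundaries are raised by the operator
    min_cpu_limit: 250m
    min_memory_limit: 250Mi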
2020-02-03 11:43:18 +01:00
Felix Kunde 107334fe71
Add global option to enable/disable init containers and sidecars (#478)
* Add global option to enable/disable init containers and sidecars
* update dependencies
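
A hedged sketch of the two global toggles as they might appear in the
operator ConfigMap (option names assumed, values illustrative):

    # globally disable init containers and sidecars defined in cluster manifests
    enable_init_containers: "false"
    enable_sidecars: "false"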
2019-12-10 15:45:54 +01:00
Felix Kunde cd350a4bc1
make run.sh executable from within e2e (#619) 2019-07-24 15:07:32 +02:00
Sergey Dudoladov 69af2d60e5
Implement runner for e2e tests (#548)
* implement a runner for e2e tests

* move e2e tests to a Docker container

* integrate e2e tests into build pipelines

* add tests for multi-namespace support and logical backup jobs

* @FxKu implement the first e2e test for failovers
2019-06-05 17:07:27 +02:00