Commit Graph

162 Commits

Author SHA1 Message Date
ReSearchITEng 5014eebfb2
when kubernetes_use_configmaps -> skip further endpoints actions even delete (#921)
* further compatibility with k8sUseConfigMaps - skip further endpoints-related actions

* Update pkg/cluster/cluster.go

thanks!

Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>

* Update pkg/cluster/cluster.go

Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>

* Update pkg/cluster/cluster.go

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2020-04-16 16:47:59 +02:00
Dmitry Dolgov 6a689cdc1c
Prevent empty syncs (#922)
There is a possibility to pass nil as one of the specs and an empty spec
into syncConnectionPooler. In this case it will perform a synchronization
because nil != empty struct. Avoid such cases and make them testable by
returning a list of synchronization reasons together with the final
error.
2020-04-16 15:14:31 +02:00
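A minimal Go sketch of the nil-vs-empty guard described in the commit above; the ConnectionPoolerSpec type and the helper name are illustrative, not the operator's actual code:

```go
package main

import (
	"fmt"
	"reflect"
)

// ConnectionPoolerSpec is a stand-in for the pooler section of the manifest.
type ConnectionPoolerSpec struct {
	NumberOfInstances *int32
	User              string
}

// needSyncConnectionPooler returns the reasons a sync would be performed.
// A nil spec and an empty spec are treated as equivalent, so no sync is
// triggered just because one side is nil and the other is a zero value.
func needSyncConnectionPooler(oldSpec, newSpec *ConnectionPoolerSpec) []string {
	if oldSpec == nil {
		oldSpec = &ConnectionPoolerSpec{}
	}
	if newSpec == nil {
		newSpec = &ConnectionPoolerSpec{}
	}
	if reflect.DeepEqual(oldSpec, newSpec) {
		return nil
	}
	return []string{"connection pooler spec has changed"}
}

func main() {
	// nil vs. empty spec: no reasons, hence no sync.
	fmt.Println(needSyncConnectionPooler(nil, &ConnectionPoolerSpec{}))
}
```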
Dmitry Dolgov a1f2bd05b9
Prevent superuser from being a connection pool user (#906)
* Protected and system users can't be a connection pool user

It's not supported, nor is it a best practice. Also fix a potential null
pointer access. For protected users this follows the intent of protecting these
users (e.g. from being overridden or used for something other than intended). For
system users the reason is the same as for the superuser: it concerns the
replication user, which is under Patroni's control.

This is implemented on both levels, operator config and postgresql manifest.
For the latter we just use the default name in this case, assuming that the
operator config is always correct. For the former, since it's a serious
misconfiguration, the operator panics.
2020-04-09 09:21:45 +02:00
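A hedged sketch of the two-level handling described above: a collision at the operator-config level panics, while a collision in the postgresql manifest falls back to a default name. The user lists and function names here are made up for illustration:

```go
package main

import "fmt"

// systemUsers and protectedUsers mimic the operator-level user lists;
// the entries are illustrative, not the operator's actual configuration.
var systemUsers = map[string]bool{"postgres": true, "standby": true}
var protectedUsers = map[string]bool{"admin": true}

// poolerUserFromConfig panics, because a pooler user colliding with a system
// or protected user in the operator configuration is a serious misconfiguration.
func poolerUserFromConfig(user string) string {
	if systemUsers[user] || protectedUsers[user] {
		panic(fmt.Sprintf("connection pooler user %q collides with a system/protected user", user))
	}
	return user
}

// poolerUserFromManifest falls back to a default name instead, assuming the
// operator configuration itself has already been validated.
func poolerUserFromManifest(user, defaultUser string) string {
	if user == "" || systemUsers[user] || protectedUsers[user] {
		return defaultUser
	}
	return user
}

func main() {
	fmt.Println(poolerUserFromManifest("postgres", "pooler")) // pooler
}
```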
Felix Kunde b43b22dfcc
Call me pooler, not pool (#883)
* rename pooler parts and add example to manifest
* update codegen
* fix manifest and add more details to docs
* reflect renaming also in e2e tests
2020-04-01 10:34:03 +02:00
Felix Kunde 66f2cda87f
Move operator to go 1.14 (#882)
* update go modules march 2020
* update to GO 1.14
* reflect k8s client API changes
2020-03-30 15:50:17 +02:00
Dmitry Dolgov 9dfa433363
Connection pooler (#799)
Connection pooler support

Add support for a connection pooler. The idea is to make it generic enough to
be able to switch between different implementations (e.g. pgbouncer or
odyssey). The operator needs to create a deployment with the pooler and a
service to access it.

For the connection pooler to work properly, the database needs to be prepared
by the operator: a separate user has to be created with access to an
installed lookup function (to fetch credentials for other users).

This setup is supposed to be used only by robot/application users. Usually a
connection pooler implementation is more CPU-bound, so it makes sense to create
several pooler pods with more emphasis on CPU resources. At the
moment there are no special affinity or tolerations assigned to bring those
pods closer to the database. For availability purposes the minimum number of
connection pooler pods is 2; ideally they should be distributed between
different nodes/AZs, but this is not enforced by the operator itself. The
available configuration is supposed to be ergonomic and in the normal case
requires minimal changes to a manifest to enable connection pooling. To have
more control over the configuration and functionality on the pooler side, one
can customize the corresponding docker image.

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
2020-03-25 12:57:26 +01:00
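A small illustrative sketch of the availability floor mentioned above (at least 2 pooler pods); the function and field names are hypothetical:

```go
package main

import "fmt"

const minPoolerInstances = 2 // availability floor described in the commit message

// effectivePoolerInstances clamps the requested replica count for the pooler
// deployment to the availability minimum; nil means "use the default".
func effectivePoolerInstances(requested *int32, defaultInstances int32) int32 {
	n := defaultInstances
	if requested != nil {
		n = *requested
	}
	if n < minPoolerInstances {
		n = minPoolerInstances
	}
	return n
}

func main() {
	one := int32(1)
	fmt.Println(effectivePoolerInstances(&one, 2)) // 2
}
```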
Felix Kunde b66734a0a9
omit PgVersion diff on sync (#860)
* use PostgresParam.PgVersion everywhere
* on sync compare pgVersion with SpiloConfiguration
* update getNewPgVersion and added tests
2020-03-13 11:48:19 +01:00
Felix Kunde 702a194c41
switch to rbac/v1 (#829)
* switch to rbac/v1
2020-02-17 11:25:07 +01:00
Felix Kunde 1f0312a014
make minimum limits boundaries configurable (#808)
* make minimum limits boundaries configurable
* add e2e test
2020-02-03 11:43:18 +01:00
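A sketch of how a configurable minimum boundary could be compared against a manifest limit using the standard Kubernetes quantity notation; the helper is illustrative, not the operator's validation code:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// belowMinimum compares a requested limit against a configurable minimum
// boundary; both values use the standard Kubernetes quantity notation.
func belowMinimum(limit, minimum string) (bool, error) {
	l, err := resource.ParseQuantity(limit)
	if err != nil {
		return false, err
	}
	m, err := resource.ParseQuantity(minimum)
	if err != nil {
		return false, err
	}
	return l.Cmp(m) < 0, nil
}

func main() {
	// "250m" stands in for a min_cpu_limit-style boundary.
	tooLow, _ := belowMinimum("100m", "250m")
	fmt.Println(tooLow) // true
}
```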
Felix Kunde cd110aabf4
Enforce minimum cpu and memory limits (#731)
* add validation for PG resources and volume size
* check resource requests also on UPDATE and SYNC + update docs
* if cluster was running don't error on sync
2019-12-12 16:43:55 +01:00
Felix Kunde f0e29060b1
move StatefulSet to apps/v1 (#675) 2019-09-30 16:42:04 +02:00
Felix Kunde 4a863d2280 Avoid orphaned objects on delete (#654)
* Make setSpec function work correctly when updating cluster status fails
2019-08-27 12:54:35 +02:00
Rafia Sabih 540d58d5bd
Adding the support for standby cluster
This sets up a continuous WAL-streaming cluster by adding the corresponding section to the postgres manifest. Instead of having a full-fledged standby cluster as in Patroni, here we use only the WAL path of the source cluster and stream from there.

Since the standby cluster streams from the master, it does not need to create or use databases of its own; hence, it bypasses the creation of users and databases.

A separate sample manifest has been added to set up a standby cluster.
2019-06-21 10:11:39 +02:00
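A rough Go sketch of the manifest section and the bypass decision described above; the struct and field names (e.g. s3_wal_path) are illustrative assumptions, not a definitive reproduction of the operator's types:

```go
package main

import "fmt"

// StandbyDescription is a sketch of the manifest section that turns a cluster
// into a standby: only the WAL path of the source cluster is needed, not a
// full Patroni standby_cluster definition.
type StandbyDescription struct {
	S3WALPath string `json:"s3_wal_path"`
}

// isStandby decides whether user/database creation should be skipped.
func isStandby(standby *StandbyDescription) bool {
	return standby != nil && standby.S3WALPath != ""
}

func main() {
	standby := &StandbyDescription{S3WALPath: "s3://example-bucket/spilo/source-cluster/wal/"}
	fmt.Println(isStandby(standby)) // true
}
```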
Felix Kunde 6918394562
Add PDB configuration toggle (#583)
* Don't create an impossible disruption budget for smaller clusters.
* sync PDB also on update
2019-06-18 10:48:21 +02:00
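A minimal sketch of the toggle's intent, assuming minAvailable is simply relaxed to 0 for single-instance clusters; the exact values the operator uses may differ:

```go
package main

import "fmt"

// pdbMinAvailable sketches the idea behind the toggle: for clusters with a
// single instance, a budget requiring one available pod would make the pod
// impossible to evict, so the budget is relaxed instead.
func pdbMinAvailable(numberOfInstances int32, pdbEnabled bool) int32 {
	if !pdbEnabled || numberOfInstances <= 1 {
		return 0
	}
	return 1
}

func main() {
	fmt.Println(pdbMinAvailable(1, true)) // 0 - no impossible budget
	fmt.Println(pdbMinAvailable(3, true)) // 1
}
```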
Rafia Sabih 2886027516
Some typos/spelling mistakes fix (#580)
Harmless typo fixes.
2019-06-06 14:20:15 +02:00
Stephane T 1f4267eb05 fix: remove headless service config when deleting cluster (#567)
see: https://github.com/zalando/postgres-operator/issues/566

Signed-off-by: Stephane Tang <hi@stang.sh>
2019-05-21 13:49:34 +02:00
Sergey Dudoladov f3e1e80aaf
Add logical backup (#442)
* Add k8s cron job to spawn logical backups

* Minor doc updates
2019-05-16 15:52:01 +02:00
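An illustrative stand-in for the cron job the operator spawns; the naming scheme, schedule, and image below are placeholders, not the operator's actual defaults:

```go
package main

import "fmt"

// logicalBackupCronJob is a minimal stand-in for the k8s CronJob the operator
// creates; only the fields relevant to the idea are shown.
type logicalBackupCronJob struct {
	Name     string
	Schedule string // standard cron syntax
	Image    string // the logical backup docker image
}

func newLogicalBackupCronJob(clusterName, schedule, image string) logicalBackupCronJob {
	return logicalBackupCronJob{
		Name:     "logical-backup-" + clusterName, // naming scheme is an assumption
		Schedule: schedule,
		Image:    image,
	}
}

func main() {
	job := newLogicalBackupCronJob("acid-minimal-cluster", "30 00 * * *",
		"registry.example.org/logical-backup:latest")
	fmt.Printf("%+v\n", job)
}
```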
Felix Kunde 0fbfbb23bb
Use /status subresource instead of plain manifest field (#534)
* turns PostgresStatus type into a struct with field PostgresClusterStatus
* setStatus patch target is now /status subresource
* unmarshalling PostgresStatus takes care of previous status field convention
* new simple bool functions status.Running(), status.Creating()
2019-05-07 12:01:45 +02:00
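A self-contained sketch of the backward-compatible unmarshalling described above (old plain-string status vs. the new struct with PostgresClusterStatus); the status string values are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PostgresStatus wraps the cluster status: older manifests stored it as a plain
// string, newer ones as an object with a PostgresClusterStatus field.
type PostgresStatus struct {
	PostgresClusterStatus string `json:"PostgresClusterStatus"`
}

// UnmarshalJSON accepts both the old plain-string form and the new struct form.
func (s *PostgresStatus) UnmarshalJSON(data []byte) error {
	var plain string
	if err := json.Unmarshal(data, &plain); err == nil {
		s.PostgresClusterStatus = plain
		return nil
	}
	type alias PostgresStatus // plain alias so decoding does not recurse into this method
	var a alias
	if err := json.Unmarshal(data, &a); err != nil {
		return err
	}
	*s = PostgresStatus(a)
	return nil
}

// Simple helpers in the spirit of status.Running() / status.Creating().
func (s PostgresStatus) Running() bool  { return s.PostgresClusterStatus == "Running" }
func (s PostgresStatus) Creating() bool { return s.PostgresClusterStatus == "Creating" }

func main() {
	var oldStatus, newStatus PostgresStatus
	_ = json.Unmarshal([]byte(`"Running"`), &oldStatus)
	_ = json.Unmarshal([]byte(`{"PostgresClusterStatus":"Creating"}`), &newStatus)
	fmt.Println(oldStatus.Running(), newStatus.Creating()) // true true
}
```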
Stephane T edeb06d39c fix: update init_containers (#518)
* fix: PATH expansion in Makefile

Signed-off-by: Stephane Tang <hi@stang.sh>

* refact: pass list of containers to compareContainers()

Signed-off-by: Stephane Tang <hi@stang.sh>

* compare initContainers while comparing StatefulSet

  Fixes #517

Signed-off-by: Stephane Tang <hi@stang.sh>

* refact: compareContainers()

Signed-off-by: Stephane Tang <hi@stang.sh>
2019-03-19 17:46:12 +01:00
Felix Kunde 31e568157b reflect change in github url (#496)
The project was moved from the incubator to the Zalando main org, hence the rename.
2019-02-25 11:26:55 +01:00
Maxim Ivanov ed6acc1178 Correctly report success in .status on Update (#469) 2019-01-31 13:09:17 +01:00
Jan Mussler c70905ae8b Modifying some of the logging to be more descriptive. (#440)
* Modifying some of the logging to be more descriptive.
2019-01-08 13:07:36 +01:00
zerg-junior c0b0b9a832
[WIP] Add 'admin' option to create role (#425)
* Add 'admin' option to create role

* Fix run_locally_script
2018-12-27 10:14:33 +01:00
zerg-junior 7907f95d2f
Improve reporting about rolling updates (#391) 2018-09-24 11:57:43 +02:00
Noah Kantrowitz 0b75a89920 Fix the casing of github.com/Sirupsen/logrus to match what the project itself uses. (#380)
Dep enforces this.
2018-09-06 10:26:48 +02:00
zerg-junior 25fa45fd58 [WIP] Grant 'superuser' to the members of Postgres admin teams (#371)
Added support for a superuser team in addition to the admin team that owns the postgres cluster.
2018-08-30 10:51:37 +02:00
zerg-junior aeae0a6ef2
Use cluster's own namespace to patch the cluster manifest (#373) 2018-08-22 11:07:12 +02:00
Oleksii Kliukin e1ed4b847d
Use code-generation for CRD API and deepcopy methods (#369)
Client-go provides the https://github.com/kubernetes/code-generator package in order to provide an API for working with CRDs similar to the one available for built-in types, e.g. Pods, Statefulsets and so on.

Use this package to generate deepcopy methods (required for CRDs) instead of using an external deepcopy package; we also generate APIs used to manipulate both Postgres and OperatorConfiguration CRDs, as well as informers and listers for the Postgres CRD, instead of using generic informers and the CRD REST API. By using generated code we can get rid of some custom and obscure CRD-related code and use a better API.

All generated code resides in /pkg/generated, with the exception of zz_deepcopy.go in apis/acid.zalan.do/v1.

Rename the postgres-operator-configuration CRD to OperatorConfiguration, since the former broke the naming convention in the code-generator.

Moved Postgresql, PostgresqlList, OperatorConfiguration and OperatorConfigurationList and other types used by them into

Change the type of the Error field in the Postgresql CRD to a string, so that client-go can generate a deepcopy for it.

Use generated code to set the status of CRD objects as well. Right now this is done with a patch; however, Kubernetes 1.11 introduces the /status subresource, allowing us to set the status with
the special updateStatus call in the future. For now, we keep the code compatible with earlier versions of Kubernetes.

Rename postgresql.go to database.go and status.go to logs_and_api.go to reflect the purpose of each of those files.

Update client-go dependencies.

Minor reformatting and renaming.
2018-08-15 17:22:25 +02:00
Oleksii Kliukin e933908084
Configure pg_hba in the local postgresql configuration of Patroni. (#361)
Previously, the operator put pg_hba into the bootstrap/pg_hba key of
Patroni. That had two adverse effects:
 - pg_hba.conf was shadowed by the Spilo default section in the local
   postgresql configuration
 - when updating pg_hba in the cluster manifest, the updated lines were
   not propagated to DCS, since the key was defined in the bootstrap
   section of Patroni.

Include some minor refactoring, moving methods to unexported when
possible and commenting out usage of md5, so that gosec won't complain.

Per https://github.com/zalando-incubator/postgres-operator/issues/330

Review by @zerg-junior
2018-08-08 11:01:26 +02:00
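A hedged sketch of where pg_hba ends up after this change: in the local postgresql section of the Patroni configuration rather than under bootstrap. The exact configuration layout is simplified:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildPatroniConfig sketches the change: pg_hba lines go into the local
// postgresql section of the Patroni configuration instead of bootstrap.pg_hba,
// so they are neither shadowed by defaults nor ignored on update.
func buildPatroniConfig(pgHBA []string) map[string]interface{} {
	return map[string]interface{}{
		"postgresql": map[string]interface{}{
			"pg_hba": pgHBA,
		},
		// the bootstrap section intentionally carries no pg_hba anymore
		"bootstrap": map[string]interface{}{},
	}
}

func main() {
	cfg := buildPatroniConfig([]string{
		"hostssl all all 0.0.0.0/0 md5",
		"host    all all 0.0.0.0/0 md5",
	})
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```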
Oleksii Kliukin b06186eb41
Linter-induced code refactoring, run round 2. (#360)
Run more linters in gometalinter, i.e. deadcode, megacheck,
nakedret, dup.

More consistent code formatting: remove two dead functions, eliminate
a bunch of naked returns, and refactor a few functions to avoid code
duplication.
2018-08-06 12:09:19 +02:00
Oleksii Kliukin 59f0c5551e
Allow configuring pod priority globally and per cluster. (#353)
* Allow configuring pod priority globally and per cluster.

Allow specifying a pod priority class for all pods managed by the operator,
as well as for those belonging to individual clusters.

Controlled by the pod_priority_class_name operator configuration
parameter and the podPriorityClassName manifest option.

See https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
for the explanation on how to define priority classes since Kubernetes 1.8.

Some import order changes are due to go fmt.
Remove the deprecated OrphanDependents field.

Code review by @zerg-junior
2018-08-03 14:03:37 +02:00
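A tiny sketch of the expected precedence between the per-cluster podPriorityClassName and the global pod_priority_class_name; the precedence order shown is an assumption based on the description above:

```go
package main

import "fmt"

// effectivePriorityClass picks the per-cluster podPriorityClassName if set,
// otherwise the global pod_priority_class_name from the operator configuration.
func effectivePriorityClass(clusterOption, operatorConfig string) string {
	if clusterOption != "" {
		return clusterOption
	}
	return operatorConfig
}

func main() {
	fmt.Println(effectivePriorityClass("", "standard-priority"))            // standard-priority
	fmt.Println(effectivePriorityClass("critical-db", "standard-priority")) // critical-db
}
```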
Oleksii Kliukin ac7b132314
Refactoring inspired by gometalinter. (#357)
Among other things, fix a few issues with deepcopy implementation.
2018-08-03 11:09:45 +02:00
Oleksii Kliukin d2d3f21dc2 Client go upgrade v6 (#352)
There are shortcuts in this code, i.e. we created the deepcopy function
using the deepcopy package instead of the generated code; that will
be addressed once we migrate to client-go v8. Also, some objects,
particularly statefulsets, are still taken from v1beta; this will also
be addressed in further commits once the changes have stabilized.
2018-08-01 11:08:01 +02:00
Oleksii Kliukin 0181a1b5b1
Introduce a repair scan to fix failing clusters (#304)
A repair is a sync scan that acts only on those clusters that indicate
that the last add, update or sync operation on them has failed. It is
supposed to kick in more frequently than the sync scan. The sync
scan still remains useful to fix the consequences of external
actions (e.g. someone deleting a postgres-related service by mistake)
that happen unbeknownst to the operator.

The repair scan is controlled by the new repair_period parameter in the
operator configuration. It has to be at least two times more frequent than
the sync scan to have any effect (a normal sync scan will update both the
last synced and last repaired attributes of the controller, since a repair
is just a sync underneath).

A repair scan could be queued for a cluster that is already being synced
if the sync period exceeds the interval between repairs. In that case a
repair event will be discarded once the corresponding worker finds out
that the cluster is not failing anymore.

Review by @zerg-junior
2018-07-24 11:21:45 +02:00
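A minimal sketch of what distinguishes a repair from a sync: the repair scan only selects clusters whose last operation failed. Types and names are illustrative:

```go
package main

import "fmt"

// clusterState is a stand-in for the controller's per-cluster bookkeeping.
type clusterState struct {
	Name         string
	LastOpFailed bool // last add, update or sync ended in error
}

// clustersToRepair filters the cluster list down to those a repair scan acts on;
// a regular sync scan would act on all of them.
func clustersToRepair(all []clusterState) []string {
	var failing []string
	for _, c := range all {
		if c.LastOpFailed {
			failing = append(failing, c.Name)
		}
	}
	return failing
}

func main() {
	clusters := []clusterState{{"acid-a", false}, {"acid-b", true}}
	fmt.Println(clustersToRepair(clusters)) // [acid-b]
}
```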
zerg-junior 417f13c0bd
Submit RBAC credentials during initial Event processing (#344)
* During initial Event processing submit the service account for pods and bind it to a cluster role that allows Patroni to successfully start. The cluster role is assumed to be created by the k8s cluster administrator.
2018-07-19 16:40:40 +02:00
Oleksii Kliukin 3a9378d3b8
Allow configuring the operator via the YAML manifest. (#326)
* Up until now, the operator read its own configuration from the
configmap. That has a number of limitations, e.g. when the
configuration value is not a scalar, but a map or a list. We use
custom code based on github.com/kelseyhightower/envconfig to decode
non-scalar values out of plain-text keys, but that breaks when the data
inside the keys contains both YAML-special elements (i.e. commas) and
complex quotes; one good example of that is search_path inside
`team_api_role_configuration`. In addition, reliance on the configmap
forced a flat structure on the configuration, making it hard to write
and to read (see
https://github.com/zalando-incubator/postgres-operator/pull/308#issuecomment-395131778).

The changes allow supplying the operator configuration in a proper YAML
file. That required registering a custom CRD to support the operator
configuration and providing an example at
manifests/postgresql-operator-default-configuration.yaml. At the moment,
both the old configmap and the new CRD configuration are supported, so
there are no compatibility issues; however, in the future I'd like to
deprecate the configmap-based configuration altogether. Contrary to the
configmap-based configuration, the CRD one doesn't embed defaults into
the operator code; however, one can use
manifests/postgresql-operator-default-configuration.yaml as a starting
point in order to build a custom configuration.

Since the `ReadyWaitInterval` and `ReadyWaitTimeout` parameters
used to create the CRD were previously taken from the operator configuration,
which is not possible if the configuration itself is stored in a CRD object,
I've added the ability to specify them as the environment variables
`CRD_READY_WAIT_INTERVAL` and `CRD_READY_WAIT_TIMEOUT` respectively.

Per review by @zerg-junior  and  @Jan-M.
2018-07-16 16:20:46 +02:00
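A sketch of reading the new environment variables with a fallback; the default values and the duration format below are placeholders, not the operator's actual behaviour:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// durationFromEnv reads a duration from the environment, falling back to a
// default when the variable is unset or unparsable.
func durationFromEnv(key string, fallback time.Duration) time.Duration {
	if v, ok := os.LookupEnv(key); ok {
		if d, err := time.ParseDuration(v); err == nil {
			return d
		}
	}
	return fallback
}

func main() {
	interval := durationFromEnv("CRD_READY_WAIT_INTERVAL", 5*time.Second)
	timeout := durationFromEnv("CRD_READY_WAIT_TIMEOUT", 30*time.Second)
	fmt.Println(interval, timeout)
}
```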
Oleksii Kliukin e90a01050c
Switchover must wait for the inner goroutine before it returns. (#343)
* Switchover must wait for the inner goroutine before it returns.

Otherwise, two corner cases may happen:

 - waitForPodLabel writes to the podLabelErr channel that has already
   been closed by the outer routine

 - the outer routine exits and the caller subscribes to the pod
   the inner goroutine has already subscribed to, resulting in a panic.

 The previous commit fe47f9ebea
 that touched that code added the cancellation channel, but didn't bother
 to actually wait for the goroutine to be cancelled.

 Per report and review from @valer-cara.
 Original issue: https://github.com/zalando-incubator/postgres-operator/issues/342
2018-07-16 11:50:35 +02:00
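A condensed sketch of the fix, assuming a done channel is used to wait for the inner goroutine; waitForPodLabel and podLabelErr are taken from the description above, the rest is illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// waitForPodLabel stands in for the real label watch; the name comes from the
// commit message above, the body here is just a placeholder.
func waitForPodLabel() error {
	time.Sleep(10 * time.Millisecond)
	return nil
}

// switchover shows the fix in miniature: the outer function waits for the
// inner goroutine to finish before returning, so the goroutine can never
// write to a channel that has gone away or linger into a retry.
func switchover() error {
	podLabelErr := make(chan error, 1)
	stopCh := make(chan struct{})
	done := make(chan struct{})

	go func() {
		defer close(done)
		select {
		case podLabelErr <- waitForPodLabel(): // report the result
		case <-stopCh: // or give up when cancelled
		}
	}()

	err := <-podLabelErr
	close(stopCh) // tell the goroutine to stop if it is still running
	<-done        // and actually wait for it before returning
	return err
}

func main() {
	fmt.Println(switchover())
}
```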
Oleksii Kliukin 48a5744314
Use Patroni API to set bootstrap-only options. (#299)
Call Patroni API /config in order to set special options that are
ignored when set in the configuration file, such as max_connections.
Per https://github.com/zalando-incubator/postgres-operator/issues/297

* Some minor refactoring:

Rename Cluster ManualFailover to Switchover
Rename Patroni Failover to Switchover
Add more details to error messages and comments introduced in this PR.

Review by @zerg-junior
2018-05-29 12:35:25 +02:00
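A hedged example of calling the Patroni /config endpoint from Go, assuming Patroni's default API port 8008 and its documented dynamic-configuration payload layout; this is a sketch, not the operator's implementation:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// patchPatroniConfig pushes bootstrap-only options (e.g. max_connections) to a
// running Patroni member via its /config endpoint instead of the config file.
func patchPatroniConfig(podIP string, parameters map[string]interface{}) error {
	body, err := json.Marshal(map[string]interface{}{
		"postgresql": map[string]interface{}{"parameters": parameters},
	})
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPatch,
		fmt.Sprintf("http://%s:8008/config", podIP), bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s from Patroni", resp.Status)
	}
	return nil
}

func main() {
	// The IP is a placeholder; in the operator this would be a real pod address.
	_ = patchPatroniConfig("10.0.0.1", map[string]interface{}{"max_connections": 100})
}
```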
Oleksii Kliukin 27c7245fed Avoid terminating delete on errors.
When an error happens upon deletion of a Kubernetes object
belonging to the cluster being removed, it makes no sense to abort the
deletion: the manifest will be removed anyway, so all the objects
after the one we aborted at would stay around forever.
2018-05-18 18:10:37 +02:00
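A small sketch of the non-aborting delete described above: each deletion is attempted and failures are collected instead of cutting the loop short. The helper is illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// deleteClusterObjects keeps going when an individual deletion fails and
// reports all failures at the end, instead of aborting and leaving the
// remaining objects behind forever.
func deleteClusterObjects(deleters map[string]func() error) error {
	var failed []string
	for name, del := range deleters {
		if err := del(); err != nil {
			failed = append(failed, fmt.Sprintf("%s: %v", name, err))
		}
	}
	if len(failed) > 0 {
		return fmt.Errorf("could not delete: %s", strings.Join(failed, "; "))
	}
	return nil
}

func main() {
	err := deleteClusterObjects(map[string]func() error{
		"endpoint":    func() error { return errors.New("not found") },
		"statefulset": func() error { return nil }, // still attempted despite the failure above
	})
	fmt.Println(err)
}
```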
Oleksii Kliukin 0c616a802f Merge branch 'master' into rolling_updates_with_statefulset_annotations
# Conflicts:
#	pkg/cluster/k8sres.go
2018-05-15 15:33:34 +02:00
Oleksii Kliukin 987b43456b
Deprecate old LB options, fix endpoint sync. (#287)
* Deprecate old LB options, fix endpoint sync.

- deprecate useLoadBalancer, replicaLoadBalancer from the manifest
  and enable_load_balancer from the operator configuration. The old
  operator configuration options become no-ops with this commit. For
  the old manifest options, `useLoadBalancer` and `replicaLoadBalancer`
  are still consulted, but only in the absence of the new ones
  (enableMasterLoadBalancer and enableReplicaLoadBalancer).

- Make sure the endpoint being created during the sync receives the proper
  addresses subset. This is more critical for the replicas, as for the
  masters Patroni will normally re-create the endpoint before the
  operator does.

- Avoid creating the replica endpoint, since it will be created automatically
  by the corresponding service.
- Update the README and unit tests.

Code review by @mgomezch and @zerg-junior
2018-05-15 15:19:18 +02:00
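A sketch of the fallback order implied above for the master side (the replica side would mirror it with enableReplicaLoadBalancer and replicaLoadBalancer); the function is illustrative, not the operator's code:

```go
package main

import "fmt"

// masterLoadBalancerEnabled shows the fallback order: the new
// enableMasterLoadBalancer manifest option wins; the deprecated
// useLoadBalancer flag is consulted only in its absence; otherwise
// the operator default applies.
func masterLoadBalancerEnabled(enableMasterLB, deprecatedUseLB *bool, operatorDefault bool) bool {
	if enableMasterLB != nil {
		return *enableMasterLB
	}
	if deprecatedUseLB != nil {
		return *deprecatedUseLB
	}
	return operatorDefault
}

func main() {
	oldFlag := true
	fmt.Println(masterLoadBalancerEnabled(nil, &oldFlag, false)) // true, via the deprecated option
}
```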
Oleksii Kliukin 332dab5237 Merge branch 'rolling_updates_with_statefulset_annotations' of github.com:zalando-incubator/postgres-operator into rolling_updates_with_statefulset_annotations 2018-05-08 14:51:10 +02:00
Oleksii Kliukin ce0d4af91c Initial implementation for the statefulset annotations indicating rolling updates. 2018-05-07 08:07:37 +02:00
Oleksii Kliukin 43a1db2128 Merge branch 'master' into pending_rolling_updates 2018-05-03 11:27:16 +02:00
Oleksii Kliukin fe47f9ebea
Improve the pod moving behavior during the Kubernetes cluster upgrade. (#281)
* Improve the pod moving behavior during the Kubernetes cluster upgrade.

Fix an issue of not waiting for at least one replica to become ready
(if the Statefulset indicates there are replicas) when moving the master
pod off the decommissioned node. Resolves the first part of #279.

Small fixes to error messages.

* Eliminate a race condition during the switchover.

When the operator initiates a failover (switchover) that fails and
then retries it a second time, it may happen that the previous
waitForPodChannel is still active. As a result, the operator subscribes
to the former master pod twice, causing a panic.

The problem was that the original code didn't bother to cancel the
waitForPodLabel call for the new master pod in the case when the failover
fails. This commit fixes it by adding a stop channel to that function.

Code review by @zerg-junior
2018-05-03 10:20:24 +02:00
Sergey Dudoladov 1b718fd4c2 Minor improvements in reporting service account creation 2018-04-26 13:47:25 +02:00
Sergey Dudoladov d99b553ec1 Convert default account definition into JSON 2018-04-25 12:35:16 +02:00
Sergey Dudoladov 485ec4b8ea Move service account to Controller 2018-04-24 15:13:08 +02:00
Sergey Dudoladov 5daf0a4172 Fix error reporting during pod service account creation 2018-04-20 14:20:38 +02:00
Sergey Dudoladov bd51d2922b Turn ServiceAccount into a struct value to avoid a race condition during account creation 2018-04-20 13:05:05 +02:00