* bump to v1.6.0
* update logical-backup image
* Using smaller image for e2e test.
* fix env var name in docs
* add postgresql-client-13 to logical backup image
Co-authored-by: Jan Mussler <janm81@gmail.com>
* Initial commit for new 1.6 release with Postgres 13 support.
* Updating maintainers, Go version, Codeowners.
* Use lazy upgrade image that contains pg13.
* fix typo for ownerReference
* fix clusterrole in helm chart
* reflect GCP logical backup in validation
* improve PostgresTeam docs
* change defaults for enable_pgversion_env_var and storage_resize_mode
* explain manual part of in-place upgrade
* remove gsoc docs
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* Fix clone from gcs
* pass google credentials env var if using GS bucket
* remove requirement for timezone as GCS returns timestamp in local time to the region it is in
* Revert "remove requirement for timezone as GCS returns timestamp in local time to the region it is in"
This reverts commit ac4eb350d9.
* update GCS documentation
* remove sentence about logical backups
* reword pod environment configmap section
* fix documentation
* PostgresTeamCRD for advanced team management
* rework internal structure to be closer to CRD
* superusers instead of admin
* add more util functions and unit tests
* fix initHumanUsers
* check for superusers when creating normal teams
* polishing and fixes
* adding the essential missing pieces
* add documentation and update rbac
* reflect some feedback
* reflect more feedback
* fix debug logs and raise QueueResyncPeriodTPR
* add two more flags to disable CRD and its superuser support
* fix chart
* update go modules
* move to client 1.19.3 and update codegen
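A minimal sketch of the PostgresTeam feature introduced by the commits above: the two new operator flags plus a sample resource. Flag and field names (`enable_postgres_team_crd`, `enable_postgres_team_crd_superusers`, `additionalSuperuserTeams`, `additionalMembers`) follow the PostgresTeam docs; the team and member names are made up:

```yaml
# operator ConfigMap excerpt: opt in to the new CRD and its superuser support
data:
  enable_postgres_team_crd: "true"
  enable_postgres_team_crd_superusers: "true"
---
# PostgresTeam resource extending membership of the hypothetical "acid" team
apiVersion: acid.zalan.do/v1
kind: PostgresTeam
metadata:
  name: custom-team-membership
spec:
  additionalSuperuserTeams:   # teams whose members become superusers
    acid:
      - postgres_superusers
  additionalMembers:          # extra members added to an existing team
    acid:
      - elephant
```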
* Extend operator configuration to allow for a pod_environment_secret just like pod_environment_configmap
* Add all keys from PodEnvironmentSecrets as ENV vars (using SecretKeyRef to protect the value)
* Apply envVars from pod_environment_configmap and pod_environment_secret before applying the global settings from the operator config, so that the user can override them (via configmap / secret)
* Add ability to use a Secret for custom pod envVars (via pod_environment_secret) to admin documentation
* Add pod_environment_secret to Helm chart values.yaml
* Add unit tests for PodEnvironmentConfigMap and PodEnvironmentSecret - highly inspired by @kupson and his very similar PR #481
* Added new parameter pod_environment_secret to operatorconfig CRD and configmap examples
* Add pod_environment_secret to the operatorconfiguration CRD
Co-authored-by: Christian Rohmann <christian.rohmann@inovex.de>
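A sketch of the secret-backed pod environment described above: the operator config names a Secret, and every key in it is injected into the Postgres pods via SecretKeyRef so the plain value never appears in the pod spec. The `namespace/name` configmap reference style and all concrete names below are illustrative:

```yaml
# operator ConfigMap excerpt
data:
  pod_environment_configmap: default/my-custom-config
  pod_environment_secret: my-custom-secret
---
apiVersion: v1
kind: Secret
metadata:
  name: my-custom-secret
type: Opaque
stringData:
  MY_API_TOKEN: supersecret  # becomes env var MY_API_TOKEN on every Postgres pod
```

Per the commit messages above, these vars are applied before the operator's global settings, which is what lets users override the globals via the configmap or secret.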
* Support for WAL_GS_BUCKET and GOOGLE_APPLICATION_CREDENTIALS environment variables
* Fixed merge issue but also removed all changes to support macOS.
* Updated test to new format
* Missed macOS-specific changes
* Added documentation and addressed comments
* Update docs/administrator.md
* Update docs/administrator.md
* Update e2e/run.sh
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
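For the GCS support above, the relevant knobs in the operator configuration look roughly like this (parameter names from the aws_or_gcp section of the operator docs; the bucket name, secret name and mount path are made up):

```yaml
# operator ConfigMap excerpt
data:
  wal_gs_bucket: my-wal-bucket                       # exported as WAL_GS_BUCKET
  gcp_credentials: /var/secrets/google/key.json      # exported as GOOGLE_APPLICATION_CREDENTIALS
  additional_secret_mount: google-creds              # Secret holding the service-account key
  additional_secret_mount_path: /var/secrets/google  # where that Secret is mounted in the pod
```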
* initial implementation of the lazy Spilo upgrade (see the example after this commit group)
* describe forcing the rolling upgrade
* make parameter name more descriptive
* add missing pieces
* address review
* address review
* fix bug in e2e tests
* fix cluster name label in e2e test
* raise test timeout
* load spilo test image
* use available spilo image
* delete replica pod for lazy update test
* fix e2e
* fix e2e with a vengeance
* let's wait for another 30m
* print pod name in error msg
* print pod name in error msg 2
* raise timeout, comment other tests
* subsequent updates of config
* add comma
* fix e2e test
* run unit tests before e2e
* remove conflicting dependency
* Revert "remove conflicting dependency"
This reverts commit 65fc09054b.
* improve cdp build
* don't run unit tests before e2e tests
* Revert "improve cdp build"
This reverts commit e2a8fa12aa.
Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
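The commit group above is the lazy Spilo upgrade: when enabled, the operator only swaps the image in the StatefulSet's pod template and lets pods pick it up on their next natural restart, instead of triggering an immediate rolling upgrade (which can still be forced, per the "describe forcing the rolling upgrade" commit). Flag name as in the operator docs:

```yaml
# operator ConfigMap excerpt
data:
  enable_lazy_spilo_upgrade: "true"
```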
* add tab for monthly costs per cluster
* sync run_local and update version number
* lowering resources
* some Makefile polishing and updated admin docs on UI
* extend admin docs on UI
* add api-service manifest for operator
* set min limits in UI to default min limits of operator
* reflect new UI helm charts in docs
* make cluster name label configurable
* define postgres-pod clusterrole and align rbac in chart
* align UI chart rbac with operator and update doc
* operator RBAC needs podsecuritypolicy to grant it to postgres-pod
The [operator parameters][1] already support the
`custom_service_annotations` config. With this parameter it is possible to
define custom annotations that will be used on the services created by the
operator. Like all the other [operator parameters][1],
`custom_service_annotations` is defined on the operator level and does not
allow customization on the cluster level. A cluster may require different
service annotations, for example to set up different cloud load balancer
timeouts, to use different ingress annotations, and/or to enable more
customizable environments.
This commit introduces a new cluster-level parameter, called
`serviceAnnotations`, responsible for defining custom annotations just for
the services created by the operator for the specified cluster. It allows
mixing `custom_service_annotations` and `serviceAnnotations`, where the
latter takes priority. In order to allow custom service annotations to be
used on services without LoadBalancers (for example, service mesh
annotations), both `custom_service_annotations` and `serviceAnnotations`
are applied independently of the load-balancing configuration. For
backward compatibility, `custom_service_annotations` is still documented
under [Load balancer related options][2]. The two default annotations used
for LoadBalancer services, `external-dns.alpha.kubernetes.io/hostname` and
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`, are
still defined by the operator.
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` can
be overridden by `custom_service_annotations` or `serviceAnnotations`,
allowing a more customizable environment.
`external-dns.alpha.kubernetes.io/hostname` cannot be overridden, since
there is no differentiation between custom service annotations for
replicas and masters.
It also updates the documentation and adds the necessary unit and e2e
tests for the feature described above.
[1]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md
[2]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md#load-balancer-related-options
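A sketch of the merge behavior described above, with the operator-level default and a per-cluster override (annotation values are illustrative):

```yaml
# operator ConfigMap excerpt: applies to services of all clusters
data:
  custom_service_annotations: "service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:1800"
---
# cluster manifest excerpt: merged on top, winning on conflicting keys
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  serviceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    config.linkerd.io/opaque-ports: "5432"  # applied even without a LoadBalancer
```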
* add CRD manifests with validation
* update documentation
* patroni slots is not an array but a nested hash map
* make deps call tools
* cover validation in docs and export it in crds.go
* add toggle to disable creation of CRD validation and document it
* use templated service account also for CRD-configured helm deployment
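The toggle from the commit above appears in the operator docs as `enable_crd_validation`; switched off, the operator registers its CRDs without the OpenAPI v3 validation section. A minimal sketch against the CRD-based configuration:

```yaml
apiVersion: acid.zalan.do/v1
kind: OperatorConfiguration
metadata:
  name: postgresql-operator-default-configuration
configuration:
  enable_crd_validation: false  # default is true
```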
* Initial commit for our basic Postgres Operator UI:
* Create and modify Postgres manifests
* Watch Operator Logs in the UI
* Observe cluster creation progress
* S3 Backup browser for clone and restore
Many thanks to Manuel Gomez and Jan Mussler for the initial UI work a long time ago!
* database.go: substitute hardcoded .svc.cluster.local DNS suffix with a config parameter
Use the pod's configured DNS search path for clusters where .svc.cluster.local is not correct.
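The parameter in question is presumably the option documented as `cluster_domain` (an assumption; check the operator docs for the exact name):

```yaml
# operator ConfigMap excerpt
data:
  cluster_domain: cluster.local  # default; set to your cluster's DNS domain if it differs
```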
A repair is a sync scan that acts only on those clusters that indicate
that the last add, update or sync operation on them has failed. It is
supposed to kick in more frequently than the sync scan. The sync scan
still remains useful to fix the consequences of external actions (e.g.
someone deletes a postgres-related service by mistake) that happen
unbeknownst to the operator.
The repair scan is controlled by the new repair_period parameter in the
operator configuration. It has to be at least 2 times more frequent than
a sync scan to have any effect (a normal sync scan will update both last
synced and last repaired attributes of the controller, since repair is
just a sync underneath).
A repair scan could be queued for a cluster that is already being synced
if the sync period exceeds the interval between repairs. In that case a
repair event will be discarded once the corresponding worker finds out
that the cluster is not failing anymore.
Review by @zerg-junior
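The two intervals described above map onto these operator options (the defaults shown match the operator docs; repair must tick at least twice as often as sync to have any effect):

```yaml
# operator ConfigMap excerpt
data:
  resync_period: 30m  # full sync scan of every cluster
  repair_period: 5m   # repair scan; only touches clusters whose last operation failed
```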
* During initial Event processing, submit the service account for pods and bind it to a cluster role that allows Patroni to start successfully. The cluster role is assumed to be created by the k8s cluster administrator.
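A sketch of the related, documented knobs (`pod_service_account_name` and `pod_service_account_definition` exist in the operator configuration; the value below is illustrative):

```yaml
# operator ConfigMap excerpt
data:
  pod_service_account_name: postgres-pod  # account submitted for each cluster's pods
# the ClusterRole granting Patroni the needed API access, plus its binding,
# are expected from the k8s cluster administrator (see the commit above)
```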