* PreparedDatabases with default role setup (manifest sketch after this entry)
* merge changes from master
* include preparedDatabases spec check when syncing databases
* create a default preparedDB if not specified
* add more default privileges for schemas
* use empty brackets block for undefined objects
* cover more default privilege scenarios and always define admin role
* add DefaultUsers flag
* support extensions and defaultUsers for preparedDatabases
* remove exact version in deployment manifest
* enable CRD validation for new field
* update generated code
* reflect code review
* fix typo in SQL command
* add documentation for preparedDatabases feature + minor changes
* some datname should stay
* add unit tests
* reflect some feedback
* init users for preparedDatabases also on update
* only change DB default privileges on creation
* add one more section in user docs
* one more sentence
* initial implementation
* describe forcing the rolling upgrade
* make parameter name more descriptive
* add missing pieces
* address review
* address review
* fix bug in e2e tests
* fix cluster name label in e2e test
* raise test timeout
* load spilo test image
* use available spilo image
* delete replica pod for lazy update test
* fix e2e
* fix e2e with a vengeance
* let's wait for another 30m
* print pod name in error msg
* print pod name in error msg 2
* raise timeout, comment other tests
* subsequent updates of config
* add comma
* fix e2e test
* run unit tests before e2e
* remove conflicting dependency
* Revert "remove conflicting dependency"
This reverts commit 65fc09054b.
* improve cdp build
* don't run unit before e2e tests
* Revert "improve cdp build"
This reverts commit e2a8fa12aa.
Co-authored-by: Sergey Dudoladov <sergey.dudoladov@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
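For reference, a minimal sketch of what the new `preparedDatabases` section in a `postgresql` manifest can look like; the database, schema, and extension names are made up for illustration:

```yaml
spec:
  preparedDatabases:
    foo:                      # database "foo", owned by a generated foo_owner role
      defaultUsers: true      # also create LOGIN roles next to the NOLOGIN default roles
      extensions:
        pg_partman: public    # install the extension into the given schema
      schemas:
        data: {}              # empty brackets block: schema with the default role setup
        history:
          defaultRoles: false # owner roles only, skip reader/writer default roles
```

The `{}` is the "empty brackets block for undefined objects" mentioned above: it keeps the schema entry valid YAML while accepting all defaults.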
* Added support for specifying a nodePort value for the postgres-operator-ui service when the type is NodePort
Co-authored-by: Marcus Portmann <marcusp@discovery.co.za>
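A sketch of the corresponding UI chart values, assuming the common `service.type`/`service.nodePort` layout (key names not verified against the chart):

```yaml
# charts/postgres-operator-ui values (layout assumed)
service:
  type: NodePort
  nodePort: 30081   # illustrative port; only applied when type is NodePort
```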
* Add EventsGetter to KubeClient to enable to sending K8S events
* Add eventRecorder to the controller, initialize it, and hand it down to the cluster via its constructor so that it can emit events this way
* Add first set of events which then go to the postgresql custom resource the user interacts with to provide some feedback
* Add right to "create" events to operator cluster role (rule sketch after this entry)
* Adapt cluster tests to new function signature with eventRecorder (via NewFakeRecorder)
* Get a proper reference before sending events to a resource
Co-authored-by: Christian Rohmann <christian.rohmann@inovex.de>
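The "create" right on events mentioned above corresponds to a ClusterRole rule along these lines (a sketch, not the verbatim manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: postgres-operator
rules:
  # let the operator's eventRecorder emit K8s events on the postgresql resource
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
```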
* Allow additional Volumes to be mounted (manifest sketch after this entry)
* added TargetContainers option to determine whether an additional volume needs to be mounted or not
* fixed dependencies
* updated manifest additional volume example
* More validation
Check that there are no volume mount path clashes or "all" vs ["a", "b"]
mixtures. Also change the default behaviour to mount into the "postgres"
container.
* More documentation / example about additional volumes
* Revert go.sum and go.mod from origin/master
* Declare additionalVolume specs in CRDs
* fixed k8sres after rebase
* resolve conflict
Co-authored-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Co-authored-by: Thierry <thierry@malt.com>
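A sketch of the resulting `additionalVolumes` section in a cluster manifest; volume names, paths, and the PVC name are illustrative:

```yaml
spec:
  additionalVolumes:
    - name: empty
      mountPath: /opt/empty
      targetContainers:
        - all               # "all" must not be mixed with explicit container names
      volumeSource:
        emptyDir: {}
    - name: data
      mountPath: /home/postgres/my_data
      targetContainers:
        - postgres          # default behaviour: mount into the "postgres" container
      volumeSource:
        persistentVolumeClaim:
          claimName: my-pvc
```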
* missing quotes in pooler configmap in values.yaml
* missing quotes in pooler configmap in values-crd.yaml
* docs clarifications
* helm3 --skip-crds
* Update docs/user.md
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* details moved to docs
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* Add patroni parameters for `synchronous_mode` (manifest sketch after this entry)
* Update complete-postgres-manifest.yaml, removed quotation marks
* Update k8sres_test.go, adjust result for `Patroni configured`
* Update k8sres_test.go, adjust result for `Patroni configured`
* Update complete-postgres-manifest.yaml, set synchronous mode to false in this example
* Update pkg/cluster/k8sres.go
Does the same but is shorter. So we fix it if you like.
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update docs/reference/cluster_manifest.md
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Add patroni's `synchronous_mode_strict`
* Extend `TestGenerateSpiloConfig` with `SynchronousModeStrict`
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
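In the cluster manifest the Patroni flags from this entry look roughly like this:

```yaml
spec:
  patroni:
    synchronous_mode: true         # enable Patroni synchronous replication
    synchronous_mode_strict: true  # block writes when no synchronous standby is available
```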
* kubernetes_use_configmap (config sketch after this entry)
* Update manifests/postgresql-operator-default-configuration.yaml
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update manifests/configmap.yaml
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update charts/postgres-operator/values.yaml
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* go.fmt
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
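A sketch of the new toggle in the operator ConfigMap, assuming the key is spelled `kubernetes_use_configmaps`; when enabled, Patroni stores its state in ConfigMaps instead of Endpoints:

```yaml
# manifests/configmap.yaml (ConfigMap values are strings; key spelling assumed)
kubernetes_use_configmaps: "true"
```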
Connection pooler support
Add support for a connection pooler. The idea is to make it generic enough to
be able to switch between different implementations (e.g. pgbouncer or
odyssey). The operator needs to create a deployment with the pooler and a
service to access it.
For the connection pooler to work properly, the database needs to be prepared
by the operator: a separate user has to be created with access to an installed
lookup function (to fetch credentials for other users).
This setup is supposed to be used only by robot/application users. A connection
pooler implementation is usually more CPU bound, so it makes sense to create
several pods for the pooler with more emphasis on CPU resources. At the moment
there are no special affinity or toleration rules assigned to bring those pods
closer to the database. For availability purposes the minimal number of
connection pooler pods is 2; ideally they should be distributed between
different nodes/AZs, but this is not enforced by the operator itself. The
available configuration is supposed to be ergonomic and in the normal case
requires minimal changes to a manifest to enable the connection pooler. To have
more control over the configuration and functionality on the pooler side, one
can customize the corresponding docker image.
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
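Enabling the pooler in the normal case is a one-line manifest change, with an optional section for fine-tuning; a sketch using the field names from this feature's docs:

```yaml
spec:
  enableConnectionPooler: true   # minimal case: one flag, defaults for everything else
  connectionPooler:              # optional fine-tuning
    numberOfInstances: 2         # minimum kept at 2 for availability
    mode: transaction
    resources:
      requests:
        cpu: 500m                # pooler workloads are typically CPU bound
        memory: 100Mi
```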
* bump version to 1.4.0 + some polishing
* align version for UI chart
* update user docs to warn for standby replicas
* minor log message changes for RBAC resources
* add tab for monthly costs per cluster
* sync run_local and update version number
* lowering resources
* some Makefile polishing and updated admin docs on UI
* extend admin docs on UI
* add api-service manifest for operator
* set min limits in UI to default min limits of operator
* reflect new UI helm charts in docs
* make cluster name label configurable (config sketch after this entry)
* define postgres-pod clusterrole and align rbac in chart
* align UI chart rbac with operator and update doc
* operator RBAC needs podsecuritypolicy to grant it to postgres-pod
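The configurable cluster name label surfaces in the operator configuration roughly as follows (a sketch; the service account default is an assumption):

```yaml
# manifests/configmap.yaml
cluster_name_label: cluster-name        # label key used to select a cluster's resources
pod_service_account_name: postgres-pod  # account bound to the new postgres-pod clusterrole
```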
The [operator parameters][1] already support the
`custom_service_annotations` config. With this parameter it is possible to
define custom annotations to be used on the services created by the
operator. Like all the other [operator parameters][1],
`custom_service_annotations` is defined on the operator level and does not
allow customization on the cluster level. A cluster may require different
service annotations, for example to set different cloud load balancer
timeouts, use different ingress annotations, and/or enable more customizable
environments.
This commit introduces a new parameter on the cluster level, called
`serviceAnnotations`, responsible for defining custom annotations just for
the services created by the operator for the specific cluster. It allows a
mix of configuration between `custom_service_annotations` and
`serviceAnnotations`, where the latter has priority. In order to allow
custom service annotations to be used on services without LoadBalancers
(for example, service mesh annotations), both `custom_service_annotations`
and `serviceAnnotations` are applied independently of the load-balancing
configuration (see the sketch after this description). For backward
compatibility, `custom_service_annotations` is still documented under
[Load balancer related options][2]. The two default annotations used for
LoadBalancer services, `external-dns.alpha.kubernetes.io/hostname` and
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`, are
still defined by the operator.
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` can
be overridden by `custom_service_annotations` or `serviceAnnotations`,
allowing a more customizable environment.
`external-dns.alpha.kubernetes.io/hostname` cannot be overridden since
there is no differentiation between custom service annotations for
replicas and masters.
It also updates the documentation and adds the necessary unit and e2e
tests for the above-described feature.
[1]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md
[2]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md#load-balancer-related-options
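A sketch of how the two annotation levels described above combine; annotation keys and values are illustrative:

```yaml
# operator ConfigMap: applied to the services of all clusters
custom_service_annotations: "service.example.com/team:infra"
---
# cluster manifest: merged on top, winning on key conflicts
spec:
  serviceAnnotations:
    service.example.com/team: acid   # overrides the operator-level value
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
```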
* add CRD manifests with validation
* update documentation
* patroni slots is not an array but a nested hash map
* make deps call tools
* cover validation in docs and export it in crds.go
* add toggle to disable creation of CRD validation and document it
* use templated service account also for CRD-configured helm deployment
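The toggle mentioned above lands in the operator configuration; a sketch:

```yaml
# manifests/configmap.yaml (ConfigMap values are strings)
enable_crd_validation: "false"  # skip registering the OpenAPI v3 validation on the CRDs
```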
* Introduce crds directory for compatibility with Helm v3
This commit introduces a crds directory for the Helm chart which holds all
custom resource definitions. The files in the crds directory are plain
YAML. The CRDs get the label `app.kubernetes.io/name: postgres-operator`
and all templating is removed.
Helm v3 ignores objects from the `templates` directory which have a
`crd-install` hook. This commit adds templates/crds.yaml, which
generates YAML for the CRDs. The hooks on these CRDs are detected by Helm
v2 as well as v3: Helm v2 executes the hook and Helm v3 ignores it
(the YAML files are not applied).
The approach is inspired by the prometheus-operator chart:
helm/charts@89b233eef6
* Added possibility to add custom annotations to LoadBalancer service.
* Added parameters for custom endpoint, access and secret key for logical backup (config sketch after this entry).
* Modified dump.sh so it knows how to handle the new features; configurable S3 SSE.
* align config map, operator config, helm chart values and templates
* follow helm chart conventions also in CRD templates
* split up values files and add comments
* avoid yaml confusion in postgres manifests
* bump spilo version and use example for logical_backup_s3_bucket
* add ConfigTarget switch to values
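The new logical backup parameters in the operator ConfigMap, sketched with placeholder values:

```yaml
# manifests/configmap.yaml
logical_backup_s3_bucket: "my-backup-bucket"
logical_backup_s3_endpoint: "https://s3.example.org"  # custom, non-AWS endpoint
logical_backup_s3_access_key_id: "<access-key-id>"    # placeholder
logical_backup_s3_secret_access_key: "<secret-key>"   # placeholder
logical_backup_s3_sse: "AES256"                       # server-side encryption mode
```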
Fixing privileges to execute `patronictl remove`.
You could/should have also just used the operator's delete cluster flow (remove the manifest). It is not really the plan to use Patroni inside a pod to remove an existing cluster.
* StatefulSet fsGroup config option to allow non-root spilo
* Allow Postgres CRD to override SpiloFSGroup of the Operator.
* Document FSGroup of a Pod cannot be changed after creation.
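Sketch of the two levels: the operator-wide default and the per-cluster override (103 is the postgres group id commonly used in spilo examples; the FSGroup of a Pod cannot be changed after creation):

```yaml
# operator ConfigMap: cluster-wide default
spilo_fsgroup: "103"
---
# postgresql manifest: per-cluster override
spec:
  spiloFSGroup: 103
```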
* database.go: substitute hardcoded .svc.cluster.local DNS suffix with a config parameter
Use the pod's configured DNS search path for clusters where .svc.cluster.local is not correct.
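Assuming the parameter is named `cluster_domain` (not verified against the merged code), the ConfigMap entry would look like:

```yaml
# manifests/configmap.yaml (key name assumed)
cluster_domain: "cluster.local"  # set to the cluster's actual DNS domain
```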
* turns PostgresStatus type into a struct with field PostgresClusterStatus
* setStatus patch target is now /status subresource
* unmarshalling PostgresStatus takes care of previous status field convention
* new simple bool functions status.Running(), status.Creating()
* Config option to allow Spilo container to run non-privileged.
Runs non-privileged by default.
Fixes #395
* add spilo_privileged to manifests/configmap.yaml
* add spilo_privileged to helm chart's values.yaml
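The flag as it appears in the ConfigMap, defaulting to non-privileged:

```yaml
# manifests/configmap.yaml (ConfigMap values are strings)
spilo_privileged: "false"  # Spilo container runs non-privileged by default
```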
Helm chart that contains all required components to install postgres-operator.
So far all versions are hardcoded to the latest release, and the documentation
mentions Helm mostly for development purposes with Minikube.