* Support for WAL_GS_BUCKET and GOOGLE_APPLICATION_CREDENTIALS environment variables
* Fixed merge issue but also removed all changes to support macOS.
* Updated test to new format
* Missed macOS-specific changes
* Added documentation and addressed comments
* Update docs/administrator.md
* Update docs/administrator.md
* Update e2e/run.sh
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
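As an illustration of the GCS-related settings above, here is a minimal sketch of how they might be wired through the operator ConfigMap; the key names (`wal_gs_bucket`, `additional_secret_mount`, `additional_secret_mount_path`) and values are assumptions based on the documented operator parameters, not taken from this changelog:

```yaml
# hypothetical excerpt from manifests/configmap.yaml (keys and values are assumptions)
data:
  wal_gs_bucket: "my-wal-bucket"                       # GCS bucket Spilo archives WAL to
  additional_secret_mount: "pgsql-gcs-creds"           # secret holding the service account key
  additional_secret_mount_path: "/var/secrets/google"  # mount path inside the Spilo pod
  # GOOGLE_APPLICATION_CREDENTIALS would then point at the mounted key file,
  # e.g. via an entry in pod_environment_configmap
```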
* Initial commit
* Corrections
- set the type of the new configuration parameter to be an array of
strings
- propagate the annotations to the statefulset at sync
* Enable regular expression matching
* Improvements
- handle rollingUpdate flag
- modularize code
- rename the config parameter
* fix merge error
* Pass annotations to connection pooler deployment
* update code-gen
* Add documentation and update manifests
* add e2e test and introduce option in configmap
* fix service annotations test
* Add unit test
* fix e2e tests
* better key lookup of annotations tests
* add debug message for annotation tests
* Fix typos
* minor fix for looping
* Handle update path and renaming
- handle the update path to update the statefulset and connection pooler deployment.
This way there is no need to wait for a sync
- rename the parameter to downscaler_annotations
- handle other review comments
* another try to fix python loops
* Avoid unnecessary update events
* Update manifests
* some final polishing
* fix cluster_test after polishing
Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
* PreparedDatabases with default role setup
* merge changes from master
* include preparedDatabases spec check when syncing databases
* create a default preparedDB if not specified
* add more default privileges for schemas
* use empty brackets block for undefined objects
* cover more default privilege scenarios and always define admin role
* add DefaultUsers flag
* support extensions and defaultUsers for preparedDatabases
* remove exact version in deployment manifest
* enable CRD validation for new field
* update generated code
* reflect code review
* fix typo in SQL command
* add documentation for preparedDatabases feature + minor changes
* some datname should stay
* add unit tests
* reflect some feedback
* init users for preparedDatabases also on update
* only change DB default privileges on creation
* add one more section in user docs
* one more sentence
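A sketch of the resulting `preparedDatabases` manifest syntax described above; the database, schema and extension names are invented for the example:

```yaml
# excerpt from a Postgres cluster manifest (illustrative names)
spec:
  preparedDatabases:
    bar:
      defaultUsers: true        # also create LOGIN roles next to the NOLOGIN default roles
      extensions:
        pg_partman: public      # extension -> target schema
      schemas:
        data: {}                # empty brackets block: use the defaults for this schema
        history:
          defaultRoles: true
          defaultUsers: false
```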
* Allow additional volumes to be mounted
* added TargetContainers option to determine whether an additional volume needs to be mounted or not
* fixed dependencies
* updated manifest additional volume example
* More validation
Check that there are no volume mount path clashes or "all" vs ["a", "b"]
mixtures. Also change the default behaviour to mount into the "postgres"
container.
* More documentation / example about additional volumes
* Revert go.sum and go.mod from origin/master
* Declare additionalVolumes specs in CRDs
* fixed k8sres after rebase
* resolve conflict
Co-authored-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Co-authored-by: Thierry <thierry@malt.com>
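A minimal sketch of the `additionalVolumes` section discussed above; names, paths and volume sources are examples only:

```yaml
# excerpt from a Postgres cluster manifest (illustrative)
spec:
  additionalVolumes:
    - name: empty
      mountPath: /opt/empty
      targetContainers:
        - all                   # "all" must not be mixed with explicit container names
      volumeSource:
        emptyDir: {}
    - name: data
      mountPath: /home/postgres/my_data
      targetContainers:
        - postgres              # default behaviour: mount into the "postgres" container
      volumeSource:
        persistentVolumeClaim:
          claimName: my-pvc
```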
* Add patroni parameters for `synchronous_mode`
* Update complete-postgres-manifest.yaml, removed quotation marks
* Update k8sres_test.go, adjust result for `Patroni configured`
* Update k8sres_test.go, adjust result for `Patroni configured`
* Update complete-postgres-manifest.yaml, set synchronous mode to false in this example
* Update pkg/cluster/k8sres.go
Does the same but is shorter. So we can fix it that way if you like.
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update docs/reference/cluster_manifest.md
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Add patroni's `synchronous_mode_strict`
* Extend `TestGenerateSpiloConfig` with `SynchronousModeStrict`
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
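A sketch of the corresponding manifest snippet, assuming the options are exposed under the `patroni` section as the commits above suggest:

```yaml
# excerpt from complete-postgres-manifest.yaml (illustrative)
spec:
  patroni:
    synchronous_mode: false          # enable Patroni synchronous replication
    synchronous_mode_strict: false   # reject writes when no synchronous replica is available
```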
* kubernetes_use_configmap
* Update manifests/postgresql-operator-default-configuration.yaml
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update manifests/configmap.yaml
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* Update charts/postgres-operator/values.yaml
Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>
* go.fmt
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
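Presumably this lets Patroni store its DCS objects in ConfigMaps instead of Endpoints; a sketch of the option in the CRD-based operator configuration, where the exact key spelling (`kubernetes_use_configmaps`) is an assumption:

```yaml
# excerpt from manifests/postgresql-operator-default-configuration.yaml (key name assumed)
configuration:
  kubernetes_use_configmaps: false   # when true, Patroni keeps its DCS state in ConfigMaps
```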
Connection pooler support
Add support for a connection pooler. The idea is to make it generic enough to
be able to switch between different implementations (e.g. pgbouncer or
odyssey). The operator needs to create a deployment with the pooler and a
service to access it.
For the connection pooler to work properly, the database needs to be prepared
by the operator: a separate user has to be created with access to an installed
lookup function (to fetch credentials for other users).
This setup is supposed to be used only by robot/application users. Usually a
connection pooler implementation is more CPU bound, so it makes sense to create
several pooler pods with more emphasis on CPU resources. At the moment there
are no special affinities or tolerations assigned to bring those pods closer to
the database. For availability purposes the minimal number of connection pooler
pods is 2; ideally they should be distributed between different nodes/AZs, but
this is not enforced by the operator itself. The available configuration is
supposed to be ergonomic and in the normal case requires minimal changes to a
manifest to enable the connection pooler. To have more control over the
configuration and functionality on the pooler side, one can customize the
corresponding Docker image.
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
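As a usage sketch, enabling the pooler from a cluster manifest might look as follows; the field names follow the operator docs and the resource values are examples:

```yaml
# excerpt from a Postgres cluster manifest (illustrative values)
spec:
  enableConnectionPooler: true    # operator creates the pooler deployment and its service
  connectionPooler:
    numberOfInstances: 2          # minimum of 2 pods for availability
    mode: "transaction"
    resources:
      requests:
        cpu: 500m                 # pooler workloads are usually CPU bound
        memory: 100Mi
      limits:
        cpu: "1"
        memory: 100Mi
```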
The [operator parameters][1] already support the
`custom_service_annotations` config. With this parameter it is possible to
define custom annotations that will be applied to the services created by the
operator. `custom_service_annotations`, like all the other
[operator parameters][1], is defined on the operator level and does not allow
customization on the cluster level. A cluster may require different service
annotations, for example to set different cloud load balancer
timeouts or different ingress annotations, and/or to enable a more customizable
environment.
This commit introduces a new parameter on the cluster level, called
`serviceAnnotations`, responsible for defining custom annotations just for
the services the operator creates for that specific cluster.
It allows mixing `custom_service_annotations` and
`serviceAnnotations`, where the latter has priority. In order to
allow custom service annotations to be used on services without
LoadBalancers (for example, service mesh annotations), both
`custom_service_annotations` and `serviceAnnotations` are applied
independently of the load-balancing configuration. For backward-compatibility
purposes, `custom_service_annotations` is still documented under
[Load balancer related options][2]. The two default annotations used with
LoadBalancer services, `external-dns.alpha.kubernetes.io/hostname` and
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`, are
still defined by the operator.
`service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` can
be overridden by `custom_service_annotations` or `serviceAnnotations`,
allowing a more customizable environment.
`external-dns.alpha.kubernetes.io/hostname` cannot be overridden, since
there is no differentiation between custom service annotations for
replicas and masters.
The commit also updates the documentation and adds the necessary unit and e2e
tests for the feature described above.
[1]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md
[2]: https://github.com/zalando/postgres-operator/blob/master/docs/reference/operator_parameters.md#load-balancer-related-options
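A short sketch of how `custom_service_annotations` and the new `serviceAnnotations` combine; the annotation values below are made up, and the per-cluster entries win on conflicts:

```yaml
# operator ConfigMap: applies to the services of every cluster
data:
  custom_service_annotations: "service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:600"
---
# Postgres cluster manifest: extends/overrides the above for this cluster only
spec:
  serviceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1800"
    my-mesh.example.com/inject: "true"
```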
* add validation for PG resources and volume size
* check resource requests also on UPDATE and SYNC + update docs
* if cluster was running don't error on sync
* Added possibility to add custom annotations to LoadBalancer service.
* Added parameters for custom endpoint, access and secret key for logical backup.
* Modified dump.sh so it knows how to handle new features. Configurable S3 SSE
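A sketch of the ConfigMap keys related to the logical backup changes above, assuming they follow the existing `logical_backup_*` naming; the values are placeholders:

```yaml
# excerpt from manifests/configmap.yaml (key names assumed, values are placeholders)
data:
  logical_backup_s3_endpoint: "https://s3.example.internal"
  logical_backup_s3_access_key_id: "EXAMPLEACCESSKEY"
  logical_backup_s3_secret_access_key: "examplesecret"
  logical_backup_s3_sse: "AES256"   # server-side encryption setting passed through to dump.sh
```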
This will set up a continuous WAL streaming cluster by adding the corresponding section to the Postgres manifest. Instead of having a full-fledged standby cluster as in Patroni, here we use only the WAL path of the source cluster and stream from there.
Since a standby cluster streams from the master and does not need to create or use databases of its own, it bypasses the creation of users and databases.
There is a separate sample manifest added to set up a standby-cluster.
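A minimal sketch of the standby section mentioned above; the bucket path is a placeholder:

```yaml
# excerpt from manifests/standby-manifest.yaml (illustrative path)
spec:
  numberOfInstances: 1
  standby:
    s3_wal_path: "s3://my-wal-bucket/spilo/acid-source-cluster/wal/"   # WAL path of the source cluster
```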
* StatefulSet fsGroup config option to allow non-root spilo
* Allow Postgres CRD to override SpiloFSGroup of the Operator.
* Document FSGroup of a Pod cannot be changed after creation.
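A sketch of both levels of the fsGroup setting; 103 is only an example group ID:

```yaml
# operator-level default in the ConfigMap
data:
  spilo_fsgroup: "103"
---
# per-cluster override in the Postgres manifest
spec:
  spiloFSGroup: 103
```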
Override clone S3 bucket path
Add the possibility to use a custom S3 bucket path for cloning a cluster
from an arbitrary bucket (e.g. from another k8s cluster). For that
a new option, `s3_wal_path`, is introduced; it should point
to a location that Spilo understands.
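A sketch of a clone section using the new option; bucket, cluster name and timestamp are placeholders:

```yaml
# excerpt from a Postgres cluster manifest (illustrative values)
spec:
  clone:
    cluster: "acid-source-cluster"
    timestamp: "2020-01-01T12:00:00+00:00"   # point in time to restore to
    s3_wal_path: "s3://another-bucket/spilo/acid-source-cluster/wal/"
```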
* Config option to allow Spilo container to run non-privileged.
Runs non-privileged by default.
Fixes #395
* add spilo_privileged to manifests/configmap.yaml
* add spilo_privileged to helm chart's values.yaml
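For reference, a sketch of the `spilo_privileged` ConfigMap entry (booleans are strings in the ConfigMap format):

```yaml
# excerpt from manifests/configmap.yaml
data:
  spilo_privileged: "false"   # run the Spilo container non-privileged (the default)
```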
Add the possibility to mount a tmpfs volume to /dev/shm to avoid issues like
[this](https://github.com/docker-library/postgres/issues/416). To achieve that,
two new options were introduced:
* `enableShmVolume` in the PostgreSQL manifest, to specify whether or not to mount
this volume per database cluster
* `enable_shm_volume` in the operator configuration, to specify whether or not to mount
it operator-wide.
The first one, `enableShmVolume`, takes precedence to allow us to be more flexible.
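The two shared-memory options side by side, as a sketch:

```yaml
# operator-level default in the ConfigMap
data:
  enable_shm_volume: "true"   # mount tmpfs on /dev/shm for all clusters
---
# per-cluster setting in the Postgres manifest (takes precedence)
spec:
  enableShmVolume: false
```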
Client-go provides the https://github.com/kubernetes/code-generator package in order to provide an API to work with CRDs similar to the one available for built-in types, i.e. Pods, Statefulsets and so on.
Use this package to generate deepcopy methods (required for CRDs), instead of using an external deepcopy package; we also generate APIs used to manipulate both Postgres and OperatorConfiguration CRDs, as well as informers and listers for the Postgres CRD, instead of using generic informers and the CRD REST API; by using generated code we can get rid of some custom and obscure CRD-related code and use a better API.
All generated code resides in /pkg/generated, with the exception of zz_deepcopy.go in apis/acid.zalan.do/v1.
Rename the postgres-operator-configuration CRD to OperatorConfiguration, since the former broke the naming convention in the code-generator.
Moved Postgresql, PostgresqlList, OperatorConfiguration and OperatorConfigurationList and other types used by them into the apis/acid.zalan.do/v1 package.
Change the type of the Error field in the Postgresql CRD to a string, so that client-go can generate a deepcopy for it.
Use generated code to set the status of CRD objects as well. Right now this is done with a patch; however, Kubernetes 1.11 introduces the /status subresource, allowing us to set the status with
the special updateStatus call in the future. For now, we keep the code compatible with earlier versions of Kubernetes.
Rename postgresql.go to database.go and status.go to logs_and_api.go to reflect the purpose of each of those files.
Update client-go dependencies.
Minor reformatting and renaming.
Previously, the operator put pg_hba into the bootstrap/pg_hba key of
Patroni. That had 2 adverse effects:
- pg_hba.conf was shadowed by the Spilo default section in the local
postgresql configuration
- when updating pg_hba in the cluster manifest, the updated lines were
not propagated to the DCS, since the key was defined in the bootstrap
section of Patroni.
Include some minor refactoring, making methods unexported where
possible and commenting out the usage of md5, so that gosec won't complain.
Per https://github.com/zalando-incubator/postgres-operator/issues/330
Review by @zerg-junior
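For context, the manifest-level pg_hba that is now propagated correctly looks roughly like this; the rules themselves are only examples:

```yaml
# excerpt from a Postgres cluster manifest (illustrative rules)
spec:
  patroni:
    pg_hba:
      - hostssl all all 0.0.0.0/0 md5
      - host    all all 0.0.0.0/0 md5
```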
Run more linters in the gometalinter, i.e. deadcode, megacheck,
nakedret, dup.
More consistent code formatting, remove two dead functions, eliminate
a bunch of naked returns, refactor a few functions to avoid code
duplication.
* Allow configuring pod priority globally and per cluster.
Allow specifying a pod priority class for all pods managed by the operator,
as well as for those belonging to individual clusters.
Controlled by the pod_priority_class_name operator configuration
parameter and the podPriorityClassName manifest option.
See https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
for the explanation on how to define priority classes since Kubernetes 1.8.
Some import order changes are due to go fmt.
Removal of OrphanDependents deprecated field.
Code review by @zerg-junior
There are shortcuts in this code, i.e. we created the deepcopy function
by using the deepcopy package instead of the generated code, that will
be addressed once migrated to client-go v8. Also, some objects,
particularly statefulsets, are still taken from v1beta, this will also
be addressed in further commits once the changes are stabilized.
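Returning to the pod priority feature above, a sketch of both knobs; `database-critical` is an invented PriorityClass name:

```yaml
# operator ConfigMap: default priority class for all managed pods
data:
  pod_priority_class_name: "database-critical"
---
# Postgres cluster manifest: per-cluster override
spec:
  podPriorityClassName: "database-critical"
```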
* Improve generation of the Scalyr container environment.
Avoid duplicating POD_NAME and POD_NAMESPACE, which are already bundled with
every sidecar.
Do not complain about the lack of SCALYR_SERVER_HOST, since it is set to
https://upload.eu.scalyr.com in the container we use.
Do not mention SCALYR_SERVER_HOST in the error messages, since it is
derived from the cluster name automatically.
The old way of specifying it with the annotation is deprecated and not
available in recent Kubernetes versions. We will keep it there anyway
until upgrading to the new go-client that is incompatible with those
versions.
Per report from @schmitch
* Define sidecars in the operator configuration.
Right now only the name and the docker image can be defined, but with
the help of the pod_environment_configmap parameter arbitrary
environment variables can be passed to the sidecars.
* Refactoring around generatePodTemplate.
Original implementation of per-cluster sidecars by @theRealWardo
Per review by @zerg-junior and @Jan-M
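A sketch of the operator-side configuration for such sidecars, assuming the `sidecar_docker_images` map and the `pod_environment_configmap` option; the exact placement of the keys in the configuration may differ:

```yaml
# excerpt from postgresql-operator-default-configuration.yaml (names/images are examples)
configuration:
  sidecar_docker_images:
    log-shipper: "registry.example.com/log-shipper:1.0"
  kubernetes:
    pod_environment_configmap: "postgres-pod-config"   # extra env vars injected into sidecars
```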
Some special patroni postgresql parameters, like max_connections,
should reside in the bootstrap.dcs.postgresql.parameters section
to come into effect.
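For example, manifest-level Postgres parameters such as max_connections are routed into that section; a simplified sketch of both sides (the generated Spilo configuration below is an approximation):

```yaml
# Postgres cluster manifest
spec:
  postgresql:
    parameters:
      max_connections: "100"
---
# simplified Patroni/Spilo configuration generated by the operator
bootstrap:
  dcs:
    postgresql:
      parameters:
        max_connections: "100"
```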
* Deprecate old LB options, fix endpoint sync.
- deprecate useLoadBalancer, replicaLoadBalancer from the manifest
and enable_load_balancer from the operator configuration. The old
operator configuration options become no-op with this commit. For
the old manifest options, `useLoadBalancer` and `replicaLoadBalancer`
are still consulted, but only in the absence of the new ones
(enableMasterLoadBalancer and enableReplicaLoadBalancer).
- Make sure the endpoint created during the sync receives the proper
addresses subset. This is more critical for the replicas, as for the
masters Patroni will normally re-create the endpoint before the
operator.
- Avoid creating the replica endpoint, since it will be created automatically
by the corresponding service.
- Update the README and unit tests.
Code review by @mgomezch and @zerg-junior
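The new per-cluster load balancer switches, as a sketch:

```yaml
# Postgres cluster manifest: replacements for useLoadBalancer / replicaLoadBalancer
spec:
  enableMasterLoadBalancer: true
  enableReplicaLoadBalancer: false
```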
* Improve the pod moving behavior during the Kubernetes cluster upgrade.
Fix an issue of not waiting for at least one replica to become ready
(if the Statefulset indicates there are replicas) when moving the master
pod off the decommissioned node. Resolves the first part of #279.
Small fixes to error messages.
* Eliminate a race condition during the switchover.
When the operator initiates the failover (switchover) that fails and
then retries it for a second time it may happen that the previous
waitForPodChannel is still active. As a result, the operator subscribes
to the former master pod two times, causing a panic.
The problem was that the original code didn't bother to cancel the
waitForPodLabel for the new master pod in the case when the failover
fails. This commit fixes it by adding a stop channel to that function.
Code review by @zerg-junior
* Remove 'team' label from the statefulset selector.
It was never supposed to be there, but implicitly the statefulset
creates a selector out of the meta.labels field. That is a problem
with recent Kubernetes, since the statefulset cannot pick up pods
with non-matching label selectors, and we rely on the statefulset
picking up old pods after statefulset replacement.
Make sure selector changes trigger replacement of the statefulset.
In the case the new selector has more labels than the old one, nothing
should be done with the statefulset; otherwise the new statefulset
won't see orphaned pods from the old one, as they won't match the
selector.
See https://github.com/kubernetes/kubernetes/issues/46901#issuecomment-356418393
By default, spilo sets WAL_BUCKET_SCOPE_PREFIX depending on the cluster
namespace, possibly to a non-empty string. However, we won't be able to
clone those clusters, as the clone prefix is always set to an empty string.
We could go the other way around and set both WAL_BUCKET_SCOPE_PREFIX
and CLONE_WAL_BUCKET_SCOPE_PREFIX to a non-default value that depends
on the cluster's namespace, but it seems that we don't need this
feature for now (no conflict will occur even for clusters with the
same name and different namespaces because of the SCOPE_SUFFIX) and
it requires some additional testing first.
* Track origin of roles.
* Propagate changes on infrastructure roles to corresponding secrets.
When the password in the infrastructure role is updated, re-generate the
secret for that role.
Previously, the password for an infrastructure role was always fetched from
the secret, making any updates to such a role a no-op after the corresponding
secret had been generated.
This allows using S3 API in order to simplify finding all folders that are different only by a suffix, since the suffix delimiter will not occur in the suffix itself (currently being a UID).