Previously, the operator started to move pods off the nodes to be
decommissioned by watching the eol_node_label value. Every new Postgres
pod was created with an anti-affinity to that label, making sure
that the pods being moved would not land on another node that is about
to be decommissioned.
The changes introduce another label that marks a node as ready. The
new pod affinity ensures that pods are only scheduled to nodes
marked as ready, replacing the previous anti-affinity. That way the
nodes can transition from pending-decommission to the other statuses
(drained, terminating) without pods suddenly being scheduled to them.
In addition, rename the label that triggers the start of the upgrade
process to node_eol_label (for consistency with node_readiness_label)
and set its default value to lifecycle-status:pending-decommission.
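Roughly, the readiness-based node affinity described above could be built like this; a minimal sketch assuming the readiness label is lifecycle-status=ready (the key and value here are illustrative, not necessarily the operator's defaults):

```go
// Minimal sketch (not the operator's actual code): a node affinity that only
// allows scheduling onto nodes carrying the readiness label.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func nodeAffinity(readinessLabelKey, readinessLabelValue string) *v1.Affinity {
	return &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{
					{
						MatchExpressions: []v1.NodeSelectorRequirement{
							{
								Key:      readinessLabelKey,
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{readinessLabelValue},
							},
						},
					},
				},
			},
		},
	}
}

func main() {
	// Illustrative label key/value; the actual defaults are configurable.
	fmt.Printf("%+v\n", nodeAffinity("lifecycle-status", "ready"))
}
```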
- fix the cursor not being closed for queries that return no
rows.
- fix syncing of the user options, as previously those were not
fetched from the database.
Add options to the PgUser structure, potentially allowing per-role
options to be set in the cluster definition as well.
Introduce the api_roles_configuration operator option with the default
of log_statement=all
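As a rough illustration of per-role options, a PgUser-like structure with an options map could be turned into ALTER ROLE ... SET statements; the field and type names below are hypothetical, not the operator's exact ones:

```go
// Hypothetical sketch of a PgUser-like structure carrying per-role options.
package main

import "fmt"

type PgUser struct {
	Name       string
	Flags      []string
	MemberOf   []string
	Parameters map[string]string // per-role options, e.g. log_statement=all
}

// alterRoleSQL renders ALTER ROLE ... SET statements for the role options.
func alterRoleSQL(u PgUser) []string {
	stmts := make([]string, 0, len(u.Parameters))
	for k, v := range u.Parameters {
		stmts = append(stmts, fmt.Sprintf("ALTER ROLE %q SET %s TO '%s'", u.Name, k, v))
	}
	return stmts
}

func main() {
	u := PgUser{Name: "robot_api", Parameters: map[string]string{"log_statement": "all"}}
	for _, s := range alterRoleSQL(u) {
		fmt.Println(s)
	}
}
```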
* Add node toleration config to PodSpec
This makes it possible to taint nodes dedicated to Postgres and prevents other pods from running on these nodes (a sketch follows this list).
* Document taint and toleration setup
And remove the setting from the default operator ConfigMap
* Allow overriding tolerations with the Postgres manifest
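A minimal sketch of such a toleration, assuming the dedicated Postgres nodes carry a hypothetical postgres=enabled:NoSchedule taint:

```go
// Illustrative sketch: a toleration matching a hypothetical
// "postgres=enabled:NoSchedule" taint on dedicated Postgres nodes.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func postgresTolerations() []v1.Toleration {
	return []v1.Toleration{
		{
			Key:      "postgres",
			Operator: v1.TolerationOpEqual,
			Value:    "enabled",
			Effect:   v1.TaintEffectNoSchedule,
		},
	}
}

func main() {
	fmt.Printf("%+v\n", postgresTolerations())
}
```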
Allow cloning clusters from the operator.
The changes add a new JSON node `clone` with the fields `cluster`
and `timestamp`. `cluster` is mandatory, and setting a non-empty
`timestamp` triggers WAL-E point-in-time recovery. Spilo and Patroni do
all the heavy lifting; the operator just defines certain variables and
gathers some data about how to connect to the host to clone from or the
target S3 bucket.
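A sketch of how the `clone` node might be represented and turned into environment variables for Spilo; the struct layout and variable names below are illustrative assumptions, not the operator's exact code:

```go
// Sketch of a clone description; struct and env variable names are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

type CloneDescription struct {
	Cluster   string `json:"cluster"`             // mandatory: cluster to clone from
	Timestamp string `json:"timestamp,omitempty"` // non-empty value triggers WAL-E point-in-time recovery
}

func cloneEnv(c CloneDescription) map[string]string {
	// Illustrative variable names; the real ones are defined by Spilo/the operator.
	env := map[string]string{"CLONE_SCOPE": c.Cluster}
	if c.Timestamp != "" {
		env["CLONE_TARGET_TIME"] = c.Timestamp // point-in-time recovery target
	}
	return env
}

func main() {
	var c CloneDescription
	_ = json.Unmarshal([]byte(`{"cluster": "acid-batman", "timestamp": "2017-12-19T12:40:33+01:00"}`), &c)
	fmt.Println(cloneEnv(c))
}
```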
As a minor change, set the image pull policy to IfNotPresent instead
of Always to simplify local testing.
Change the default replication username to standby.
* Deny all requests to the load balancer by default.
* Operator-wide toggle for the load balancer.
* Define per-cluster useLoadBalancer option.
If useLoadBalancer is not set, the operator-wide default applies. If it
is true, the load balancer is created; otherwise a service of type ClusterIP is
created (see the sketch after this list).
Internally, we have to completely replace the service if the service type
changes. We cannot patch, since some fields from the old service that would
remain after the patch are incompatible with the new one, and handling them
explicitly when updating the service is ugly and error-prone. We cannot
update the service because of its immutable fields, which leaves only the
option of deleting the old service and creating the new one. Unfortunately,
there is still an issue of unnecessary removal of endpoints associated with
the service; it will be addressed in future commits.
* Revert the unintended effect of go fmt
* Recreate endpoints on service update.
When the service type is changed, the service is deleted and then
one with the new type is created. Unfortunately, the endpoints are
deleted as well. Re-create them afterwards, preserving the original
addresses stored in them.
* Improve error messages and comments. Use generate instead of gen in names.
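A minimal sketch of the useLoadBalancer decision described above, assuming the per-cluster flag is a *bool so that "not set" can be distinguished from false (names are illustrative):

```go
// Sketch: per-cluster useLoadBalancer flag falling back to the operator-wide default.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func serviceType(clusterUseLB *bool, operatorDefaultLB bool) v1.ServiceType {
	useLB := operatorDefaultLB
	if clusterUseLB != nil {
		useLB = *clusterUseLB // the per-cluster setting overrides the default
	}
	if useLB {
		return v1.ServiceTypeLoadBalancer
	}
	return v1.ServiceTypeClusterIP
}

func main() {
	enabled := true
	fmt.Println(serviceType(nil, false), serviceType(&enabled, false))
}
```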
The flag adds a replica service with the name cluster_name-repl and
a DNS name that defaults to {cluster}-repl.{team}.{hostedzone}.
The implementation converts the Service field of the cluster into a map
with one or two elements and deals with the cases when the new flag
is changed on a running cluster
(the update and the sync should create or delete the replica service).
Listing cluster resources now picks up the master and replica services
as well as the master endpoint.
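For illustration, the naming scheme could be expressed like this (a sketch of the convention described above, not the operator's actual helpers):

```go
// Sketch of the replica service and DNS naming convention.
package main

import "fmt"

func replicaServiceName(clusterName string) string {
	return clusterName + "-repl"
}

func replicaDNSName(clusterName, teamName, hostedZone string) string {
	// {cluster}-repl.{team}.{hostedzone}
	return fmt.Sprintf("%s-repl.%s.%s", clusterName, teamName, hostedZone)
}

func main() {
	fmt.Println(replicaServiceName("acid-batman"))
	fmt.Println(replicaDNSName("batman", "acid", "db.example.com"))
}
```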
* Update the spec when updating the cluster.
Command-line options --nodatabaseaccess and --noteamsapi disable
access to the Postgres database and all Teams API interaction, respectively.
This is useful for debugging purposes when the operator runs outside of
the cluster (with the --outofcluster flag).
The same effect can be achieved by setting enable_db_access and/or
enable_teams_api to false.
Run operations concerning multiple clusters in parallel. Each cluster gets its
own worker for create, update, sync or delete operations. Each worker
acquires the lock on a cluster; subsequent operations on the same cluster
have to wait until the current one finishes. There is a pool of parallel
workers, configurable with the `workers` parameter in the ConfigMap and set by
default to 4. Cluster-related tasks are assigned to the workers based on
the cluster name: tasks for the same cluster are always assigned to the
same worker. There is no blocking between workers, although there is a chance
that a single worker becomes a bottleneck if too many clusters are
assigned to it; therefore, for large-scale deployments it might be necessary
to bump up the number of workers from the default value.
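The dispatch rule can be sketched as hashing the cluster name modulo the number of workers, so that events for the same cluster always land on the same worker; this mirrors the idea rather than the operator's exact code:

```go
// Sketch: deterministic assignment of clusters to workers by name hash.
package main

import (
	"fmt"
	"hash/fnv"
)

func workerID(clusterName string, numWorkers uint32) uint32 {
	h := fnv.New32a()
	_, _ = h.Write([]byte(clusterName))
	return h.Sum32() % numWorkers
}

func main() {
	for _, name := range []string{"acid-batman", "acid-robin", "acid-joker"} {
		fmt.Printf("%s -> worker %d\n", name, workerID(name, 4))
	}
}
```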
* Avoid "bulk-comparing" pod resources during sync.
First attempt to fix bogus restarts due to the reported mismatch
of container resources where one of the resources is an empty struct,
while the other has all fields set to nil.
In addition, add an ability to set limits and requests per pod, as well as the operator-level defaults.
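A sketch of the comparison problem and a field-wise alternative: reflect.DeepEqual distinguishes an empty ResourceList from a nil one, while comparing the individual quantities does not (illustrative, not the operator's actual code):

```go
// Sketch: why a "bulk" comparison of resources misfires and a field-wise alternative.
package main

import (
	"fmt"
	"reflect"

	v1 "k8s.io/api/core/v1"
)

// sameResources compares two resource lists entry by entry instead of
// comparing the surrounding structs as a whole.
func sameResources(a, b v1.ResourceList) bool {
	if len(a) != len(b) {
		return false
	}
	for name, qty := range a {
		other, ok := b[name]
		if !ok || qty.Cmp(other) != 0 {
			return false
		}
	}
	return true
}

func main() {
	var nilList v1.ResourceList
	empty := v1.ResourceList{}
	fmt.Println(reflect.DeepEqual(empty, nilList)) // false: the bulk comparison reports a mismatch
	fmt.Println(sameResources(empty, nilList))     // true: the field-wise comparison matches
}
```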
* Add infrastructure roles configured globally.
Those are the roles defined in the operator itself. The operator's
configuration refers to the secret containing role names, passwords
and membership information. While they are referred to as roles, in
reality those are users.
In addition, improve the regex to filter out invalid users and
make sure user secret names are compatible with the DNS name spec.
Add an example manifest for the infrastructure roles.
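A hypothetical sketch of consuming such a secret; the key layout (user/password/inrole) and the validation regex below are illustrative assumptions, not the operator's actual format:

```go
// Hypothetical sketch: turning secret data into an infrastructure role.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// roleNameRegexp is a simple validity check for role names (illustrative only).
var roleNameRegexp = regexp.MustCompile(`^[a-z0-9]([-_a-z0-9]*[a-z0-9])?$`)

type InfrastructureRole struct {
	Name     string
	Password string
	MemberOf []string
}

func parseRole(data map[string]string) (*InfrastructureRole, error) {
	name := data["user"]
	if !roleNameRegexp.MatchString(name) {
		return nil, fmt.Errorf("invalid role name %q", name)
	}
	return &InfrastructureRole{
		Name:     name,
		Password: data["password"],
		MemberOf: strings.Split(data["inrole"], ","),
	}, nil
}

func main() {
	role, err := parseRole(map[string]string{"user": "robot_zmon", "password": "secret", "inrole": "admin"})
	fmt.Println(role, err)
}
```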
- Set WAL_S3_BUCKET to tell WAL-E where to fetch/store WAL files
- Set the annotation iam.amazonaws.com/role to set the role used to access AWS
The new env variables are PGOP_WAL_S3_BUCKET and PGOP_KUBE_IAM_ROLE.
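A sketch of how these settings could be propagated to the pod template, with WAL_S3_BUCKET as a container env variable and the IAM role as a pod annotation; only the PGOP_* option names come from the text above, the rest is illustrative:

```go
// Sketch: deriving pod env vars and annotations from the operator's environment.
package main

import (
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
)

func walAndIAMSettings() ([]v1.EnvVar, map[string]string) {
	env := []v1.EnvVar{}
	annotations := map[string]string{}
	if bucket := os.Getenv("PGOP_WAL_S3_BUCKET"); bucket != "" {
		env = append(env, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: bucket})
	}
	if role := os.Getenv("PGOP_KUBE_IAM_ROLE"); role != "" {
		annotations["iam.amazonaws.com/role"] = role
	}
	return env, annotations
}

func main() {
	env, ann := walAndIAMSettings()
	fmt.Println(env, ann)
}
```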
- add a new environment variable for triggering the debug log level
- show both the new and the old object, as well as their diff, during syncs and updates
- use the pretty package to pretty-print Go structures
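For illustration, debug output along these lines could use the kr/pretty package to print the old object, the new object and their diff; the surrounding types are made up for the example:

```go
// Sketch: pretty-printing old/new objects and their diff with kr/pretty.
package main

import (
	"fmt"

	"github.com/kr/pretty"
)

type spec struct {
	Replicas int
	Version  string
}

func main() {
	oldSpec := spec{Replicas: 2, Version: "9.6"}
	newSpec := spec{Replicas: 3, Version: "9.6"}

	fmt.Printf("old: %# v\n", pretty.Formatter(oldSpec))
	fmt.Printf("new: %# v\n", pretty.Formatter(newSpec))
	for _, d := range pretty.Diff(oldSpec, newSpec) {
		fmt.Println("diff:", d)
	}
}
```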