* Bump up the Spilo version to use Patroni >= v1.4.4; this fixes issues with the k8s 1.10 API changes
* Bump up the operator version to use the new 'etcd_host' default value
* Re-use 'zalando-postgres-operator' as a pod service account and add extra RBAC permissions to make it work
* Document connecting to Postgres via psql in the quickstart
After an unsuccessful initial cluster sync it may happen that the
cluster statefulset is empty. This has become more likely since
88d6a7be3, which introduced syncing volumes before statefulsets;
the volume sync may fail for different reasons (e.g. the volume has
been shrunk, or too many calls to Amazon).
Some special Patroni PostgreSQL parameters, such as max_connections,
should reside in the bootstrap.dcs.postgresql.parameters section
to take effect.
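A minimal sketch of where such a parameter has to live, assuming a Patroni-style configuration layout (the Go program below just prints the nested structure; max_connections is the example value from above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Parameters under bootstrap.dcs.postgresql.parameters are written to the
	// DCS at bootstrap time and thus apply cluster-wide; placing max_connections
	// only under a top-level postgresql section would not have that effect.
	patroniConfig := map[string]interface{}{
		"bootstrap": map[string]interface{}{
			"dcs": map[string]interface{}{
				"postgresql": map[string]interface{}{
					"parameters": map[string]interface{}{
						"max_connections": "100",
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(patroniConfig, "", "  ")
	fmt.Println(string(out))
}
```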
When an error occurs while deleting a Kubernetes object belonging to
the cluster being removed, it makes no sense to abort the deletion:
the manifest will be removed anyway, so all the objects after the one
we aborted at would stay around forever.
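A minimal sketch of the resulting behavior, with hypothetical helper names rather than the operator's actual API: every deletion step is attempted, and a failing step is only logged.

```go
package main

import "log"

type cluster struct{}

func (c *cluster) deleteEndpoint() error    { return nil }
func (c *cluster) deleteService() error     { return nil }
func (c *cluster) deleteStatefulSet() error { return nil }
func (c *cluster) deleteSecrets() error     { return nil }

func (c *cluster) delete() {
	steps := []struct {
		name string
		fn   func() error
	}{
		{"endpoint", c.deleteEndpoint},
		{"service", c.deleteService},
		{"statefulset", c.deleteStatefulSet},
		{"secrets", c.deleteSecrets},
	}
	for _, step := range steps {
		if err := step.fn(); err != nil {
			// Log and keep going: aborting here would leave every object after
			// this one behind once the manifest itself is removed.
			log.Printf("could not delete %s: %v", step.name, err)
		}
	}
}

func main() {
	(&cluster{}).delete()
}
```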
Do not use the statefulset's number of pods to figure out the running
ones for volume resizing, since the statefulset pointer could be nil.
Instead, look at the pods that are actually running.
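A minimal sketch of the idea, assuming the pod list has already been fetched from the Kubernetes API (the helper below is illustrative, not the operator's actual code):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// runningPods filters the listed pods down to those that are actually running,
// instead of deriving the count from the statefulset's replicas field, whose
// pointer may be nil.
func runningPods(pods []v1.Pod) []v1.Pod {
	var running []v1.Pod
	for _, pod := range pods {
		if pod.Status.Phase == v1.PodRunning {
			running = append(running, pod)
		}
	}
	return running
}

func main() {
	pods := []v1.Pod{
		{Status: v1.PodStatus{Phase: v1.PodRunning}},
		{Status: v1.PodStatus{Phase: v1.PodPending}},
	}
	fmt.Printf("resizing volumes for %d running pod(s)\n", len(runningPods(pods)))
}
```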
Avoid sharing pointers to the same spec data between the informer
and the clusters. The only catch is that the error field is cleared
during the deep copy, since it is an interface that may contain private
fields that cannot be copied. However, the error is only used when
the manifest is parsed and before it is queued; therefore, we never
refer to that field in the cluster structure.
987b434 introduced a new function that modifies the cluster spec in
memory before the cluster processes it. Unfortunately, the instance
being modified turned out to be the one stored internally in the
PostgresInformer, resulting in those modifications being propagated
with further cluster events and producing update loops on some
occasions. This commit makes sure we copy the spec before putting it
into the clusterEventQueues.
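A minimal sketch of the fix with simplified stand-in types: the informer's spec is deep-copied (dropping the uncopyable error field) before the event is queued, so later mutations never reach the informer's copy.

```go
package main

import "fmt"

// postgresSpec is a simplified stand-in for the manifest spec kept by the informer.
type postgresSpec struct {
	Parameters map[string]string
	Error      error // consulted only while parsing the manifest, before queueing
}

// deepCopy copies the spec; the Error field is deliberately dropped because it
// is an interface that may wrap private, uncopyable fields.
func (s *postgresSpec) deepCopy() *postgresSpec {
	params := make(map[string]string, len(s.Parameters))
	for k, v := range s.Parameters {
		params[k] = v
	}
	return &postgresSpec{Parameters: params}
}

func main() {
	informerSpec := &postgresSpec{Parameters: map[string]string{"max_connections": "100"}}

	queue := make(chan *postgresSpec, 1)
	queue <- informerSpec.deepCopy() // never enqueue the informer's own pointer

	queued := <-queue
	queued.Parameters["max_connections"] = "200"            // mutations stay local to the event
	fmt.Println(informerSpec.Parameters["max_connections"]) // still "100"
}
```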
* Deprecate old LB options, fix endpoint sync.
- deprecate useLoadBalancer and replicaLoadBalancer from the manifest
and enable_load_balancer from the operator configuration. The old
operator configuration options become no-ops with this commit. The
old manifest options `useLoadBalancer` and `replicaLoadBalancer`
are still consulted, but only in the absence of the new ones
(enableMasterLoadBalancer and enableReplicaLoadBalancer); see the
sketch below.
- Make sure the endpoint being created during the sync receives the
proper addresses subset. This is more critical for the replicas, as
for the masters Patroni will normally re-create the endpoint before
the operator does.
- Avoid creating the replica endpoint, since it will be created automatically
by the corresponding service.
- Update the README and unit tests.
Code review by @mgomezch and @zerg-junior
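A minimal sketch of the resulting precedence, with simplified stand-in types around the manifest options named above:

```go
package main

import "fmt"

type postgresSpec struct {
	EnableMasterLoadBalancer  *bool
	EnableReplicaLoadBalancer *bool
	UseLoadBalancer           *bool // deprecated
	ReplicaLoadBalancer       *bool // deprecated
}

// lbEnabled prefers the new option and falls back to the deprecated one only
// when the new option is absent from the manifest.
func lbEnabled(newOpt, deprecatedOpt *bool) bool {
	if newOpt != nil {
		return *newOpt
	}
	if deprecatedOpt != nil {
		return *deprecatedOpt
	}
	return false
}

func main() {
	t := true
	spec := postgresSpec{UseLoadBalancer: &t} // only the deprecated option is set
	fmt.Println("master LB:", lbEnabled(spec.EnableMasterLoadBalancer, spec.UseLoadBalancer))
	fmt.Println("replica LB:", lbEnabled(spec.EnableReplicaLoadBalancer, spec.ReplicaLoadBalancer))
}
```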
The node_eol_label is obsolete and not used.
The node_readiness_label, if set, will prevent scheduling pods on nodes without that label; by default minikube doesn't set any labels on its node.
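A minimal sketch of why that happens, assuming the operator translates node_readiness_label into a required node affinity term on the Postgres pods (the label key and value below are just an example):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeAffinity builds a required node affinity from the readiness labels, so the
// scheduler only considers nodes that carry every listed label.
func nodeAffinity(readinessLabels map[string]string) *v1.Affinity {
	var reqs []v1.NodeSelectorRequirement
	for key, value := range readinessLabels {
		reqs = append(reqs, v1.NodeSelectorRequirement{
			Key:      key,
			Operator: v1.NodeSelectorOpIn,
			Values:   []string{value},
		})
	}
	return &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{MatchExpressions: reqs}},
			},
		},
	}
}

func main() {
	// A default minikube node carries no such label, so a pod with this affinity
	// would stay Pending until the node is labeled accordingly.
	fmt.Printf("%+v\n", nodeAffinity(map[string]string{"lifecycle-status": "ready"}))
}
```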
* Sanity checks for the cluster name, improve tests.
- check that the normal and clone cluster names comply with the rules
for a valid service name (see the sketch below). For a clone cluster,
only do it if the clone timestamp is not set; with a clone timestamp
set, the clone name points to the S3 bucket
- add tests and improve existing ones, making sure we don't call the
Error() method on an empty error, and that we don't miss cases where
the expected error is not empty but the actual call under test does
not return an error.
Code review by @zerg-junior and @Jan-M
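A minimal sketch of the service-name check referenced above; the regular expression mirrors the Kubernetes DNS-1035 label rule that service names must satisfy:

```go
package main

import (
	"fmt"
	"regexp"
)

// serviceNameRe mirrors the DNS-1035 label rule Kubernetes applies to service names.
var serviceNameRe = regexp.MustCompile(`^[a-z]([-a-z0-9]*[a-z0-9])?$`)

func validClusterName(name string) error {
	if len(name) > 63 || !serviceNameRe.MatchString(name) {
		return fmt.Errorf("name %q cannot be used as a Kubernetes service name", name)
	}
	return nil
}

func main() {
	fmt.Println(validClusterName("acid-test-cluster")) // <nil>
	fmt.Println(validClusterName("AcidTestCluster"))   // error: invalid service name
}
```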
* Improve the pod moving behavior during the Kubernetes cluster upgrade.
Fix an issue where the operator did not wait for at least one replica
to become ready (if the statefulset indicates there are replicas) when
moving the master pod off the decommissioned node. Resolves the first
part of #279.
Small fixes to error messages.
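A minimal sketch of the waiting step, with hypothetical names and a pluggable readiness check rather than the operator's actual code:

```go
package main

import (
	"errors"
	"time"
)

// waitForReadyReplica blocks until at least one replica reports ready, but only
// if the statefulset says there should be replicas at all.
func waitForReadyReplica(expectedReplicas int32, readyReplicas func() int, timeout time.Duration) error {
	if expectedReplicas < 2 {
		return nil // single-pod cluster: there is no replica to wait for
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if readyReplicas() > 0 {
			return nil // safe to move the master pod off the decommissioned node
		}
		time.Sleep(time.Second)
	}
	return errors.New("no replica became ready in time; not moving the master pod")
}

func main() {
	// Pretend one replica is already ready, so the master pod can be moved immediately.
	_ = waitForReadyReplica(2, func() int { return 1 }, time.Minute)
}
```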
* Eliminate a race condition during the switchover.
When the operator initiates a failover (switchover) that fails and
then retries it a second time, it may happen that the previous
waitForPodChannel is still active. As a result, the operator subscribes
to the former master pod twice, causing a panic.
The problem was that the original code didn't bother to cancel
waitForPodLabel for the new master pod when the failover failed.
This commit fixes it by adding a stop channel to that function.
Code review by @zerg-junior
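A minimal sketch of the stop-channel pattern described above (the channel layout is simplified):

```go
package main

import (
	"errors"
	"time"
)

// waitForPodLabel returns as soon as a pod event arrives, the stop channel is
// closed, or the timeout expires, so a caller can always cancel an outstanding wait.
func waitForPodLabel(podEvents <-chan string, stop <-chan struct{}, timeout time.Duration) (string, error) {
	select {
	case ev := <-podEvents:
		return ev, nil
	case <-stop:
		return "", errors.New("waiting for the pod label was cancelled")
	case <-time.After(timeout):
		return "", errors.New("timeout while waiting for the pod label")
	}
}

func main() {
	podEvents := make(chan string)
	stop := make(chan struct{})

	go func() {
		// The switchover failed: cancel the outstanding wait before retrying,
		// so the retry never subscribes to the former master pod a second time.
		close(stop)
	}()

	if _, err := waitForPodLabel(podEvents, stop, time.Minute); err != nil {
		_ = err // expected here: the wait was cancelled
	}
}
```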