Commit Graph

218 Commits

Author SHA1 Message Date
Sergey Dudoladov 485ec4b8ea Move service account to Controller 2018-04-24 15:13:08 +02:00
Sergey Dudoladov c31c76281c Make operator unaware of its own service account 2018-04-23 14:38:20 +02:00
Sergey Dudoladov 5daf0a4172 Fix error reporting during pod service account creation 2018-04-20 14:20:38 +02:00
Sergey Dudoladov bd51d2922b Turn ServiceAccount into a struct value to avoid a race condition during account creation 2018-04-20 13:05:05 +02:00
Sergey Dudoladov 23f893647c Remove sync of pod service accounts 2018-04-19 15:48:58 +02:00
Sergey Dudoladov 214ae04aa7 Deploy service account for pod creation on demand 2018-04-18 16:20:20 +02:00
Manuel Gómez 88c68712b6
Fix statefulset label selector diffing (#273)
Otherwise, rolling updates are done unnecessarily.
2018-04-06 17:21:57 +02:00
Oleksii Kliukin 9bf80afa6b
Remove team from statefulset selector (#271)
* Remove 'team' label from the statefulset selector.

It was never supposed to be there, but the statefulset implicitly
creates a selector out of the meta.labels field. That is a problem
with recent Kubernetes versions, since a statefulset cannot pick up
pods whose labels do not match its selector, and we rely on the
statefulset picking up old pods after statefulset replacement.

Make sure selector changes trigger replacement of the statefulset.

If the new selector has more labels than the old one, nothing
should be done with the statefulset; otherwise the new statefulset
won't see the orphaned pods from the old one, as they won't match
the selector.

See https://github.com/kubernetes/kubernetes/issues/46901#issuecomment-356418393
2018-04-06 13:58:47 +02:00
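A minimal sketch of the selector rule described in the commit above; the helper name and label values are illustrative, not the operator's actual code. Replacing the statefulset is only safe when the new selector does not add labels, because the replacement must still adopt the pods orphaned by the old statefulset.

```go
package main

import "fmt"

// selectorAddsLabels reports whether newSel requires a label that oldSel did not.
func selectorAddsLabels(oldSel, newSel map[string]string) bool {
	for k, v := range newSel {
		if oldVal, ok := oldSel[k]; !ok || oldVal != v {
			return true
		}
	}
	return false
}

func main() {
	// illustrative label sets; only the removal of "team" mirrors the commit above
	oldSel := map[string]string{"application": "spilo", "cluster-name": "acid-test"}
	newSel := map[string]string{"application": "spilo", "cluster-name": "acid-test", "team": "acid"}

	if selectorAddsLabels(oldSel, newSel) {
		fmt.Println("skip replacement: the new selector would not match the orphaned pods")
	} else {
		fmt.Println("selector changed: replace the statefulset")
	}
}
```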
Oleksii Kliukin 26db91c53e
Improve infrastructure role definitions (#208)
Enhance definitions of infrastructure roles by allowing membership in multiple roles, role options and per-role configuration to be specified in the infrastructure role configmap, which must have the same name as the infrastructure role secret. See manifests/infrastructure-roles-configmap.yaml for examples and the updated README for a description of the different types of database roles supported by the operator and their purposes.

Change the logic of merging infrastructure roles with the manifest roles when they have the same name, to return the infrastructure role unchanged instead of merging. Previously, we used to propagate flags from the manifest role to the resulting infrastructure one, as there was no way to define flags for the infrastructure role; however, this is not the case anymore.

Code review and tests by @erthalion
2018-04-04 17:21:36 +02:00
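A rough sketch of the merge rule described in the commit above; the types and function names are hypothetical stand-ins for the operator's role handling. A same-named infrastructure role now shadows the manifest role as-is, with no flag propagation.

```go
package main

import "fmt"

// pgRole is an illustrative stand-in for the operator's role representation.
type pgRole struct {
	Name  string
	Flags []string
}

// mergeRoles lets a same-named infrastructure role replace the manifest role unchanged.
func mergeRoles(manifestRoles, infraRoles map[string]pgRole) map[string]pgRole {
	merged := make(map[string]pgRole, len(manifestRoles)+len(infraRoles))
	for name, role := range manifestRoles {
		merged[name] = role
	}
	for name, role := range infraRoles {
		merged[name] = role // no flag propagation from the manifest role
	}
	return merged
}

func main() {
	manifest := map[string]pgRole{"robot": {Name: "robot", Flags: []string{"LOGIN", "CREATEDB"}}}
	infra := map[string]pgRole{"robot": {Name: "robot", Flags: []string{"LOGIN"}}}
	fmt.Printf("%+v\n", mergeRoles(manifest, infra)["robot"])
}
```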
zerg-junior d264be9faa
Merge pull request #261 from zalando-incubator/wal_bucket_scope_prefix
Fix clone for origins in non-default namespaces.
2018-04-03 17:47:18 +02:00
zerg-junior ff5793b584
Merge pull request #258 from zalando-incubator/always-create-replica-service
[WIP] Always create replica service
2018-03-29 14:42:26 +02:00
erthalion 8967a3be2c Add tests for load balancer function logic 2018-03-27 12:11:46 +02:00
Sergey Dudoladov ced770a827 Respond to code review 2018-03-26 11:07:32 +02:00
Sergey Dudoladov a8862aeee1 Enable backward compatibility for enable_load_balancer setting from operator configmap 2018-03-19 17:19:50 +01:00
Sergey Dudoladov 386d7b6bdb Implement backward compatibility with older load balancer settings 2018-03-16 13:27:38 +01:00
Sergey Dudoladov 20f30d3739 Update the method for deciding about load balancers 2018-03-14 12:46:58 +01:00
Sergey Dudoladov 0986e56226 Add separate params for master and replica load balancers to operator configuration 2018-03-14 12:12:28 +01:00
Sergey Dudoladov ac6c5bcf09 Explicitly name replica and master load balancer params in PostgresSpec 2018-03-14 12:03:27 +01:00
Sergey Dudoladov 5bc5e70c81 Log if replica service has no load balancer 2018-03-12 16:48:44 +01:00
Sergey Dudoladov 5ff562a607 Minor improvements 2018-03-02 14:03:41 +01:00
Sergey Dudoladov 2aeff096f7 Make ReplicaLoadBalancer a separate toggler 2018-03-02 13:35:25 +01:00
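The load-balancer commits above introduce separate master and replica settings while keeping the older flags working. A hedged sketch of the resulting precedence (field names are illustrative, not necessarily the operator's actual manifest keys): the per-role setting wins, the deprecated flag is honoured when the per-role one is unset, and the operator configmap default applies last.

```go
package main

import "fmt"

// clusterSpec is an illustrative stand-in for the relevant manifest fields.
type clusterSpec struct {
	EnableMasterLoadBalancer  *bool // newer, per-role settings
	EnableReplicaLoadBalancer *bool
	UseLoadBalancer           *bool // deprecated: master load balancer
	ReplicaLoadBalancer       *bool // deprecated: replica load balancer
}

func shouldHaveLoadBalancer(spec clusterSpec, master, operatorDefault bool) bool {
	newSetting, oldSetting := spec.EnableReplicaLoadBalancer, spec.ReplicaLoadBalancer
	if master {
		newSetting, oldSetting = spec.EnableMasterLoadBalancer, spec.UseLoadBalancer
	}
	switch {
	case newSetting != nil:
		return *newSetting
	case oldSetting != nil:
		return *oldSetting // backward compatibility with the older manifest flags
	default:
		return operatorDefault // falls back to the operator configmap setting
	}
}

func main() {
	deprecated := true
	spec := clusterSpec{UseLoadBalancer: &deprecated}
	fmt.Println(shouldHaveLoadBalancer(spec, true, false))  // master: true (old flag honoured)
	fmt.Println(shouldHaveLoadBalancer(spec, false, true))  // replica: true (operator default)
}
```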
Oleksii Kliukin 59a214727c Fix clone for origins in non-default namespaces.
By default, spilo sets WAL_BUCKET_SCOPE_PREFIX depending on the cluster
namespace, possibly to a non-empty string. However, we won't be able to
clone those clusters, as the clone prefix is always set to an empty string.

We could go the other way around and set both WAL_BUCKET_SCOPE_PREFIX
and CLONE_WAL_BUCKET_SCOPE_PREFIX to a non-default value that depends
on the cluster's namespace, but it seems that we don't need this
feature for now (no conflict will occur even for clusters with the
same name and different namespaces because of the SCOPE_SUFFIX) and
it requires some additional testing first.
2018-03-01 12:26:09 +01:00
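The fix above, together with the two one-line commits below it, pins both prefixes to an empty string. A minimal sketch of the resulting environment; only the two variable names come from the commit messages, the helper is hypothetical.

```go
package main

import "fmt"

// walPrefixEnv returns the prefix-related environment for a spilo pod; keeping
// both values empty lets a clone find its origin's WAL files regardless of the
// origin's namespace.
func walPrefixEnv() map[string]string {
	return map[string]string{
		"WAL_BUCKET_SCOPE_PREFIX":       "",
		"CLONE_WAL_BUCKET_SCOPE_PREFIX": "",
	}
}

func main() {
	for name, value := range walPrefixEnv() {
		fmt.Printf("%s=%q\n", name, value)
	}
}
```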
Sergey Dudoladov 35104cb72b Add CLONE_ prefix to the env var 2018-03-01 11:19:15 +01:00
Sergey Dudoladov bcb8caeddf Set WAL_BUCKET_SCOPE_PREFIX to the empty string 2018-03-01 11:16:47 +01:00
Sergey Dudoladov fb21246fcd Remove early stopping conditions that rely on the replica service being absent 2018-02-27 17:21:51 +01:00
Sergey Dudoladov 28fed26845 Do not delete an endpoint for the replica service w/o load balancer during sync 2018-02-27 17:18:30 +01:00
Sergey Dudoladov b107d781e8 Do not delete replica service w/o load balancer during sync 2018-02-27 17:16:00 +01:00
Sergey Dudoladov 2ef069ee93 Create/delete replica service regardless of load balancer setup 2018-02-27 17:10:49 +01:00
zerg-junior 0f392c2007
Merge pull request #252 from zalando-incubator/label-teams
Add 'team' label to pods, stateful sets, secrets and pod disruption budgets
2018-02-26 12:57:26 +01:00
Sergey Dudoladov 071547e5bf Add extra labels only during resource creation 2018-02-26 11:11:50 +01:00
Oleksii Kliukin 2bb7e98268
update individual role secrets from infrastructure roles (#206)
* Track origin of roles.

* Propagate changes on infrastructure roles to corresponding secrets.

When the password in the infrastructure role is updated, re-generate the
secret for that role.

Previously, the password for an infrastructure role was always fetched from
the secret, making any updates to such role a no-op after the corresponding
secret had been generated.
2018-02-23 17:24:04 +01:00
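A hedged sketch of the behaviour described in the commit above; the types are illustrative. Once roles carry their origin, the secret can be regenerated whenever an infrastructure role's password no longer matches it, instead of always trusting the existing secret.

```go
package main

import "fmt"

// role is an illustrative stand-in; Origin records where the role was defined.
type role struct {
	Origin   string // "manifest" or "infrastructure"
	Name     string
	Password string
}

// syncRoleSecret returns the password the secret should hold and whether it changed.
func syncRoleSecret(r role, secretPassword string) (string, bool) {
	if r.Origin == "infrastructure" && secretPassword != r.Password {
		return r.Password, true // propagate the updated infrastructure password
	}
	return secretPassword, false
}

func main() {
	r := role{Origin: "infrastructure", Name: "robot", Password: "new-password"}
	pw, updated := syncRoleSecret(r, "old-password")
	fmt.Println(pw, updated)
}
```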
Sergey Dudoladov 00dc810544 Add 'team' label to pods, stateful sets, secrets and pod disruption budgets 2018-02-23 14:36:10 +01:00
Dmitrii Dolgov ef50b147c5 Use list of checks instead of a map 2018-02-23 14:24:33 +01:00
Dmitrii Dolgov 95d86c7600 Move container comparison logic to a separate function 2018-02-23 11:58:37 +01:00
Oleksii Kliukin c4aab502b3
Remove Patroni leftover objects on cluster deletion. (#244)
* On cluster deletion, remove all endpoints and configmaps left behind by Patroni when it runs with Kubernetes support.
2018-02-23 09:52:22 +01:00
Dmitry Dolgov bf4b0f0f33
Merge pull request #240 from zalando-incubator/feature/goreport-improvements
Some improvements for golint, ineffassign and misspell
2018-02-22 11:31:08 +01:00
Oleksii Kliukin cca73e30b7
Make code around recreating pods and creating objects in the database less brittle (#213)
There used to be a masterLess flag that was supposed to indicate whether the cluster it belongs to runs without an acting master by design. At some point, as we didn't really have support for such clusters, the flag was misused to indicate that there is no master in the cluster. However, that was not done consistently (a cluster with not all pods running would never be marked masterless, even when the master was not among the running pods), and it was based on the wrong assumption that a masterless cluster will remain masterless until the next attempt to change that flag, ignoring the possibility of the master coming up or some node doing a successful promotion. Therefore, this PR gets rid of that flag completely.

When the cluster is running with 0 instances, there is obviously no master and it makes no sense to create any database objects inside the non-existing master. Therefore, this PR introduces an additional check for that.

recreatePods assumed that the roles of the pods recorded when the function started would not change; for instance, terminated replica pods were expected to come back as replicas. Revisit that assumption by looking at the actual role of the re-spawned pods; that avoids a failover if some replica has been promoted to the master role while being re-spawned. In addition, if the failover from the old master was unsuccessful, we used to stop and leave the old master running on an old pod, without recording this fact anywhere. This PR makes a failover failure emit a warning, but not stop recreating the last master pod; in the worst case, the running master will be terminated, however, this case is a rather unlikely one.

As a side effect, make waitForPodLabel return the pod definition it waited for, avoiding extra API calls in recreatePods and movePodFromEndOfLifeNode.
2018-02-22 10:42:05 +01:00
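A simplified sketch of the recreate flow described above; the types and helper names are hypothetical, not the operator's actual API. The key points from the commit: re-read each pod's role after it is re-spawned, and treat a failed switchover from the old master as a warning rather than leaving the old master pod untouched.

```go
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	Name string
	Role string // re-read from the live pod after it is re-spawned
}

// recreateAll re-creates replicas first, re-checks their roles, and only then
// deals with the old master; a failed switchover is logged, not fatal.
func recreateAll(pods []pod, recreate func(pod) pod, switchover func(oldMaster pod) error) {
	var oldMaster *pod
	newMasterAppeared := false
	for i, p := range pods {
		if p.Role == "master" {
			oldMaster = &pods[i] // postpone: recreate the master pod last
			continue
		}
		if recreate(p).Role == "master" {
			newMasterAppeared = true // a replica was promoted while re-spawning
		}
	}
	if oldMaster == nil {
		return
	}
	if !newMasterAppeared {
		if err := switchover(*oldMaster); err != nil {
			fmt.Println("warning: switchover failed, recreating the master pod anyway:", err)
		}
	}
	recreate(*oldMaster)
}

func main() {
	pods := []pod{{"pod-0", "master"}, {"pod-1", "replica"}}
	recreateAll(pods,
		func(p pod) pod { return p }, // stub: the re-spawned pod keeps its role
		func(pod) error { return errors.New("no healthy candidate") },
	)
}
```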
zerg-junior b0549c3c9c
Merge pull request #225 from zalando-incubator/support-many-namespaces
Support many namespaces
2018-02-20 17:39:42 +01:00
Oleksii Kliukin 99c090899f
Change the suffix delimiter to slash. (#242)
This allows using the S3 API to simplify finding all folders that differ only by a suffix, since the suffix delimiter will not occur in the suffix itself (currently a UID).
2018-02-20 16:31:44 +01:00
Oleksii Kliukin c597377617
Use cluster UID as a suffix to the WAL bucket. (#211)
Avoid reusing the WAL S3 bucket of an older cluster that had the same name as the new one.

For the new cluster, the S3 bucket name will include a suffix equal to the UID of the PostgreSQL object describing the cluster. That way, the bucket name will stay the same for all members iff they correspond to the same PostgreSQL cluster object.

When "clone: uid:" key is present in the cluster manifest and the cluster is cloned from an S3 bucket (currently that happens if the endTimestamp is present in the clone description) the S3 bucket to clone from is suffixed with the -uid value.
2018-02-20 15:36:43 +01:00
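A rough sketch of the naming rule described above (the function is illustrative): the per-cluster suffix is the UID of the PostgreSQL object, so a re-created cluster with the same name writes WAL to a fresh location; commit #242 above changes the delimiter to a slash.

```go
package main

import "fmt"

// walBucketScopeSuffix derives the per-cluster suffix from the object UID;
// an empty UID yields no suffix.
func walBucketScopeSuffix(clusterUID string) string {
	if clusterUID == "" {
		return ""
	}
	return "/" + clusterUID
}

func main() {
	// the UID below is a made-up example value
	fmt.Println(walBucketScopeSuffix("3cbb44a2-15dc-11e8-b565-02420a000b02"))
}
```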
Dmitrii Dolgov a7cd859919 Some improvements for golint, ineffassign and misspell 2018-02-19 17:46:31 +01:00
Sergey Dudoladov f194a2ae5a Introduce changes from the PR #200 by @alexeyklyukin 2018-02-07 14:02:32 +01:00
Sergey Dudoladov ea84f9d577 Rename the configmap 'namespace' entry to avoid confusion with the map's own namespace 2018-02-06 15:09:00 +01:00
Oleksii Kliukin b90a36c909
Set node_readiness_label default to an empty value. (#204)
Previously, it was set to lifecycle-status:ready, breaking a lot
of minikube deployments. It was also not possible before to run
with this label set to an empty value.

Document the effect of the label in the new section of the
documentation.
2018-01-16 15:43:03 +01:00
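A minimal sketch of how the empty default plays out; the helper is illustrative, not the operator's code. With no configured readiness label, every node counts as ready, which is what single-node minikube setups need.

```go
package main

import "fmt"

// nodeIsReady treats an empty readiness label as matching every node.
func nodeIsReady(nodeLabels, readinessLabel map[string]string) bool {
	for k, v := range readinessLabel {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	minikubeNode := map[string]string{"kubernetes.io/hostname": "minikube"}
	fmt.Println(nodeIsReady(minikubeNode, nil))                                            // true
	fmt.Println(nodeIsReady(minikubeNode, map[string]string{"lifecycle-status": "ready"})) // false
}
```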
Manuel Gómez bf4406d2a4 Consider container names in Statefulset diffs (#210)
This includes a comparison on container names being equal in the
decision of whether a Statefulset has been updated.
2018-01-16 12:06:11 +01:00
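A small sketch of the comparison added by the commit above (the type and function names are illustrative): two container lists only match if their names line up as well.

```go
package main

import "fmt"

type container struct{ Name, Image string }

// containerNamesEqual reports whether both lists have the same container names in order.
func containerNamesEqual(current, desired []container) bool {
	if len(current) != len(desired) {
		return false
	}
	for i := range current {
		if current[i].Name != desired[i].Name {
			return false
		}
	}
	return true
}

func main() {
	current := []container{{Name: "postgres", Image: "spilo:1.3"}}
	desired := []container{{Name: "postgresql", Image: "spilo:1.3"}}
	fmt.Println(containerNamesEqual(current, desired)) // false -> the statefulset needs an update
}
```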
Oleksii Kliukin 23011bdf9a
Migrate only master pods. Migrate single masters. (#199)
Avoid migrating replica pods, since they will be handled by the
node draining anyway (the PDB specifies that only masters are to
be kept).

Allow migration of the single-pod clusters.
2018-01-09 11:55:11 +01:00
zerg-junior bb5ce6cbbe
Merge pull request #195 from zalando-incubator/databases-rest-endpoint
Add a REST endpoint to list databases in all clusters
2018-01-09 11:53:32 +01:00
Oleksii Kliukin 8e99518eeb
Improve behavior on node decommissioning (#184)
* Trigger the node migration on the lack of the readiness label.

* Examine the node's readiness status on node add.

Make sure we don't miss a not-ready node, especially when the
operator is killed during the migration.
2018-01-04 11:53:15 +01:00
Manuel Gómez 1109cfa7a1
Add PostgreSQL pod namespace to the Scalyr sidecar environment (#196)
Another tiny bit of information that could be useful for log filters
once we start deploying clusters into separate namespaces.
2017-12-22 17:12:50 +01:00
Oleksii Kliukin 9720ac1f7e WIP: Hold the proper locks while examining the list of databases.
Introduce a new lock called specMu lock to protect the cluster spec.
This lock is held on update and sync, and when retrieving the spec in
the API code. There is no need to acquire it for cluster creation and
deletion: creation assigns the spec to the cluster before linking it to
the controller, and deletion just removes the cluster from the list in
the controller, both while holding the global clustersMu lock.
2017-12-22 13:06:11 +01:00
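A minimal sketch of the locking scheme described in the commit above; only the specMu name follows the message, the rest is illustrative. The per-cluster specMu guards reads and writes of the spec, while the controller-level clustersMu guards the cluster registry.

```go
package main

import (
	"fmt"
	"sync"
)

// Cluster holds the spec behind its own mutex, as the commit above describes.
type Cluster struct {
	specMu sync.RWMutex
	spec   interface{} // stands in for the Postgres spec
}

// setSpec is what update and sync would use.
func (c *Cluster) setSpec(newSpec interface{}) {
	c.specMu.Lock()
	defer c.specMu.Unlock()
	c.spec = newSpec
}

// getSpec is what the REST API code would use when listing databases.
func (c *Cluster) getSpec() interface{} {
	c.specMu.RLock()
	defer c.specMu.RUnlock()
	return c.spec
}

func main() {
	c := &Cluster{}
	c.setSpec("initial spec")
	fmt.Println(c.getSpec())
}
```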