* Track origin of roles.
* Propagate changes on infrastructure roles to corresponding secrets.
When the password in the infrastructure role is updated, re-generate the
secret for that role.
Previously, the password for an infrastructure role was always fetched from
the secret, making any update to such a role a no-op once the corresponding
secret had been generated.
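A minimal sketch of the new behaviour, with hypothetical types and names (pgUser, roleSecret and syncRoleSecret are not the operator's real identifiers): the password stored in the already-generated secret is compared against the infrastructure role, and the secret is regenerated when they diverge instead of treating the secret as the source of truth.

```go
package main

import "fmt"

// Hypothetical, simplified shapes; the real operator structures hold more fields.
type pgUser struct {
	Name     string
	Password string
}

type roleSecret struct {
	Data map[string][]byte
}

// syncRoleSecret regenerates the secret when the infrastructure role's
// password no longer matches the one stored in the secret.
func syncRoleSecret(role pgUser, s *roleSecret) bool {
	if string(s.Data["password"]) == role.Password {
		return false // nothing to do
	}
	s.Data["password"] = []byte(role.Password)
	return true
}

func main() {
	s := &roleSecret{Data: map[string][]byte{"password": []byte("old")}}
	fmt.Println("secret regenerated:", syncRoleSecret(pgUser{Name: "robot", Password: "new"}, s))
}
```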
There used to be a masterLess flag that was supposed to indicate whether the cluster it belongs to runs without an acting master by design. Since we never really supported such clusters, the flag ended up being misused to indicate that there is currently no master in the cluster. That was not done consistently (a cluster without all pods running would never be marked masterless, even when the master was not among the running pods), and it rested on the wrong assumption that a masterless cluster stays masterless until the next attempt to change the flag, ignoring the possibility of the master coming up or some node performing a successful promotion. Therefore, this PR gets rid of the flag completely.
When the cluster runs with 0 instances, there is obviously no master, and it makes no sense to create any database objects inside a non-existent master. Therefore, this PR introduces an additional check for that.
recreatePods assumed that the roles of the pods recorded when the function started would not change; for instance, terminated replica pods were expected to come back as replicas. Revisit that assumption by looking at the actual role of the re-spawned pods; that avoids a failover if some replica was promoted to the master role while being re-spawned. In addition, if the failover from the old master was unsuccessful, we used to stop and leave the old master running on an old pod, without recording this fact anywhere. This PR makes a failover failure emit a warning, but does not stop recreating the last master pod; in the worst case, the running master will be terminated, but this case is rather unlikely.
As a side effect, make waitForPodLabel return the pod definition it waited for, avoiding extra API calls in recreatePods and movePodFromEndOfLifeNode.
Introduce a new lock called specMu lock to protect the cluster spec.
This lock is held on update and sync, and when retrieving the spec in
the API code. There is no need to acquire it for cluster creation and
deletion: creation assigns the spec to the cluster before linking it to
the controller, and deletion just removes the cluster from the list in
the controller, both while holding the global clustersMu lock.
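A simplified sketch of the locking scheme, with assumed names and a stripped-down cluster struct: specMu is taken for writing on update/sync and for reading in the API code.

```go
package main

import "sync"

// Hypothetical, simplified shapes; the real cluster struct holds many more fields.
type clusterSpec struct {
	NumberOfInstances int
}

type cluster struct {
	specMu sync.RWMutex // protects spec against concurrent update/sync and API reads
	spec   clusterSpec
}

// setSpec is called from the update and sync code paths.
func (c *cluster) setSpec(newSpec clusterSpec) {
	c.specMu.Lock()
	defer c.specMu.Unlock()
	c.spec = newSpec
}

// getSpec is called from the API code when the spec is exposed to clients.
func (c *cluster) getSpec() clusterSpec {
	c.specMu.RLock()
	defer c.specMu.RUnlock()
	return c.spec
}

func main() {
	c := &cluster{}
	c.setSpec(clusterSpec{NumberOfInstances: 3})
	_ = c.getSpec()
}
```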
* Scalyr agent sidecar for log shipping
* Remove the default for the Scalyr image
Now the image needs to be specified explicitly to enable log shipping to
Scalyr. This removes the problem of having to generate the config file
or publish our agent image repository.
* Add configuration variable for Scalyr server URL
Defaults to the EU address.
* Alter style
Newlines are cheap and make code easier to edit/refactor, but ok.
* Fix StatefulSet comparison logic
I broke it when I made the comparison consider all containers in the
PostgreSQL pod.
* Make sure the statefulset that is deleted manually gets re-created.
Per report and analysis by Manuel Gomez.
* Move the existence checks for other objects out of the Create functions.
create{Object} for services, endpoints and PDBs refused to continue if
there was a cached definition in the cluster; however, the only place
where that check makes sense is when creating a new cluster. Note that,
contrary to the statefulset case, this doesn't fix any issues, since those
definitions were nullified accordingly when the sync code detected that no
such object was present in the Kubernetes cluster.
* Introduce higher and lower bounds for the number of instances
Reduce the number of instances to the min_instances if it is lower and
to the max_instances if it is higher. -1 for either of those means there
is no lower or upper bound.
In addition, terminate the operator when the configuration is nonsensical
(i.e. max_instances < min_instances).
Reviewed by Jan Mußler and Sergey Dudoladov.
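A minimal sketch of the clamping described above, with assumed field names; -1 disables the corresponding bound, and a nonsensical combination terminates the operator.

```go
package main

import "fmt"

// Assumed shape mirroring min_instances/max_instances; -1 disables a bound.
type config struct {
	MinInstances int32
	MaxInstances int32
}

// clampInstances applies the lower and upper bounds to the requested number of instances.
func clampInstances(requested int32, cfg config) int32 {
	if cfg.MinInstances >= 0 && requested < cfg.MinInstances {
		return cfg.MinInstances
	}
	if cfg.MaxInstances >= 0 && requested > cfg.MaxInstances {
		return cfg.MaxInstances
	}
	return requested
}

func main() {
	cfg := config{MinInstances: 2, MaxInstances: 5}
	if cfg.MinInstances >= 0 && cfg.MaxInstances >= 0 && cfg.MaxInstances < cfg.MinInstances {
		// nonsensical configuration: terminate instead of guessing
		panic("max_instances < min_instances")
	}
	fmt.Println(clampInstances(1, cfg), clampInstances(10, cfg), clampInstances(3, cfg)) // 2 5 3
}
```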
- make sure that the secrets for the system users (superuser, replication)
are not deleted when the main cluster is deleted. That way we can re-create
the cluster, potentially forcing Patroni to restore it from a backup, and
Patroni will still be able to connect, since it will use the old password
rather than a newly generated random one.
- when syncing users, always check whether they are already present in the
DB. Previously, we did this only in the sync-cluster case, but a new
cluster could actually be one restored from a backup by Patroni, with all
or some of the users already in place.
- delete endpoints last. Patroni uses the $clustername endpoint to store
the leader-related metadata. If we remove it before removing all pods, one
of those pods still running Patroni will re-create it, and the next
attempt to create a cluster with the same name will stumble over the
existing endpoint.
- Use db.Exec instead of db.Query for queries that expect no result.
This also fixes an issue with database creation: since we never released
the empty result object returned by db.Query, it was not possible to create
more than one database for a cluster.
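A minimal sketch of the Exec-vs-Query point using database/sql directly (the connection string is a placeholder): statements that return no rows go through db.Exec, so no unclosed Rows object pins the underlying connection between consecutive CREATE DATABASE statements.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "host=localhost dbname=postgres sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for _, name := range []string{"app_one", "app_two"} {
		// db.Query would return a *sql.Rows that, if never closed, keeps the
		// connection busy; db.Exec releases it immediately.
		if _, err := db.Exec(fmt.Sprintf("CREATE DATABASE %q", name)); err != nil {
			log.Printf("could not create database %s: %v", name, err)
		}
	}
}
```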
Add options to the PgUser structure, potentially allowing per-role options
to be set in the cluster definition as well.
Introduce the api_roles_configuration operator option with a default of
log_statement=all.
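A possible shape of that option, sketched with assumed names and an assumed comma-separated key=value format; the parsed parameters would then be attached to the PgUser options.

```go
package main

import (
	"fmt"
	"strings"
)

// parseRoleOptions turns a configuration string like "log_statement=all"
// (possibly comma-separated) into per-role parameters. The function name and
// the comma-separated format are assumptions made for this sketch.
func parseRoleOptions(s string) map[string]string {
	params := make(map[string]string)
	for _, kv := range strings.Split(s, ",") {
		parts := strings.SplitN(strings.TrimSpace(kv), "=", 2)
		if len(parts) != 2 {
			continue
		}
		params[parts[0]] = parts[1]
	}
	return params
}

func main() {
	fmt.Println(parseRoleOptions("log_statement=all")) // map[log_statement:all]
}
```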
- Call comparison function in the case of the sync as well as for update
- Include full cluster name in PDB name
- Assign cluster labels to the PDB object
Open and close DB connections on-demand.
Previously, we used to leave the DB connection open for as long as the
cluster was registered with the operator, potentially resulting in
dangling connections if the operator terminated abnormally.
Small refactoring around the role syncing code.
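A minimal sketch of the on-demand connection handling, with hypothetical names and a placeholder connection string: the connection only lives for the duration of the work and is always closed afterwards.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

// withDB opens a connection only for the duration of the work and closes it
// afterwards, so an abnormally terminating operator leaves no dangling
// connections behind.
func withDB(connString string, work func(*sql.DB) error) error {
	db, err := sql.Open("postgres", connString)
	if err != nil {
		return err
	}
	defer db.Close()
	return work(db)
}

func main() {
	err := withDB("host=localhost dbname=postgres sslmode=disable", func(db *sql.DB) error {
		_, err := db.Exec("SELECT 1")
		return err
	})
	if err != nil {
		log.Print(err)
	}
}
```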
* client-go v4.0.0-beta0
* remove unnecessary methods for tpr object
* rest client: use interface instead of structure pointer
* proper names for constants; some clean up for log messages
* remove teams api client from controller and make it per cluster
* Deny all requests to the load balancer by default.
* Operator-wide toggle for the load-balancer.
* Define per-cluster useLoadBalancer option.
If useLoadBalancer is not set, the operator-wide default takes effect. If it
is true, the load balancer is created; otherwise a service of type ClusterIP
is created.
Internally, we have to completely replace the service if the service type
changes. We cannot patch, since some fields of the old service that would
survive the patch are incompatible with the new one, and handling them
explicitly when updating the service is ugly and error-prone. We cannot
update the service because of its immutable fields; that leaves us with the
only option of deleting the old service and creating a new one. Unfortunately,
there is still an issue of unnecessary removal of the endpoints associated
with the service; it will be addressed in future commits.
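A sketch of that replacement strategy with purely illustrative types (none of these names come from client-go): same-type changes could be patched, but a type change forces delete-and-create.

```go
package main

import (
	"errors"
	"fmt"
)

type service struct {
	Name string
	Type string // "ClusterIP" or "LoadBalancer"
}

type serviceClient interface {
	Delete(name string) error
	Create(svc *service) (*service, error)
}

// replaceService: when the service type changes, patching or updating is not
// viable, so the old service is deleted and a new one with the desired type is created.
func replaceService(c serviceClient, current, desired *service) (*service, error) {
	if current.Type == desired.Type {
		return current, nil // a plain update/patch would do; not shown here
	}
	if err := c.Delete(current.Name); err != nil {
		return nil, fmt.Errorf("could not delete the old service: %v", err)
	}
	return c.Create(desired)
}

// fakeClient exists only to make the sketch runnable.
type fakeClient struct{ store map[string]*service }

func (f *fakeClient) Delete(name string) error {
	if _, ok := f.store[name]; !ok {
		return errors.New("not found")
	}
	delete(f.store, name)
	return nil
}

func (f *fakeClient) Create(svc *service) (*service, error) {
	f.store[svc.Name] = svc
	return svc, nil
}

func main() {
	c := &fakeClient{store: map[string]*service{"db-svc": {Name: "db-svc", Type: "ClusterIP"}}}
	svc, err := replaceService(c, c.store["db-svc"], &service{Name: "db-svc", Type: "LoadBalancer"})
	fmt.Println(svc, err)
}
```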
* Revert the unintended effect of go fmt
* Recreate endpoints on service update.
When the service type is changed, the service is deleted and then
the one with the new type is created. Unfortunately, the endpoints are
deleted as well. Re-create them afterwards, preserving the original
addresses stored in them.
* Improve error messages and comments. Use generate instead of gen in names.
The flag adds a replica service with the name cluster_name-repl and
a DNS name that defaults to {cluster}-repl.{team}.{hostedzone}.
The implementation converts the Service field of the cluster into a map
with one or two elements and deals with the case when the new flag
is changed on a running cluster
(the update and the sync should create or delete the replica service).
The master and replica services and the master endpoint are now picked up
when listing cluster resources.
* Update the spec when updating the cluster.
In order to support volumes other than EBS and filesystems other than EXT2/3/4, the respective code parts are implemented as interfaces. Adding a new resizer for a volume or a filesystem will require implementing the interface, but no other changes to the cluster code itself.
Volume resizing first changes the EBS volume and the filesystem, and only afterwards is reflected in the Kubernetes "PersistentVolume" object. This is done deliberately, to be able to check whether the volume needs resizing by peeking at the Size of the PersistentVolume structure. We nevertheless re-check in the EBSVolumeResizer whether the actual EBS volume size already matches the spec, since a call to AWS ModifyVolume counts against the resize limit of once every 6 hours, even for calls that don't result in an actual resize (i.e. when the size matches the one of the running volume).
As a side effect, split the constants into multiple files, move the volume code into a separate file and fix minor issues related to error reporting.
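A sketch of what such interfaces could look like; the method names here are assumptions, not the operator's actual API.

```go
// Package volumes sketches the interface-based split between cloud volumes
// and filesystems described above.
package volumes

// VolumeResizer abstracts a cloud-specific block volume (EBS today).
type VolumeResizer interface {
	VolumeBelongsToProvider(volumeID string) bool
	GetProviderVolumeID(pvName string) (string, error)
	ResizeVolume(volumeID string, newSizeGiB int64) error
}

// FilesystemResizer abstracts growing a filesystem (ext2/3/4 today).
type FilesystemResizer interface {
	CanResizeFilesystem(fstype string) bool
	ResizeFilesystem(device string, commandExecutor func(cmd string) (out string, err error)) error
}
```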
Updates to the statefulset spec for fields other than 'replicas' and
'containers' are forbidden. However, it is possible to delete the old
statefulset without deleting its pods and to create a new one from the
changed spec. The new statefulset will pick up the orphaned pods.
Also change the statefulset comparison to return the combined effect of
all checks, not just the first non-matching field.
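A sketch of the replacement flow with hypothetical types; the essential point is deleting the old statefulset without cascading to its pods, so that the new statefulset adopts them via the unchanged label selector.

```go
package main

import "fmt"

type statefulSet struct {
	Name     string
	Replicas int32
}

type statefulSetClient interface {
	// Delete removes the statefulset; orphanPods=true keeps its pods running.
	Delete(name string, orphanPods bool) error
	Create(sset *statefulSet) (*statefulSet, error)
}

func replaceStatefulSet(c statefulSetClient, old, desired *statefulSet) (*statefulSet, error) {
	// Keep the pods running while the statefulset object itself is swapped.
	if err := c.Delete(old.Name, true); err != nil {
		return nil, fmt.Errorf("could not delete the old statefulset: %v", err)
	}
	return c.Create(desired)
}

// fakeSSClient exists only to make the sketch runnable.
type fakeSSClient struct{}

func (fakeSSClient) Delete(name string, orphanPods bool) error   { return nil }
func (fakeSSClient) Create(s *statefulSet) (*statefulSet, error) { return s, nil }

func main() {
	sset, err := replaceStatefulSet(fakeSSClient{},
		&statefulSet{Name: "db", Replicas: 3},
		&statefulSet{Name: "db", Replicas: 3})
	fmt.Println(sset, err)
}
```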
Command-line options --nodatabaseaccess and --noteamsapi disable all
Teams API interaction and access to the PostgreSQL database. This is
useful for debugging purposes when the operator runs outside of the
cluster (with the --outofcluster flag).
The same effect can be achieved by setting enable_db_access and/or
enable_teams_api to false.
The sync adds or alters database roles based on the roles defined
in the cluster's TPR, Team API and operator's infrastructure roles.
At the moment, roles are not deleted, as that would be dangerous for
the robot roles in case the TPR is misconfigured. In addition, ALTER
ROLE does not remove role options such as SUPERUSER or CREATEROLE,
nor does it remove role membership: only new options are added and
new role membership is granted. For now, options like NOSUPERUSER
and NOCREATEROLE won't be handled correctly when mixed with their
non-negative counterparts, though NOLOGIN should be processed correctly.
The code assumes that only MD5 passwords are stored in the DB and
will likely break with the new SCRAM auth in PostgreSQL 10.
On the implementation side, create a new interface to abstract
role merging and creation, move most of the role-based functionality
from cluster/pg into the new 'users' module, strip the create-user
code of special cases related to human users (moving them to init
instead) and fix the password MD5 generator to avoid re-encrypting
already encrypted passwords. In addition, move the system roles
out of the slice containing all other roles, in order to avoid the
extra effort of skipping them during creation.
Also, fix a leak in DB connections when a new connection is not
considered healthy and is discarded without being closed. Initialize
the database during the sync phase before syncing users.
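A small sketch of the MD5 handling mentioned above (the function name is assumed): PostgreSQL stores "md5" followed by the hex MD5 of the password concatenated with the user name, and an input that already looks like such a hash is passed through unchanged.

```go
package main

import (
	"crypto/md5"
	"fmt"
	"strings"
)

// pgMD5Password produces the value PostgreSQL stores for MD5 authentication:
// "md5" + md5hex(password + username). If the input is already an MD5 hash
// (e.g. read back from the database), it is returned unchanged, avoiding
// re-encryption of already encrypted passwords.
func pgMD5Password(username, password string) string {
	if strings.HasPrefix(password, "md5") && len(password) == 35 {
		return password
	}
	sum := md5.Sum([]byte(password + username))
	return fmt.Sprintf("md5%x", sum)
}

func main() {
	fmt.Println(pgMD5Password("robot", "secret"))
	fmt.Println(pgMD5Password("robot", pgMD5Password("robot", "secret"))) // unchanged
}
```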
Run operations concerning multiple clusters in parallel. Each cluster gets its
own worker in order to create, update, sync or delete clusters. Each worker
acquires the lock on a cluster. Subsequent operations on the same cluster
have to wait until the current one finishes. There is a pool of parallel
workers, configurable with the `workers` parameter in the configmap and set by
default to 4. The cluster-related tasks are assigned to the workers based on
a cluster name: the tasks for the same cluster will be always assigned to the
same worker. There is no blocking between workers, although there is a chance
that a single worker will become a bottleneck if too many clusters are
assigned to it; therefore, for large-scale deployments it might be necessary
to bump up workers from the default value.
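A minimal sketch of the name-based assignment; FNV is an assumption here, and any stable hash that maps a cluster name to the same worker every time would do.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// workerID maps a cluster name to one of the workers, so that all tasks for a
// given cluster always land on the same worker queue.
func workerID(clusterName string, workers uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(clusterName))
	return h.Sum32() % workers
}

func main() {
	const workers = 4 // the default of the `workers` configmap parameter
	for _, name := range []string{"teamA/db1", "teamA/db2", "teamB/db1"} {
		fmt.Printf("%s -> worker %d\n", name, workerID(name, workers))
	}
}
```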
* Add infrastructure roles configured globally.
Those are the roles defined in the operator itself. The operator's
configuration refers to the secret containing role names, passwords
and membership information. While they are referred to as roles, in
reality those are users.
In addition, improve the regex to filter out invalid users and
make sure user secret names are compatible with DNS name spec.
Add an example manifest for the infrastructure roles.
Conceptually, the operator's task is just to change the pod. As it
has no influence over the role the pod will take (either the master
or a replica), it shouldn't wait for the specific role.
This fixes at least one issue where the operator waited forever for the
pod running in a single-pod cluster, expecting it to have the wrong role
(since the Patroni callback assigning it the original replica role had
been killed after a quick promotion by the next callback).
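A sketch of the relaxed wait, with hypothetical names: instead of waiting for a specific role label, the helper returns as soon as the pod carries any role label.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForAnyRoleLabel polls until the pod carries *any* role label, since the
// operator cannot influence which role Patroni will assign to the pod.
func waitForAnyRoleLabel(getLabel func() (string, bool), timeout, interval time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if role, ok := getLabel(); ok {
			return role, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("timeout waiting for the pod role label")
}

func main() {
	calls := 0
	role, err := waitForAnyRoleLabel(func() (string, bool) {
		calls++
		if calls > 2 {
			return "master", true // Patroni promoted the pod while we waited
		}
		return "", false
	}, time.Second, 10*time.Millisecond)
	fmt.Println(role, err)
}
```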
- add a new environment variable to trigger the debug log level
- show both the new and old objects and their diff during syncs and updates
- use the pretty package to pretty-print Go structures
* move statefulset creation from cluster spec to the separate function
* sync cluster state with desired state;
* move out from arrays for cluster resources;
* recreate pods instead of deleting them in case of statefulset change
* check for master while creating cluster/updating pods
* simplify retryutil
* list pvc while listing resources
* name kubernetes resources with capital letter
* do rolling update in case of env variables change