* add json:omitempty option to ClusterDomain
* Add default value for ClusterDomain
Unfortunately, omitempty in the operator configuration CRD doesn't mean that
defaults from the operator config object will be picked up automatically.
Make sure that the ClusterDomain default is specified, so that even when
someone sets cluster_domain = "", it will be overwritten with the
default value, as sketched below.
Co-authored-by: mlu42 <mlu42pro@gmail.com>
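A minimal sketch of the defaulting idea in Go, assuming hypothetical field and helper names (the actual operator code differs):

```go
package config

const defaultClusterDomain = "cluster.local"

type OperatorConfig struct {
	// omitempty keeps an unset value out of the serialized CRD object,
	// but it does NOT apply a default when cluster_domain = "" is read back.
	ClusterDomain string `json:"cluster_domain,omitempty"`
}

// applyDefaults fills in defaults explicitly, so an empty string is
// overwritten with the default value instead of being used verbatim.
func (c *OperatorConfig) applyDefaults() {
	if c.ClusterDomain == "" {
		c.ClusterDomain = defaultClusterDomain
	}
}
```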
Since it's an important part of the connection pool configuration, allow
configuring the maximum number of DB connections the pool will open to a
target database. From this number several others (like default pool size,
min pool size, reserve size) are deduced, taking the desired number of
instances into account.
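A rough sketch of how the derived sizes could be computed; the formula and names below are assumptions, not the operator's actual arithmetic:

```go
package pooler

// deriveSizes splits the configured maximum number of DB connections
// across the desired number of instances and deduces the per-instance
// defaults from it.
func deriveSizes(maxDBConnections, instances int) (defaultSize, minSize, reserveSize int) {
	if instances < 1 {
		instances = 1
	}
	perInstance := maxDBConnections / instances
	defaultSize = perInstance
	minSize = perInstance / 2     // assumed: half of the default size
	reserveSize = perInstance / 2 // assumed: same reserve as min size
	return defaultSize, minSize, reserveSize
}
```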
* bump version to 1.4.0 + some polishing
* align version for UI chart
* update user docs to warn about standby replicas
* minor log message changes for RBAC resources
* define postgres-pod clusterrole and align rbac in chart (sketched after this list)
* align UI chart rbac with operator and update doc
* operator RBAC needs podsecuritypolicy to grant it to postgres-pod
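A hedged sketch of what such a postgres-pod clusterrole could look like when built with client-go types; the rule set below is an assumption based on what Patroni pods typically need, not the chart's literal contents:

```go
package rbac

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func postgresPodClusterRole() *rbacv1.ClusterRole {
	return &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "postgres-pod"},
		Rules: []rbacv1.PolicyRule{
			{
				// assumed: Patroni updates pods/endpoints/services for leader election
				APIGroups: []string{""},
				Resources: []string{"endpoints", "pods", "services"},
				Verbs:     []string{"create", "get", "list", "patch", "update", "watch"},
			},
		},
	}
}
```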
Add pool configuration to the CRD & charts. Add preliminary documentation.
Rename NumberOfInstances to Replicas, like in Deployment. Mention a couple
of potential improvement points for the connection pool specification.
Add synchronization logic. For now, get rid of the podTemplate and type
fields. Add CRD validation & the configuration part, and put a retry on top
of the lookup function installation (sketched below).
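A sketch of the retry idea under assumed names (withRetry and its parameters are illustrative, not the operator's helper):

```go
package pooler

import (
	"fmt"
	"time"
)

// withRetry re-runs fn a few times before giving up, so a transient
// failure while installing the pooler's lookup function does not fail
// the whole sync.
func withRetry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}
```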
* add validation for PG resources and volume size (see the sketch after this list)
* check resource requests also on UPDATE and SYNC + update docs
* if the cluster was running, don't error on sync
* add CRD manifests with validation
* update documentation
* Patroni slots is not an array but a nested hash map
* make deps call tools
* cover validation in docs and export it in crds.go
* add toggle to disable creation of CRD validation and document it
* use templated service account also for CRD-configured helm deployment
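A rough sketch of the resource-request validation using Kubernetes resource quantities; the function name and error wording are illustrative:

```go
package validation

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validateResources checks that requests and limits parse as quantities
// and that a request does not exceed its limit.
func validateResources(request, limit string) error {
	req, err := resource.ParseQuantity(request)
	if err != nil {
		return fmt.Errorf("could not parse request %q: %w", request, err)
	}
	lim, err := resource.ParseQuantity(limit)
	if err != nil {
		return fmt.Errorf("could not parse limit %q: %w", limit, err)
	}
	if req.Cmp(lim) > 0 {
		return fmt.Errorf("request %s exceeds limit %s", request, limit)
	}
	return nil
}
```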
* Added the possibility to add custom annotations to the LoadBalancer service (see the sketch below).
* Added parameters for a custom endpoint, access key and secret key for logical backups.
* Modified dump.sh so it knows how to handle the new features. Configurable S3 SSE.
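An illustrative sketch of attaching custom annotations to the LoadBalancer service; the parameter names are assumptions:

```go
package cluster

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func loadBalancerService(name string, customAnnotations map[string]string) *v1.Service {
	// copy the user-supplied annotations onto the service as-is
	annotations := map[string]string{}
	for k, v := range customAnnotations {
		annotations[k] = v
	}
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name, Annotations: annotations},
		Spec:       v1.ServiceSpec{Type: v1.ServiceTypeLoadBalancer},
	}
}
```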
For optimization purposes the operator was creating a cache map to remember
whether the service account and role binding were deployed to a namespace.
This could lead to a problem when a namespace was deleted, since this
cache was not synchronized. For the sake of correctness, remove the
cache and check every time whether the required service account and RBAC are
present. In the normal case this introduces an overhead of two API calls
per event (one to get the service account, one to get the role binding),
which should not be a problem unless proven otherwise.
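A sketch of the cache-free check, assuming a client-go clientset; two GETs per event instead of a namespace cache:

```go
package controller

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rbacPresent verifies on every event that the service account and the
// role binding exist in the namespace, instead of trusting a cache that
// can go stale when the namespace is deleted and recreated.
func rbacPresent(ctx context.Context, client kubernetes.Interface, ns, sa, rb string) bool {
	if _, err := client.CoreV1().ServiceAccounts(ns).Get(ctx, sa, metav1.GetOptions{}); err != nil {
		return false
	}
	if _, err := client.RbacV1().RoleBindings(ns).Get(ctx, rb, metav1.GetOptions{}); err != nil {
		return false
	}
	return true
}
```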
* An attempt to build with modules and remove Glide
* new tools.go file to get the code-generator dependency (see the sketch after this list) + updated codegen + remove Glide files and update docs
* align config map, operator config, helm chart values and templates
* follow helm chart conventions also in CRD templates
* split up values files and add comments
* avoid yaml confusion in postgres manifests
* bump spilo version and use example for logical_backup_s3_bucket
* add ConfigTarget switch to values
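The tools.go pattern for pinning build-time-only dependencies looks roughly like this (the blank import matches the common community pattern; the exact file in the repo may differ):

```go
// +build tools

// Package tools pins dependencies that are only needed at build time
// (here the code generator) so they stay in go.mod without being
// imported from production code.
package tools

import (
	_ "k8s.io/code-generator"
)
```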
* StatefulSet fsGroup config option to allow non-root spilo
* Allow the Postgres CRD to override the SpiloFSGroup of the Operator.
* Document FSGroup of a Pod cannot be changed after creation.
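A minimal sketch of the FSGroup precedence, with illustrative names: the per-cluster manifest value wins over the operator-wide default.

```go
package cluster

import v1 "k8s.io/api/core/v1"

// effectiveFSGroup prefers the value from the Postgres CRD over the
// operator configuration; nil means "do not set fsGroup at all".
func effectiveFSGroup(fromCRD, fromOperatorConfig *int64) *v1.PodSecurityContext {
	fsGroup := fromOperatorConfig
	if fromCRD != nil {
		fsGroup = fromCRD
	}
	if fsGroup == nil {
		return &v1.PodSecurityContext{}
	}
	return &v1.PodSecurityContext{FSGroup: fsGroup}
}
```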
* database.go: substitute the hardcoded .svc.cluster.local DNS suffix with a config parameter
Use the pod's configured DNS search path for clusters where .svc.cluster.local is not correct.
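A small sketch of building the DNS name from a configurable suffix; the default value and function name are assumptions:

```go
package database

import "fmt"

// serviceDNSName builds the service hostname from a configurable suffix
// instead of hardcoding .svc.cluster.local.
func serviceDNSName(service, namespace, dnsSuffix string) string {
	if dnsSuffix == "" {
		dnsSuffix = "svc.cluster.local" // assumed default
	}
	return fmt.Sprintf("%s.%s.%s", service, namespace, dnsSuffix)
}
```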
* turns PostgresStatus type into a struct with field PostgresClusterStatus
* setStatus patch target is now /status subresource
* unmarshalling PostgresStatus takes care of previous status field convention
* new simple bool functions status.Running(), status.Creating()
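A sketch of the status struct and its backward-compatible unmarshalling; the constant strings below are written from memory and may not match the repo exactly:

```go
package acidv1

import "encoding/json"

type PostgresStatus struct {
	PostgresClusterStatus string `json:"PostgresClusterStatus"`
}

func (s *PostgresStatus) Running() bool  { return s.PostgresClusterStatus == "Running" }
func (s *PostgresStatus) Creating() bool { return s.PostgresClusterStatus == "Creating" }

// UnmarshalJSON accepts both the previous plain-string status convention
// and the new struct form, so previously stored objects keep parsing.
func (s *PostgresStatus) UnmarshalJSON(data []byte) error {
	var plain string
	if err := json.Unmarshal(data, &plain); err == nil {
		s.PostgresClusterStatus = plain
		return nil
	}
	// anonymous struct avoids recursing back into this method
	var v struct {
		PostgresClusterStatus string `json:"PostgresClusterStatus"`
	}
	if err := json.Unmarshal(data, &v); err != nil {
		return err
	}
	s.PostgresClusterStatus = v.PostgresClusterStatus
	return nil
}
```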
* Config option to allow the Spilo container to run non-privileged.
Runs non-privileged by default.
Fixes #395
* add spilo_privileged to manifests/configmap.yaml
* add spilo_privileged to helm chart's values.yaml
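A minimal sketch of wiring spilo_privileged into the container's security context; names are illustrative:

```go
package cluster

import v1 "k8s.io/api/core/v1"

// spiloSecurityContext builds the container security context from the
// spilo_privileged option; it defaults to false, i.e. the Spilo
// container runs non-privileged.
func spiloSecurityContext(spiloPrivileged bool) *v1.SecurityContext {
	privileged := spiloPrivileged
	return &v1.SecurityContext{Privileged: &privileged}
}
```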