* define postgres-pod clusterrole and align rbac in chart
* align UI chart rbac with operator and update doc
* operator RBAC needs the podsecuritypolicy resource to grant it to the postgres-pod cluster role
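A minimal sketch of the kind of rule involved; the policy name 'privileged' and the exact placement in the chart are assumptions:

```yaml
# sketch: the postgres-pod clusterrole (and the operator's own role)
# must be allowed to "use" the pod security policy
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: postgres-pod
rules:
  - apiGroups:
      - extensions
    resources:
      - podsecuritypolicies
    resourceNames:
      - privileged   # hypothetical policy name
    verbs:
      - use
```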
* add CRD manifests with validation
* update documentation
* patroni 'slots' is not an array but a nested hash map
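For illustration, the expected nested shape in the postgres manifest could look like this (slot name, database, and plugin are hypothetical):

```yaml
patroni:
  slots:
    # each key is a slot name mapping to its settings, not a list entry
    permanent_logical_slot:
      type: logical
      database: foo
      plugin: pgoutput
```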
* make deps call tools
* cover validation in docs and export it in crds.go
* add toggle to disable creation of CRD validation and document it
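A sketch of what the toggle could look like in the chart's values; the key name is an assumption modeled on the operator's other options:

```yaml
# values.yaml (sketch)
# when false, the chart renders the CRDs without the validation schema
enable_crd_validation: true
```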
* use templated service account also for CRD-configured helm deployment
* Added the possibility to set custom annotations on the LoadBalancer service.
* Added parameters for custom endpoint, access and secret key for logical backup.
* Modified dump.sh so it knows how to handle the new features. S3 server-side encryption (SSE) is configurable.
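Plausible configmap entries for these features; the key names and values below are illustrative assumptions, not the exact parameters:

```yaml
# operator configmap (sketch)
custom_service_annotations: "service.beta.kubernetes.io/aws-load-balancer-internal:true"
logical_backup_s3_bucket: "my-backup-bucket"
logical_backup_s3_endpoint: "https://s3.example.com"  # custom, non-AWS endpoint
logical_backup_s3_access_key_id: "AKIA..."
logical_backup_s3_secret_access_key: "..."
logical_backup_s3_sse: "AES256"                       # server-side encryption mode
```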
* align config map, operator config, helm chart values and templates
* follow helm chart conventions also in CRD templates
* split up values files and add comments
* avoid yaml confusion in postgres manifests
* bump spilo version and use an example value for logical_backup_s3_bucket
* add ConfigTarget switch to values
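Sketch of the switch in values.yaml; the two mode names mirror the configmap-based and CRD-based deployments described above:

```yaml
# values.yaml (sketch)
# "ConfigMap" renders the classic configmap-based configuration,
# "OperatorConfigurationCRD" renders the CRD-based one
configTarget: "ConfigMap"
```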
* database.go: substitute hardcoded .svc.cluster.local dns suffix with config parameter
Use the pod's configured DNS search path for clusters where .svc.cluster.local is not correct.
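A sketch of the resulting parameter; the name 'cluster_domain' is an assumption for illustration:

```yaml
# operator configmap (sketch)
# replaces the hardcoded .svc.cluster.local suffix when building
# the database service DNS names
cluster_domain: "cluster.local"
```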
* Config option to allow the Spilo container to run non-privileged.
Runs non-privileged by default.
Fixes #395
* add spilo_privileged to manifests/configmap.yaml
* add spilo_privileged to helm chart's values.yaml
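The toggle could look like this in both places (sketch):

```yaml
# manifests/configmap.yaml and the chart's values.yaml (sketch)
# defaults to "false", i.e. the Spilo container runs non-privileged
spilo_privileged: "false"
```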
* Minor improvements
* Document empty list vs null for users without privileges
* Change the wording for null values
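For illustration, assuming the two spellings are treated equivalently, a cluster manifest could define privilege-less users either way (user names are hypothetical):

```yaml
users:
  # empty list: login role created with no extra privilege flags
  app_reader: []
  # null: documented alongside the empty-list form above
  app_writer: null
```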
* Add talk by Oleksii at Atmosphere
* Bump up the Spilo version to use Patroni >= v1.4.4; this fixes issues with k8s 1.10 API changes
* Bump up the operator version to use the new 'etcd_host' default value
* Re-use 'zalando-postgres-operator' as a pod service account and add extra RBAC permissions to make it work
* Document in quickstart connecting to Postgres via psql
The node_eol_label is obsolete and not used.
The node_readiness_label, if set, prevents scheduling pods on nodes
without that label; by default, minikube doesn't set any labels on its
node.
Previously, it was set to lifecycle-status:ready, breaking a lot of
minikube deployments. It was also not possible before to run with this
label set to an empty value.
Document the effect of the label in the new section of the
documentation.
Previously, the operator started to move pods off nodes to be
decommissioned by watching the eol_node_label value. Every new postgres
pod was created with an anti-affinity to that label, making sure that
the pods being moved would not land on another node that is about to be
decommissioned.
The changes introduce another label that indicates a ready node. The
new pod affinity ensures that a pod is only scheduled to a node marked
as ready, replacing the previous anti-affinity. That way, nodes can
transition from pending-decommission to the other statuses (drained,
terminating) without having pods suddenly scheduled onto them.
In addition, rename the label that triggers the start of the upgrade
process to node_eol_label (for consistency with node_readiness_label)
and set its default value to lifecycle-status:pending-decommission.
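Putting the two labels together, the defaults described above could appear as (sketch):

```yaml
# operator configmap (sketch)
# empty by default, so pods can be scheduled on unlabeled nodes (minikube)
node_readiness_label: ""
# marks nodes to be decommissioned and triggers moving pods off them
node_eol_label: "lifecycle-status:pending-decommission"
```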
Add options to the PgUser structure, potentially allowing per-role
options to be set in the cluster definition as well.
Introduce the api_roles_configuration operator option with a default
of log_statement=all, removing parts of the config.
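As a sketch, the new option might be set like this (the exact value format is an assumption):

```yaml
# operator configmap (sketch)
# default per-role options (semantics assumed from the commit message)
api_roles_configuration: "log_statement=all"
```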
* changing secret name pattern to make things shorter.
* Move section on self-building the docker image.
* Fix typo.
* Bump image.
* bump version for the PDB fix.
* Changes in response to review.
* Fix xhyve driver link.
* Move to new API; remove service account, not needed for minikube.
* Changed minimal manifest and example to use the right file.
* Added service account for the operator again; it is needed by the pods later anyway.