merge master and improve standby doc
This commit is contained in:
commit
948d9b84f6
LICENSE (4 changes)
@@ -1,6 +1,6 @@
 The MIT License (MIT)

-Copyright (c) 2015 Compose, Zalando SE
+Copyright (c) 2019 Zalando SE

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
@@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+SOFTWARE.
@@ -43,6 +43,7 @@ rules:
   verbs:
   - create
   - delete
   - deletecollection
   - get
   - list
+  - patch
@@ -69,6 +69,8 @@ configAwsOrGcp:
   # kube_iam_role: ""
   # log_s3_bucket: ""
   # wal_s3_bucket: ""
+  # additional_secret_mount: "some-secret-name"
+  # additional_secret_mount_path: "/some/dir"

 configLogicalBackup:
   logical_backup_schedule: "30 00 * * *"
@@ -114,6 +116,7 @@ configKubernetesCRD:
   cluster_name_label: cluster-name
   enable_pod_antiaffinity: false
   pod_antiaffinity_topology_key: "kubernetes.io/hostname"
+  enable_pod_disruption_budget: true
   secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
   # inherited_labels:
   # - application
@@ -161,7 +164,7 @@ serviceAccount:
   # The name of the ServiceAccount to use.
   # If not set and create is true, a name is generated using the fullname template
   # When relying solely on the OperatorConfiguration CRD, set this value to "operator"
-  # Otherwise, the operator tries to use the "default" service account which is forbidden
+  # Otherwise, the operator tries to use the "default" service account which is forbidden
   name: ""

 priorityClassName: ""
@@ -177,6 +177,22 @@ data:
     pod_antiaffinity_topology_key: "failure-domain.beta.kubernetes.io/zone"
 ```

+## Pod Disruption Budget
+
+By default the operator uses a PodDisruptionBudget (PDB) to protect the cluster
+from voluntary disruptions and hence unwanted DB downtime. The `MinAvailable`
+parameter of the PDB is set to `1`, which prevents killing masters in single-node
+clusters and/or the last remaining running instance in a multi-node cluster.
+
+The PDB is only relaxed in two scenarios:
+
+* If a cluster is scaled down to `0` instances (e.g. for draining nodes)
+* If the PDB is disabled in the configuration (`enable_pod_disruption_budget`)
+
+In both cases the PDB stays in place with `MinAvailable` set to `0`. If enabled,
+it is automatically set back to `1` on scale-up. Disabling PDBs helps avoid
+blocking Kubernetes upgrades in managed K8s environments at the cost of
+prolonged DB downtime. See PR #384 for the use case.
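As a minimal sketch, the new flag can be flipped in the operator's ConfigMap-based configuration like below; only the `enable_pod_disruption_budget` key comes from this commit, the surrounding ConfigMap skeleton is illustrative, and ConfigMap values are strings:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator
data:
  # temporarily relax the PDB, e.g. before a managed K8s upgrade
  enable_pod_disruption_budget: "false"
```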
+
 ## Add cluster-specific labels

 In some cases, you might want to add `labels` that are specific to a given
@@ -371,6 +387,26 @@ monitoring is outside of the scope of operator responsibilities.
 configuration. Any such image must ensure the logical backup is able to finish
 [in presence of pod restarts](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures) and [simultaneous invocations](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations) of the backup cron job.

+5. For that feature to work, your RBAC policy must enable operations on the
+`cronjobs` resource from the `batch` API group for the operator service account.
+See [example RBAC](../manifests/operator-service-account-rbac.yaml)
+
+## Access to cloud resources from clusters in a non-cloud environment
+
+To access cloud resources like S3 from a cluster in a bare-metal setup, you can
+use the `additional_secret_mount` and `additional_secret_mount_path` config
+parameters. With these you can provision cloud credentials to the containers in
+the pods of the StatefulSet. The operator mounts a volume from the given secret
+in the pod, and the credentials can then be accessed in the container under the
+configured mount path. Via [Custom Pod Environment Variables](#custom-pod-environment-variables)
+you can then point the different cloud SDKs (AWS, Google etc.) to this mounted
+secret. With these credentials the cloud SDK can then access cloud resources,
+e.g. to upload logs.
+
+A secret can be pre-provisioned in different ways:
+
+* Generic secret created via `kubectl create secret generic some-cloud-creds --from-file=some-cloud-credentials-file.json`
+* Automatically provisioned via a controller like [kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller). Such a controller also rotates the credentials. Please visit its documentation for more information.
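As a hedged sketch of the full wiring, the operator configuration keys from this commit can be combined with a custom pod environment variable that points the Google SDK at the mounted file; the secret name, file name, ConfigMap name, and the use of `GOOGLE_APPLICATION_CREDENTIALS` here are illustrative assumptions, not part of this commit:

```yaml
# operator configuration entries (keys introduced by this commit)
additional_secret_mount: "some-cloud-creds"
additional_secret_mount_path: "/meta/credentials"
---
# hypothetical custom pod environment ConfigMap pointing the SDK at the mounted secret
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
data:
  GOOGLE_APPLICATION_CREDENTIALS: "/meta/credentials/some-cloud-credentials-file.json"
```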
@@ -71,10 +71,12 @@ Please, report any issues discovered to https://github.com/zalando/postgres-operator

 ## Talks

-1. "PostgreSQL and K8s: DBaaS without a vendor-lock" talk by Oleksii Kliukin, PostgreSQL Sessions 2018: [video](https://www.youtube.com/watch?v=q26U2rQcqMw) | [slides](https://speakerdeck.com/alexeyklyukin/postgresql-and-kubernetes-dbaas-without-a-vendor-lock)
+1. "Building your own PostgreSQL-as-a-Service on Kubernetes" talk by Alexander Kukushkin, KubeCon NA 2018: [video](https://www.youtube.com/watch?v=G8MnpkbhClc) | [slides](https://static.sched.com/hosted_files/kccna18/1d/Building%20your%20own%20PostgreSQL-as-a-Service%20on%20Kubernetes.pdf)

-2. "PostgreSQL High Availability on Kubernetes with Patroni" talk by Oleksii Kliukin, Atmosphere 2018: [video](https://www.youtube.com/watch?v=cFlwQOPPkeg) | [slides](https://speakerdeck.com/alexeyklyukin/postgresql-high-availability-on-kubernetes-with-patroni)
+2. "PostgreSQL and Kubernetes: DBaaS without a vendor-lock" talk by Oleksii Kliukin, PostgreSQL Sessions 2018: [video](https://www.youtube.com/watch?v=q26U2rQcqMw) | [slides](https://speakerdeck.com/alexeyklyukin/postgresql-and-kubernetes-dbaas-without-a-vendor-lock)

-2. "Blue elephant on-demand: Postgres + Kubernetes" talk by Oleksii Kliukin and Jan Mussler, FOSDEM 2018: [video](https://fosdem.org/2018/schedule/event/blue_elephant_on_demand_postgres_kubernetes/) | [slides (pdf)](https://www.postgresql.eu/events/fosdem2018/sessions/session/1735/slides/59/FOSDEM%202018_%20Blue_Elephant_On_Demand.pdf)
+3. "PostgreSQL High Availability on Kubernetes with Patroni" talk by Oleksii Kliukin, Atmosphere 2018: [video](https://www.youtube.com/watch?v=cFlwQOPPkeg) | [slides](https://speakerdeck.com/alexeyklyukin/postgresql-high-availability-on-kubernetes-with-patroni)
+
+4. "Blue elephant on-demand: Postgres + Kubernetes" talk by Oleksii Kliukin and Jan Mussler, FOSDEM 2018: [video](https://fosdem.org/2018/schedule/event/blue_elephant_on_demand_postgres_kubernetes/) | [slides (pdf)](https://www.postgresql.eu/events/fosdem2018/sessions/session/1735/slides/59/FOSDEM%202018_%20Blue_Elephant_On_Demand.pdf)

 3. "Kube-Native Postgres" talk by Josh Berkus, KubeCon 2017: [video](https://www.youtube.com/watch?v=Zn1vd7sQ_bc)
@@ -61,8 +61,8 @@ These parameters are grouped directly under the `spec` key in the manifest.
   It should be a [Spilo](https://github.com/zalando/spilo) image. Optional.

 * **spiloFSGroup**
-  the Persistent Volumes for the spilo pods in the StatefulSet will be owned
-  and writable by the group ID specified. This will override the **spilo_fsgroup**
+  the Persistent Volumes for the spilo pods in the StatefulSet will be owned and
+  writable by the group ID specified. This will override the **spilo_fsgroup**
   operator parameter. This is required to run Spilo as a non-root process, but
   requires a custom spilo image. Note the FSGroup of a Pod cannot be changed
   without recreating a new Pod.
@@ -150,8 +150,8 @@ Those parameters are grouped under the `postgresql` top-level key.

 * **parameters**
   a dictionary of postgres parameter names and values to apply to the resulting
-  cluster. Optional (Spilo automatically sets reasonable defaults for
-  parameters like work_mem or max_connections).
+  cluster. Optional (Spilo automatically sets reasonable defaults for parameters
+  like work_mem or max_connections).

 ## Patroni parameters
@@ -255,8 +255,13 @@ under the `clone` top-level key and do not affect the already running cluster.
   timestamp. When this parameter is set the operator will not consider cloning
   from the live cluster, even if it is running, and instead goes to S3. Optional.

+* **s3_wal_path**
+  the url to S3 bucket containing the WAL archive of the cluster to be cloned.
+  Optional.
+
 * **s3_endpoint**
-  the url of the S3-compatible service should be set when cloning from non AWS S3. Optional.
+  the url of the S3-compatible service should be set when cloning from non AWS
+  S3. Optional.

 * **s3_access_key_id**
   the access key id, used for authentication on S3 service. Optional.
|
|
@ -265,8 +270,20 @@ under the `clone` top-level key and do not affect the already running cluster.
|
|||
the secret access key, used for authentication on S3 service. Optional.
|
||||
|
||||
* **s3_force_path_style**
|
||||
to enable path-style addressing(i.e., http://s3.amazonaws.com/BUCKET/KEY) when connecting to an S3-compatible service
|
||||
that lack of support for sub-domain style bucket URLs (i.e., http://BUCKET.s3.amazonaws.com/KEY). Optional.
|
||||
to enable path-style addressing(i.e., http://s3.amazonaws.com/BUCKET/KEY)
|
||||
when connecting to an S3-compatible service that lack of support for
|
||||
sub-domain style bucket URLs (i.e., http://BUCKET.s3.amazonaws.com/KEY).
|
||||
Optional.
|
||||
|
||||
## Standby cluster
|
||||
|
||||
On startup, an existing `standby` top-level key creates a standby Postgres
|
||||
cluster streaming from a remote location. So far only streaming from a S3 WAL
|
||||
archive is supported.
|
||||
|
||||
* **s3_wal_path**
|
||||
the url to S3 bucket containing the WAL archive of the remote primary.
|
||||
Optional.
|
||||
|
||||
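For illustration, a manifest fragment using this key might look like the following sketch; the bucket path mirrors the example standby manifest added elsewhere in this commit:

```yaml
spec:
  numberOfInstances: 1
  standby:
    s3_wal_path: "s3://path/to/bucket/containing/wal/of/source/cluster/"
```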
 ## EBS volume resizing
@@ -283,6 +300,9 @@ properties of the persistent storage that stores postgres data.
   documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/)
   for the details on storage classes. Optional.

+* **subPath**
+  Subpath to use when mounting the volume into the Spilo container.
+
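A hedged sketch of a `volume` section exercising the new key; the values come from the unit-test fixture elsewhere in this commit:

```yaml
volume:
  size: "5Gi"
  storageClass: "SSD"
  subPath: "subdir"
```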
 ## Sidecar definitions

 Those parameters are defined under the `sidecars` key. They consist of a list
@@ -162,6 +162,13 @@ configuration they are grouped under the `kubernetes` key.
   replaced by the cluster name. Only the `{cluster}` placeholder is allowed in
   the template.

+* **enable_pod_disruption_budget**
+  PDB is enabled by default to protect the cluster from voluntary disruptions
+  and hence unwanted DB downtime. However, on some cloud providers it could be
+  necessary to temporarily disable it, e.g. for node updates. See
+  [admin docs](../administrator.md#pod-disruption-budget) for more information.
+  Default is true.
+
 * **secret_name_template**
   a template for the name of the database user secrets generated by the
   operator. `{username}` is replaced with the name of the secret, `{cluster}` with
@@ -400,7 +407,13 @@ yet officially supported.
   empty.

 * **aws_region**
-  AWS region used to store ESB volumes. The default is `eu-central-1`.
+  AWS region used to store EBS volumes. The default is `eu-central-1`.

+* **additional_secret_mount**
+  Additional Secret (aws or gcp credentials) to mount in the pod. The default is empty.
+
+* **additional_secret_mount_path**
+  Path to mount the above Secret in the filesystem of the container(s). The default is empty.
+
 ## Logical backup
docs/user.md (31 changes)
@@ -280,6 +280,37 @@ spec:
     s3_force_path_style: true
 ```

+## Setting up a standby cluster
+
+Standby clusters are like normal clusters, but they stream from a remote
+cluster. In this first version of the feature, the only scenario covered by the
+operator is streaming from the WAL archive of the master. Following the
+popular infrastructure of Amazon's S3 buckets, that location is given as
+`s3_wal_path` here. To make a cluster a standby, add a `standby` section in the
+YAML file as follows.
+
+```yaml
+spec:
+  standby:
+    s3_wal_path: "s3 bucket path to the master"
+```
+
+Things to note:
+
+- An empty string provided in `s3_wal_path` of the standby cluster will
+  result in an error and no statefulset will be created.
+- Only one pod can be deployed for a standby cluster.
+- To manually promote the standby cluster, use `patronictl` and remove the
+  config entry.
+- There is no way to transform a non-standby cluster into a standby cluster
+  through the operator. Hence, if a cluster is created without a standby
+  section in the YAML and that section is added later, there will be no effect
+  on the cluster. However, it can be done through Patroni by adding the
+  [standby_cluster](https://github.com/zalando/patroni/blob/bd2c54581abb42a7d3a3da551edf0b8732eefd27/docs/replica_bootstrap.rst#standby-cluster)
+  section using `patronictl edit-config` (a sketch follows below). Note that
+  the transformed standby cluster will not be doing any streaming; it will be
+  in standby mode and allow read-only transactions only.
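As a hedged illustration of that Patroni route, the section added with `patronictl edit-config` could look like the snippet below; the host and port values are placeholders, not part of this commit, and the key names follow the linked Patroni replica_bootstrap documentation:

```yaml
# hypothetical section added via `patronictl edit-config`
standby_cluster:
  host: 10.0.0.1
  port: 5432
```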
+
 ## Sidecar Support

 Each cluster can specify arbitrary sidecars to run. These containers could be
@@ -66,7 +66,7 @@ spec:
   # cluster: "acid-batman"
   # timestamp: "2017-12-19T12:40:33+01:00" # timezone required (offset relative to UTC, see RFC 3339 section 5.6)
+  # s3_wal_path: "s3://custom/path/to/bucket"

   # run periodic backups with k8s cron jobs
   # enableLogicalBackup: true
   # logicalBackupSchedule: "30 00 * * *"
@@ -86,4 +86,3 @@ spec:
   # env:
   # - name: "USEFUL_VAR"
   #   value: "perhaps-true"
-
@@ -33,6 +33,8 @@ data:
   # https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees
   # inherited_labels: ""
   aws_region: eu-central-1
+  # additional_secret_mount: "some-secret-name"
+  # additional_secret_mount_path: "/some/dir"
   db_hosted_zone: db.example.com
   master_dns_name_format: '{cluster}.{team}.staging.{hostedzone}'
   replica_dns_name_format: '{cluster}-repl.{team}.staging.{hostedzone}'
@@ -40,6 +40,7 @@ rules:
   verbs:
   - create
   - delete
   - deletecollection
   - get
   - list
+  - patch
@@ -1,9 +1,12 @@
-apiVersion: apps/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: postgres-operator
 spec:
   replicas: 1
+  selector:
+    matchLabels:
+      name: postgres-operator
   template:
     metadata:
       labels:
@@ -20,6 +20,7 @@ configuration:
   pod_service_account_name: operator
   pod_terminate_grace_period: 5m
   pdb_name_format: "postgres-{cluster}-pdb"
+  enable_pod_disruption_budget: true
   secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
   cluster_domain: cluster.local
   oauth_token_secret_name: postgresql-operator

@@ -66,6 +67,8 @@ configuration:
   # log_s3_bucket: ""
   # kube_iam_role: ""
   aws_region: eu-central-1
+  # additional_secret_mount: "some-secret-name"
+  # additional_secret_mount_path: "/some/dir"
 debug:
   debug_logging: true
   enable_database_access: true
@@ -0,0 +1,20 @@
+apiVersion: "acid.zalan.do/v1"
+kind: postgresql
+metadata:
+  name: acid-standby-cluster
+  namespace: default
+spec:
+  teamId: "ACID"
+  volume:
+    size: 1Gi
+  numberOfInstances: 1
+  postgresql:
+    version: "10"
+  # Make this a standby cluster and provide the s3 bucket path of source cluster for continuous streaming.
+  standby:
+    s3_wal_path: "s3://path/to/bucket/containing/wal/of/source/cluster/"
+
+  maintenanceWindows:
+  - 01:00-06:00 #UTC
+  - Sat:00:00-04:00
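Assuming the new file lands at manifests/standby-manifest.yaml as its content suggests, the example standby cluster can be created with the usual `kubectl create -f manifests/standby-manifest.yaml`.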
@@ -49,6 +49,7 @@ type KubernetesMetaConfiguration struct {
	SpiloFSGroup *int64 `json:"spilo_fsgroup,omitempty"`
	WatchedNamespace string `json:"watched_namespace,omitempty"`
	PDBNameFormat config.StringTemplate `json:"pdb_name_format,omitempty"`
+	EnablePodDisruptionBudget *bool `json:"enable_pod_disruption_budget,omitempty"`
	SecretNameTemplate config.StringTemplate `json:"secret_name_template,omitempty"`
	ClusterDomain string `json:"cluster_domain"`
	OAuthTokenSecretName spec.NamespacedName `json:"oauth_token_secret_name,omitempty"`
@@ -100,10 +101,12 @@ type LoadBalancerConfiguration struct {
 // AWSGCPConfiguration defines the configuration for AWS
 // TODO complete Google Cloud Platform (GCP) configuration
 type AWSGCPConfiguration struct {
-	WALES3Bucket string `json:"wal_s3_bucket,omitempty"`
-	AWSRegion string `json:"aws_region,omitempty"`
-	LogS3Bucket string `json:"log_s3_bucket,omitempty"`
-	KubeIAMRole string `json:"kube_iam_role,omitempty"`
+	WALES3Bucket              string `json:"wal_s3_bucket,omitempty"`
+	AWSRegion                 string `json:"aws_region,omitempty"`
+	LogS3Bucket               string `json:"log_s3_bucket,omitempty"`
+	KubeIAMRole               string `json:"kube_iam_role,omitempty"`
+	AdditionalSecretMount     string `json:"additional_secret_mount,omitempty"`
+	AdditionalSecretMountPath string `json:"additional_secret_mount_path" default:"/meta/credentials"`
 }

 // OperatorDebugConfiguration defines options for the debug mode
@@ -58,6 +58,7 @@ type PostgresSpec struct {
	ShmVolume *bool `json:"enableShmVolume,omitempty"`
	EnableLogicalBackup bool `json:"enableLogicalBackup,omitempty"`
	LogicalBackupSchedule string `json:"logicalBackupSchedule,omitempty"`
+	StandbyCluster *StandbyDescription `json:"standby"`
 }

 // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
@@ -82,6 +83,7 @@ type MaintenanceWindow struct {
 type Volume struct {
	Size string `json:"size"`
	StorageClass string `json:"storageClass"`
+	SubPath string `json:"subPath,omitempty"`
 }

 // PostgresqlParam describes PostgreSQL version and pairs of configuration parameter name - values.
@@ -113,6 +115,11 @@ type Patroni struct {
	Slots map[string]map[string]string `json:"slots"`
 }

+// StandbyDescription contains the remote primary's S3 WAL path for a standby cluster
+type StandbyDescription struct {
+	S3WalPath string `json:"s3_wal_path,omitempty"`
+}
+
 // CloneDescription describes which cluster the new one should clone and up to which point in time
 type CloneDescription struct {
	ClusterName string `json:"cluster,omitempty"`
@@ -181,7 +181,8 @@ var unmarshalCluster = []struct {
	"teamId": "ACID",
	"volume": {
		"size": "5Gi",
-		"storageClass": "SSD"
+		"storageClass": "SSD",
+		"subPath": "subdir"
	},
	"numberOfInstances": 2,
	"users": {
@@ -263,6 +264,7 @@ var unmarshalCluster = []struct {
			Volume: Volume{
				Size: "5Gi",
				StorageClass: "SSD",
+				SubPath: "subdir",
			},
			Patroni: Patroni{
				InitDB: map[string]string{
@@ -311,7 +313,7 @@ var unmarshalCluster = []struct {
			},
			Error: "",
		},
-		marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"9.6","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"volume":{"size":"5Gi","storageClass":"SSD"},"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"ACID","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`),
+		marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"9.6","parameters":{"log_statement":"all","max_connections":"10","shared_buffers":"32MB"}},"volume":{"size":"5Gi","storageClass":"SSD", "subPath": "subdir"},"patroni":{"initdb":{"data-checksums":"true","encoding":"UTF8","locale":"en_US.UTF-8"},"pg_hba":["hostssl all all 0.0.0.0/0 md5","host all all 0.0.0.0/0 md5"],"ttl":30,"loop_wait":10,"retry_timeout":10,"maximum_lag_on_failover":33554432,"slots":{"permanent_logical_1":{"database":"foo","plugin":"pgoutput","type":"logical"}}},"resources":{"requests":{"cpu":"10m","memory":"50Mi"},"limits":{"cpu":"300m","memory":"3000Mi"}},"teamId":"ACID","allowedSourceRanges":["127.0.0.1/32"],"numberOfInstances":2,"users":{"zalando":["superuser","createdb"]},"maintenanceWindows":["Mon:01:00-06:00","Sat:00:00-04:00","05:00-05:15"],"clone":{"cluster":"acid-batman"}},"status":{"PostgresClusterStatus":""}}`),
		err: nil},
	// example with teamId set in input
	{
@@ -328,7 +330,7 @@ var unmarshalCluster = []struct {
			Status: PostgresStatus{PostgresClusterStatus: ClusterStatusInvalid},
			Error: errors.New("name must match {TEAM}-{NAME} format").Error(),
		},
-		marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"teapot-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"","parameters":null},"volume":{"size":"","storageClass":""},"patroni":{"initdb":null,"pg_hba":null,"ttl":0,"loop_wait":0,"retry_timeout":0,"maximum_lag_on_failover":0,"slots":null},"resources":{"requests":{"cpu":"","memory":""},"limits":{"cpu":"","memory":""}},"teamId":"acid","allowedSourceRanges":null,"numberOfInstances":0,"users":null,"clone":{}},"status":{"PostgresClusterStatus":"Invalid"}}`),
+		marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"teapot-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"","parameters":null},"volume":{"size":"","storageClass":""},"patroni":{"initdb":null,"pg_hba":null,"ttl":0,"loop_wait":0,"retry_timeout":0,"maximum_lag_on_failover":0,"slots":null} ,"resources":{"requests":{"cpu":"","memory":""},"limits":{"cpu":"","memory":""}},"teamId":"acid","allowedSourceRanges":null,"numberOfInstances":0,"users":null,"clone":{}},"status":{"PostgresClusterStatus":"Invalid"}}`),
		err: nil},
	// clone example
	{
@@ -352,6 +354,28 @@ var unmarshalCluster = []struct {
		},
		marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"","parameters":null},"volume":{"size":"","storageClass":""},"patroni":{"initdb":null,"pg_hba":null,"ttl":0,"loop_wait":0,"retry_timeout":0,"maximum_lag_on_failover":0,"slots":null},"resources":{"requests":{"cpu":"","memory":""},"limits":{"cpu":"","memory":""}},"teamId":"acid","allowedSourceRanges":null,"numberOfInstances":0,"users":null,"clone":{"cluster":"team-batman"}},"status":{"PostgresClusterStatus":""}}`),
		err: nil},
+	// standby example
+	{
+		in: []byte(`{"kind": "Postgresql","apiVersion": "acid.zalan.do/v1","metadata": {"name": "acid-testcluster1"}, "spec": {"teamId": "acid", "standby": {"s3_wal_path": "s3://custom/path/to/bucket/"}}}`),
+		out: Postgresql{
+			TypeMeta: metav1.TypeMeta{
+				Kind: "Postgresql",
+				APIVersion: "acid.zalan.do/v1",
+			},
+			ObjectMeta: metav1.ObjectMeta{
+				Name: "acid-testcluster1",
+			},
+			Spec: PostgresSpec{
+				TeamID: "acid",
+				StandbyCluster: &StandbyDescription{
+					S3WalPath: "s3://custom/path/to/bucket/",
+				},
+				ClusterName: "testcluster1",
+			},
+			Error: "",
+		},
+		marshal: []byte(`{"kind":"Postgresql","apiVersion":"acid.zalan.do/v1","metadata":{"name":"acid-testcluster1","creationTimestamp":null},"spec":{"postgresql":{"version":"","parameters":null},"volume":{"size":"","storageClass":""},"patroni":{"initdb":null,"pg_hba":null,"ttl":0,"loop_wait":0,"retry_timeout":0,"maximum_lag_on_failover":0,"slots":null},"resources":{"requests":{"cpu":"","memory":""},"limits":{"cpu":"","memory":""}},"teamId":"acid","allowedSourceRanges":null,"numberOfInstances":0,"users":null,"standby":{"s3_wal_path":"s3://custom/path/to/bucket/"}},"status":{"PostgresClusterStatus":""}}`),
+		err: nil},
	// erroneous examples
	{
		in: []byte(`{"kind": "Postgresql","apiVersion": "acid.zalan.do/v1"`),
@@ -76,6 +76,11 @@ func (in *KubernetesMetaConfiguration) DeepCopyInto(out *KubernetesMetaConfigura
		*out = new(int64)
		**out = **in
	}
+	if in.EnablePodDisruptionBudget != nil {
+		in, out := &in.EnablePodDisruptionBudget, &out.EnablePodDisruptionBudget
+		*out = new(bool)
+		**out = **in
+	}
	out.OAuthTokenSecretName = in.OAuthTokenSecretName
	out.InfrastructureRolesSecretName = in.InfrastructureRolesSecretName
	if in.ClusterLabels != nil {
@@ -498,6 +503,11 @@ func (in *PostgresSpec) DeepCopyInto(out *PostgresSpec) {
		*out = new(bool)
		**out = **in
	}
+	if in.StandbyCluster != nil {
+		in, out := &in.StandbyCluster, &out.StandbyCluster
+		*out = new(StandbyDescription)
+		**out = **in
+	}
	return
 }
@@ -706,6 +716,22 @@ func (in *Sidecar) DeepCopy() *Sidecar {
	return out
 }

+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *StandbyDescription) DeepCopyInto(out *StandbyDescription) {
+	*out = *in
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StandbyDescription.
+func (in *StandbyDescription) DeepCopy() *StandbyDescription {
+	if in == nil {
+		return nil
+	}
+	out := new(StandbyDescription)
+	in.DeepCopyInto(out)
+	return out
+}
+
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *TeamsAPIConfiguration) DeepCopyInto(out *TeamsAPIConfiguration) {
	*out = *in
@@ -287,7 +287,7 @@ func (c *Cluster) Create() error {
	c.logger.Infof("pods are ready")

	// create database objects unless we are running without pods or disabled that feature explicitly
-	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0) {
+	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) {
		if err = c.createRoles(); err != nil {
			return fmt.Errorf("could not create users: %v", err)
		}
@@ -579,6 +579,15 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
		}
	}()

+	// pod disruption budget
+	if oldSpec.Spec.NumberOfInstances != newSpec.Spec.NumberOfInstances {
+		c.logger.Debug("syncing pod disruption budgets")
+		if err := c.syncPodDisruptionBudget(true); err != nil {
+			c.logger.Errorf("could not sync pod disruption budget: %v", err)
+			updateFailed = true
+		}
+	}
+
	// logical backup job
	func() {
@@ -617,7 +626,7 @@ func (c *Cluster) Update(oldSpec, newSpec *acidv1.Postgresql) error {
	}()

	// Roles and Databases
-	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0) {
+	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&c.Spec) <= 0 || c.Spec.StandbyCluster != nil) {
		c.logger.Debugf("syncing roles")
		if err := c.syncRoles(); err != nil {
			c.logger.Errorf("could not sync roles: %v", err)
@@ -342,11 +342,12 @@ func isBootstrapOnlyParameter(param string) bool {
		param == "track_commit_timestamp"
 }

-func generateVolumeMounts() []v1.VolumeMount {
+func generateVolumeMounts(volume acidv1.Volume) []v1.VolumeMount {
	return []v1.VolumeMount{
		{
			Name:      constants.DataVolumeName,
			MountPath: constants.PostgresDataMount, //TODO: fetch from manifest
+			SubPath:   volume.SubPath,
		},
	}
 }
@@ -359,6 +360,8 @@ func generateContainer(
	volumeMounts []v1.VolumeMount,
	privilegedMode bool,
 ) *v1.Container {
+	falseBool := false
+
	return &v1.Container{
		Name:  name,
		Image: *dockerImage,
@@ -381,7 +384,8 @@ func generateContainer(
		VolumeMounts: volumeMounts,
		Env:          envVars,
		SecurityContext: &v1.SecurityContext{
-			Privileged: &privilegedMode,
+			Privileged:             &privilegedMode,
+			ReadOnlyRootFilesystem: &falseBool,
		},
	}
 }
@@ -441,6 +445,8 @@ func generatePodTemplate(
	shmVolume bool,
	podAntiAffinity bool,
	podAntiAffinityTopologyKey string,
+	additionalSecretMount string,
+	additionalSecretMountPath string,
 ) (*v1.PodTemplateSpec, error) {

	terminateGracePeriodSeconds := terminateGracePeriod
@@ -475,6 +481,10 @@ func generatePodTemplate(
		podSpec.PriorityClassName = priorityClassName
	}

+	if additionalSecretMount != "" {
+		addSecretVolume(&podSpec, additionalSecretMount, additionalSecretMountPath)
+	}
+
	template := v1.PodTemplateSpec{
		ObjectMeta: metav1.ObjectMeta{
			Labels: labels,
@@ -490,7 +500,7 @@ func generatePodTemplate(
 }

 // generatePodEnvVars generates environment variables for the Spilo Pod
-func (c *Cluster) generateSpiloPodEnvVars(uid types.UID, spiloConfiguration string, cloneDescription *acidv1.CloneDescription, customPodEnvVarsList []v1.EnvVar) []v1.EnvVar {
+func (c *Cluster) generateSpiloPodEnvVars(uid types.UID, spiloConfiguration string, cloneDescription *acidv1.CloneDescription, standbyDescription *acidv1.StandbyDescription, customPodEnvVarsList []v1.EnvVar) []v1.EnvVar {
	envVars := []v1.EnvVar{
		{
			Name: "SCOPE",
@@ -594,6 +604,10 @@ func (c *Cluster) generateSpiloPodEnvVars(uid types.UID, spiloConfiguration stri
		envVars = append(envVars, c.generateCloneEnvironment(cloneDescription)...)
	}

+	if c.Spec.StandbyCluster != nil {
+		envVars = append(envVars, c.generateStandbyEnvironment(standbyDescription)...)
+	}
+
	if len(customPodEnvVarsList) > 0 {
		envVars = append(envVars, customPodEnvVarsList...)
	}
@@ -783,6 +797,9 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
		sort.Slice(customPodEnvVarsList,
			func(i, j int) bool { return customPodEnvVarsList[i].Name < customPodEnvVarsList[j].Name })
	}
+	if spec.StandbyCluster != nil && spec.StandbyCluster.S3WalPath == "" {
+		return nil, fmt.Errorf("s3_wal_path is empty for standby cluster")
+	}

	spiloConfiguration, err := generateSpiloJSONConfiguration(&spec.PostgresqlParam, &spec.Patroni, c.OpConfig.PamRoleName, c.logger)
	if err != nil {
@@ -792,12 +809,12 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
	// generate environment variables for the spilo container
	spiloEnvVars := deduplicateEnvVars(
		c.generateSpiloPodEnvVars(c.Postgresql.GetUID(), spiloConfiguration, &spec.Clone,
-			customPodEnvVarsList), c.containerName(), c.logger)
+			spec.StandbyCluster, customPodEnvVarsList), c.containerName(), c.logger)

	// pickup the docker image for the spilo container
	effectiveDockerImage := util.Coalesce(spec.DockerImage, c.OpConfig.DockerImage)

-	volumeMounts := generateVolumeMounts()
+	volumeMounts := generateVolumeMounts(spec.Volume)

	// generate the spilo container
	c.logger.Debugf("Generating Spilo container, environment variables: %v", spiloEnvVars)
@@ -860,7 +877,9 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
		effectivePodPriorityClassName,
		mountShmVolumeNeeded(c.OpConfig, spec),
		c.OpConfig.EnablePodAntiAffinity,
-		c.OpConfig.PodAntiAffinityTopologyKey); err != nil {
+		c.OpConfig.PodAntiAffinityTopologyKey,
+		c.OpConfig.AdditionalSecretMount,
+		c.OpConfig.AdditionalSecretMountPath); err != nil {
		return nil, fmt.Errorf("could not generate pod template: %v", err)
	}
@@ -970,6 +989,11 @@ func (c *Cluster) getNumberOfInstances(spec *acidv1.PostgresSpec) int32 {
	cur := spec.NumberOfInstances
	newcur := cur

+	/* Limit the max number of pods to one, if this is a standby cluster */
+	if spec.StandbyCluster != nil {
+		c.logger.Info("Standby cluster can have a maximum of 1 pod")
+		max = 1
+	}
	if max >= 0 && newcur > max {
		newcur = max
	}
@@ -1009,6 +1033,28 @@ func addShmVolume(podSpec *v1.PodSpec) {
	podSpec.Volumes = volumes
 }

+func addSecretVolume(podSpec *v1.PodSpec, additionalSecretMount string, additionalSecretMountPath string) {
+	volumes := append(podSpec.Volumes, v1.Volume{
+		Name: additionalSecretMount,
+		VolumeSource: v1.VolumeSource{
+			Secret: &v1.SecretVolumeSource{
+				SecretName: additionalSecretMount,
+			},
+		},
+	})
+
+	for i := range podSpec.Containers {
+		mounts := append(podSpec.Containers[i].VolumeMounts,
+			v1.VolumeMount{
+				Name:      additionalSecretMount,
+				MountPath: additionalSecretMountPath,
+			})
+		podSpec.Containers[i].VolumeMounts = mounts
+	}
+
+	podSpec.Volumes = volumes
+}
+
 func generatePersistentVolumeClaimTemplate(volumeSize, volumeStorageClass string) (*v1.PersistentVolumeClaim, error) {

	var storageClassName *string
@@ -1268,34 +1314,61 @@ func (c *Cluster) generateCloneEnvironment(description *acidv1.CloneDescription)
	result = append(result, v1.EnvVar{Name: "CLONE_WAL_BUCKET_SCOPE_PREFIX", Value: ""})

	if description.S3Endpoint != "" {
		result = append(result, v1.EnvVar{Name: "CLONE_AWS_ENDPOINT", Value: description.S3Endpoint})
		result = append(result, v1.EnvVar{Name: "CLONE_WALE_S3_ENDPOINT", Value: description.S3Endpoint})
	}

	if description.S3AccessKeyId != "" {
		result = append(result, v1.EnvVar{Name: "CLONE_AWS_ACCESS_KEY_ID", Value: description.S3AccessKeyId})
	}

	if description.S3SecretAccessKey != "" {
		result = append(result, v1.EnvVar{Name: "CLONE_AWS_SECRET_ACCESS_KEY", Value: description.S3SecretAccessKey})
	}

	if description.S3ForcePathStyle != nil {
		s3ForcePathStyle := "0"

		if *description.S3ForcePathStyle {
			s3ForcePathStyle = "1"
		}

		result = append(result, v1.EnvVar{Name: "CLONE_AWS_S3_FORCE_PATH_STYLE", Value: s3ForcePathStyle})
	}

	return result
 }

+func (c *Cluster) generateStandbyEnvironment(description *acidv1.StandbyDescription) []v1.EnvVar {
+	result := make([]v1.EnvVar, 0)
+
+	if description.S3WalPath == "" {
+		return nil
+	}
+	// standby with S3, find out the bucket to set up the standby
+	msg := "Standby from S3 bucket using custom parsed S3WalPath from the manifest %s "
+	c.logger.Infof(msg, description.S3WalPath)
+
+	result = append(result, v1.EnvVar{
+		Name:  "STANDBY_WALE_S3_PREFIX",
+		Value: description.S3WalPath,
+	})
+
+	result = append(result, v1.EnvVar{Name: "STANDBY_METHOD", Value: "STANDBY_WITH_WALE"})
+	result = append(result, v1.EnvVar{Name: "STANDBY_WAL_BUCKET_SCOPE_PREFIX", Value: ""})
+
+	return result
+}
+
 func (c *Cluster) generatePodDisruptionBudget() *policybeta1.PodDisruptionBudget {
	minAvailable := intstr.FromInt(1)
+	pdbEnabled := c.OpConfig.EnablePodDisruptionBudget
+
+	// if PodDisruptionBudget is disabled or if there are no DB pods, set the budget to 0.
+	if (pdbEnabled != nil && !*pdbEnabled) || c.Spec.NumberOfInstances <= 0 {
+		minAvailable = intstr.FromInt(0)
+	}

	return &policybeta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{
@@ -1385,6 +1458,8 @@ func (c *Cluster) generateLogicalBackupJob() (*batchv1beta1.CronJob, error) {
		"",
		false,
		false,
+		"",
+		"",
		""); err != nil {
		return nil, fmt.Errorf("could not generate pod template for logical backup pod: %v", err)
	}
@@ -1,6 +1,8 @@
 package cluster

 import (
+	"reflect"
+
	"k8s.io/api/core/v1"

	"testing"
@@ -9,6 +11,10 @@ import (
	"github.com/zalando/postgres-operator/pkg/util/config"
	"github.com/zalando/postgres-operator/pkg/util/constants"
	"github.com/zalando/postgres-operator/pkg/util/k8sutil"
+
+	policyv1beta1 "k8s.io/api/policy/v1beta1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
 )

 func True() *bool {
@@ -21,6 +27,11 @@ func False() *bool {
	return &b
 }

+func toIntStr(val int) *intstr.IntOrString {
+	b := intstr.FromInt(val)
+	return &b
+}
+
 func TestGenerateSpiloJSONConfiguration(t *testing.T) {
	var cluster = New(
		Config{
@@ -143,6 +154,113 @@ func TestCreateLoadBalancerLogic(t *testing.T) {
	}
 }

+func TestGeneratePodDisruptionBudget(t *testing.T) {
+	tests := []struct {
+		c   *Cluster
+		out policyv1beta1.PodDisruptionBudget
+	}{
+		// With multiple instances.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec:       acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      "postgres-myapp-database-pdb",
+					Namespace: "myapp",
+					Labels:    map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+		// With zero instances.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb"}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec:       acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 0}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      "postgres-myapp-database-pdb",
+					Namespace: "myapp",
+					Labels:    map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(0),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+		// With PodDisruptionBudget disabled.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-pdb", EnablePodDisruptionBudget: False()}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec:       acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      "postgres-myapp-database-pdb",
+					Namespace: "myapp",
+					Labels:    map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(0),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+		// With non-default PDBNameFormat and PodDisruptionBudget explicitly enabled.
+		{
+			New(
+				Config{OpConfig: config.Config{Resources: config.Resources{ClusterNameLabel: "cluster-name", PodRoleLabel: "spilo-role"}, PDBNameFormat: "postgres-{cluster}-databass-budget", EnablePodDisruptionBudget: True()}},
+				k8sutil.KubernetesClient{},
+				acidv1.Postgresql{
+					ObjectMeta: metav1.ObjectMeta{Name: "myapp-database", Namespace: "myapp"},
+					Spec:       acidv1.PostgresSpec{TeamID: "myapp", NumberOfInstances: 3}},
+				logger),
+			policyv1beta1.PodDisruptionBudget{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      "postgres-myapp-database-databass-budget",
+					Namespace: "myapp",
+					Labels:    map[string]string{"team": "myapp", "cluster-name": "myapp-database"},
+				},
+				Spec: policyv1beta1.PodDisruptionBudgetSpec{
+					MinAvailable: toIntStr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{"spilo-role": "master", "cluster-name": "myapp-database"},
+					},
+				},
+			},
+		},
+	}
+
+	for _, tt := range tests {
+		result := tt.c.generatePodDisruptionBudget()
+		if !reflect.DeepEqual(*result, tt.out) {
+			t.Errorf("Expected PodDisruptionBudget: %#v, got %#v", tt.out, *result)
+		}
+	}
+}
+
 func TestShmVolume(t *testing.T) {
	testName := "TestShmVolume"
	tests := []struct {
@@ -269,6 +387,76 @@ func TestCloneEnv(t *testing.T) {
			t.Errorf("%s %s: Expected env value %s, have %s instead",
				testName, tt.subTest, tt.env.Value, env.Value)
		}

	}
 }

+func TestSecretVolume(t *testing.T) {
+	testName := "TestSecretVolume"
+	tests := []struct {
+		subTest   string
+		podSpec   *v1.PodSpec
+		secretPos int
+	}{
+		{
+			subTest: "empty PodSpec",
+			podSpec: &v1.PodSpec{
+				Volumes: []v1.Volume{},
+				Containers: []v1.Container{
+					{
+						VolumeMounts: []v1.VolumeMount{},
+					},
+				},
+			},
+			secretPos: 0,
+		},
+		{
+			subTest: "non empty PodSpec",
+			podSpec: &v1.PodSpec{
+				Volumes: []v1.Volume{{}},
+				Containers: []v1.Container{
+					{
+						VolumeMounts: []v1.VolumeMount{
+							{
+								Name:      "data",
+								ReadOnly:  false,
+								MountPath: "/data",
+							},
+						},
+					},
+				},
+			},
+			secretPos: 1,
+		},
+	}
+	for _, tt := range tests {
+		additionalSecretMount := "aws-iam-s3-role"
+		additionalSecretMountPath := "/meta/credentials"
+
+		numMounts := len(tt.podSpec.Containers[0].VolumeMounts)
+
+		addSecretVolume(tt.podSpec, additionalSecretMount, additionalSecretMountPath)
+
+		volumeName := tt.podSpec.Volumes[tt.secretPos].Name
+
+		if volumeName != additionalSecretMount {
+			t.Errorf("%s %s: Expected volume %s was not created, have %s instead",
+				testName, tt.subTest, additionalSecretMount, volumeName)
+		}
+
+		for i := range tt.podSpec.Containers {
+			volumeMountName := tt.podSpec.Containers[i].VolumeMounts[tt.secretPos].Name
+
+			if volumeMountName != additionalSecretMount {
+				t.Errorf("%s %s: Expected mount %s was not created, have %s instead",
+					testName, tt.subTest, additionalSecretMount, volumeMountName)
+			}
+		}
+
+		numMountsCheck := len(tt.podSpec.Containers[0].VolumeMounts)
+
+		if numMountsCheck != numMounts+1 {
+			t.Errorf("Unexpected number of VolumeMounts: got %v instead of %v",
+				numMountsCheck, numMounts+1)
+		}
+	}
+}
@@ -73,20 +73,6 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
		}
	}

-	// create database objects unless we are running without pods or disabled that feature explicitly
-	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&newSpec.Spec) <= 0) {
-		c.logger.Debugf("syncing roles")
-		if err = c.syncRoles(); err != nil {
-			err = fmt.Errorf("could not sync roles: %v", err)
-			return err
-		}
-		c.logger.Debugf("syncing databases")
-		if err = c.syncDatabases(); err != nil {
-			err = fmt.Errorf("could not sync databases: %v", err)
-			return err
-		}
-	}
-
	c.logger.Debug("syncing pod disruption budgets")
	if err = c.syncPodDisruptionBudget(false); err != nil {
		err = fmt.Errorf("could not sync pod disruption budget: %v", err)
@@ -103,6 +89,20 @@ func (c *Cluster) Sync(newSpec *acidv1.Postgresql) error {
		}
	}

+	// create database objects unless we are running without pods or disabled that feature explicitly
+	if !(c.databaseAccessDisabled() || c.getNumberOfInstances(&newSpec.Spec) <= 0 || c.Spec.StandbyCluster != nil) {
+		c.logger.Debugf("syncing roles")
+		if err = c.syncRoles(); err != nil {
+			err = fmt.Errorf("could not sync roles: %v", err)
+			return err
+		}
+		c.logger.Debugf("syncing databases")
+		if err = c.syncDatabases(); err != nil {
+			err = fmt.Errorf("could not sync databases: %v", err)
+			return err
+		}
+	}
+
	return err
 }
@@ -46,6 +46,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
	result.ClusterDomain = fromCRD.Kubernetes.ClusterDomain
	result.WatchedNamespace = fromCRD.Kubernetes.WatchedNamespace
	result.PDBNameFormat = fromCRD.Kubernetes.PDBNameFormat
+	result.EnablePodDisruptionBudget = fromCRD.Kubernetes.EnablePodDisruptionBudget
	result.SecretNameTemplate = fromCRD.Kubernetes.SecretNameTemplate
	result.OAuthTokenSecretName = fromCRD.Kubernetes.OAuthTokenSecretName
	result.InfrastructureRolesSecretName = fromCRD.Kubernetes.InfrastructureRolesSecretName
@@ -85,6 +86,8 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
	result.AWSRegion = fromCRD.AWSGCP.AWSRegion
	result.LogS3Bucket = fromCRD.AWSGCP.LogS3Bucket
	result.KubeIAMRole = fromCRD.AWSGCP.KubeIAMRole
+	result.AdditionalSecretMount = fromCRD.AWSGCP.AdditionalSecretMount
+	result.AdditionalSecretMountPath = fromCRD.AWSGCP.AdditionalSecretMountPath

	result.DebugLogging = fromCRD.OperatorDebug.DebugLogging
	result.EnableDBAccess = fromCRD.OperatorDebug.EnableDBAccess
@@ -98,6 +98,8 @@ type Config struct {
	WALES3Bucket string `name:"wal_s3_bucket"`
	LogS3Bucket string `name:"log_s3_bucket"`
	KubeIAMRole string `name:"kube_iam_role"`
+	AdditionalSecretMount string `name:"additional_secret_mount"`
+	AdditionalSecretMountPath string `name:"additional_secret_mount_path" default:"/meta/credentials"`
	DebugLogging bool `name:"debug_logging" default:"true"`
	EnableDBAccess bool `name:"enable_database_access" default:"true"`
	EnableTeamsAPI bool `name:"enable_teams_api" default:"true"`
@@ -110,20 +112,21 @@ type Config struct {
	EnablePodAntiAffinity bool `name:"enable_pod_antiaffinity" default:"false"`
	PodAntiAffinityTopologyKey string `name:"pod_antiaffinity_topology_key" default:"kubernetes.io/hostname"`
	// deprecated and kept for backward compatibility
-	EnableLoadBalancer *bool `name:"enable_load_balancer"`
-	MasterDNSNameFormat StringTemplate `name:"master_dns_name_format" default:"{cluster}.{team}.{hostedzone}"`
-	ReplicaDNSNameFormat StringTemplate `name:"replica_dns_name_format" default:"{cluster}-repl.{team}.{hostedzone}"`
-	PDBNameFormat StringTemplate `name:"pdb_name_format" default:"postgres-{cluster}-pdb"`
-	Workers uint32 `name:"workers" default:"4"`
-	APIPort int `name:"api_port" default:"8080"`
-	RingLogLines int `name:"ring_log_lines" default:"100"`
-	ClusterHistoryEntries int `name:"cluster_history_entries" default:"1000"`
-	TeamAPIRoleConfiguration map[string]string `name:"team_api_role_configuration" default:"log_statement:all"`
-	PodTerminateGracePeriod time.Duration `name:"pod_terminate_grace_period" default:"5m"`
-	PodManagementPolicy string `name:"pod_management_policy" default:"ordered_ready"`
-	ProtectedRoles []string `name:"protected_role_names" default:"admin"`
-	PostgresSuperuserTeams []string `name:"postgres_superuser_teams" default:""`
-	SetMemoryRequestToLimit bool `name:"set_memory_request_to_limit" defaults:"false"`
+	EnableLoadBalancer *bool `name:"enable_load_balancer"`
+	MasterDNSNameFormat StringTemplate `name:"master_dns_name_format" default:"{cluster}.{team}.{hostedzone}"`
+	ReplicaDNSNameFormat StringTemplate `name:"replica_dns_name_format" default:"{cluster}-repl.{team}.{hostedzone}"`
+	PDBNameFormat StringTemplate `name:"pdb_name_format" default:"postgres-{cluster}-pdb"`
+	EnablePodDisruptionBudget *bool `name:"enable_pod_disruption_budget" default:"true"`
+	Workers uint32 `name:"workers" default:"4"`
+	APIPort int `name:"api_port" default:"8080"`
+	RingLogLines int `name:"ring_log_lines" default:"100"`
+	ClusterHistoryEntries int `name:"cluster_history_entries" default:"1000"`
+	TeamAPIRoleConfiguration map[string]string `name:"team_api_role_configuration" default:"log_statement:all"`
+	PodTerminateGracePeriod time.Duration `name:"pod_terminate_grace_period" default:"5m"`
+	PodManagementPolicy string `name:"pod_management_policy" default:"ordered_ready"`
+	ProtectedRoles []string `name:"protected_role_names" default:"admin"`
+	PostgresSuperuserTeams []string `name:"postgres_superuser_teams" default:""`
+	SetMemoryRequestToLimit bool `name:"set_memory_request_to_limit" default:"false"`
 }

 // MustMarshal marshals the config or panics
@@ -158,7 +158,7 @@ func SamePDB(cur, new *policybeta1.PodDisruptionBudget) (match bool, reason stri
	//TODO: improve comparison
	match = reflect.DeepEqual(new.Spec, cur.Spec)
	if !match {
-		reason = "new service spec doesn't match the current one"
+		reason = "new PDB spec doesn't match the current one"
	}

	return