update reference docs

Felix Kunde 2019-07-04 21:21:12 +02:00
parent d1d2341477
commit 5d0818c402
3 changed files with 44 additions and 49 deletions


@@ -72,10 +72,8 @@ Tiller) in your local cluster you can install the operator chart. You can define
a release name that is prepended to the operator resource's names.
Use `--name zalando` to match with the default service account name as older
-operator versions do not support custom names for service accounts. When relying
-solely on the CRD-based configuration use the [values-crd yaml file](../charts/values-crd.yaml)
-and comment the ConfigMap template in the [helmignore](../charts/.helmignore)
-file.
+operator versions do not support custom names for service accounts. To use
+CRD-based configuration use the [values-crd yaml file](../charts/values-crd.yaml).
```bash
# 1) initialize helm
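# 2) install the chart with the CRD-based values file, matching the text above
#    (a sketch: the chart directory path is an assumption, not shown in this diff)
helm install --name zalando -f charts/values-crd.yaml charts/postgres-operator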


@@ -1,3 +1,5 @@
# Cluster manifest reference
Individual postgres clusters are described by the Kubernetes *cluster manifest*
that has the structure defined by the `postgres CRD` (custom resource
definition). The following section describes the structure of the manifest and
@@ -201,10 +203,9 @@ explanation of `ttl` and `loop_wait` parameters.
## Postgres container resources
-Those parameters define [CPU and memory requests and
-limits](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
+Those parameters define [CPU and memory requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
for the postgres container. They are grouped under the `resources` top-level
-key. There are two subgroups, `requests` and `limits`.
+key with subgroups `requests` and `limits`.
### Requests
@@ -218,7 +219,7 @@ CPU and memory requests for the postgres container.
memory requests for the postgres container. Optional, overrides the
`default_memory_request` operator configuration parameter. Optional.
-#### Limits
+### Limits
CPU and memory limits for the postgres container.
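Put together, the `resources` key with its `requests` and `limits` subgroups might look like this in a cluster manifest (a sketch; the values are illustrative, not operator defaults):

```yaml
spec:
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: "1"
      memory: 500Mi
```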
@@ -267,7 +268,7 @@ under the `clone` top-level key and do not affect the already running cluster.
to enable path-style addressing (i.e., http://s3.amazonaws.com/BUCKET/KEY) when connecting to an S3-compatible service
that lacks support for sub-domain style bucket URLs (i.e., http://BUCKET.s3.amazonaws.com/KEY). Optional.
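The path-style option described above could appear in a manifest's `clone` section roughly like this (a sketch; the parameter name `s3_force_path_style`, the source cluster name, and the timestamp are assumptions not shown in this diff):

```yaml
spec:
  clone:
    cluster: acid-source-cluster           # invented source cluster name
    timestamp: "2019-07-01T12:00:00+02:00" # invented point-in-time target
    s3_force_path_style: true              # assumed parameter name
```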
-### EBS volume resizing
+## EBS volume resizing
Those parameters are grouped under the `volume` top-level key and define the
properties of the persistent storage that stores postgres data.
@@ -282,7 +283,7 @@ properties of the persistent storage that stores postgres data.
documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/)
for the details on storage classes. Optional.
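A sketch of the `volume` key with the size and storage class properties described here (values are illustrative):

```yaml
spec:
  volume:
    size: 10Gi
    storageClass: gp2   # optional; see the storage classes documentation
```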
-### Sidecar definitions
+## Sidecar definitions
Those parameters are defined under the `sidecars` key. They consist of a list
of dictionaries, each defining one sidecar (an extra container running
@@ -300,16 +301,11 @@ defined in the sidecar dictionary:
(https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/)
for environment variables. Optional.
-* **resources** see below. Optional.
+* **resources**
+[CPU and memory requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container)
+for each sidecar container. Optional.
-#### Sidecar container resources
-Those parameters define [CPU and memory requests and
-limits](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
-for the sidecar container. They are grouped under the `resources` key for each sidecar.
-There are two subgroups, `requests` and `limits`.
-##### Requests
+### Requests
@@ -321,7 +317,7 @@ CPU and memory requests for the sidecar container.
memory requests for the sidecar container. Optional, overrides the
`default_memory_request` operator configuration parameter. Optional.
-##### Limits
+### Limits
CPU and memory limits for the sidecar container.
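A sidecar entry with its own `resources` subgroups could then look like this (a sketch; the `name` and `image` keys and all values are invented for illustration, only `env` and `resources` are named in the text above):

```yaml
spec:
  sidecars:
    - name: log-shipper                  # invented sidecar name
      image: example.org/log-shipper:1.0 # invented image
      resources:
        requests:
          cpu: 50m
          memory: 50Mi
        limits:
          cpu: 200m
          memory: 100Mi
```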


@@ -1,3 +1,5 @@
# Configuration parameters
There are two mutually-exclusive methods to set the Postgres Operator
configuration.
@@ -32,10 +34,10 @@ configuration.
kubectl create -f manifests/postgresql-operator-default-configuration.yaml
kubectl get operatorconfigurations postgresql-operator-default-configuration -o yaml
```
-Note that the operator first attempts to register the CRD of the `OperatorConfiguration`
-and then waits for an instance to be created. In between these two event the
-operator pod may be failing since it cannot fetch the not-yet-existing
-`OperatorConfiguration` instance.
+Note that the operator first attempts to register the CRD of the
+`OperatorConfiguration` and then waits for an instance to be created. In
+between these two events the operator pod may be failing since it cannot fetch
+the not-yet-existing `OperatorConfiguration` instance.
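For illustration, a minimal `OperatorConfiguration` instance might look like the following (a sketch; the apiVersion and field values are assumptions based on the default manifest referenced above, not taken from this diff):

```yaml
apiVersion: "acid.zalan.do/v1"   # assumed API group/version
kind: OperatorConfiguration
metadata:
  name: postgresql-operator-default-configuration
configuration:
  kubernetes:
    pod_terminate_grace_period: 5m
```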
The CRD-based configuration is more powerful than the one based on ConfigMaps
and should be used unless there is a compatibility requirement to use an already
@@ -145,8 +147,7 @@ configuration they are grouped under the `kubernetes` key.
be used. The default is empty.
* **pod_terminate_grace_period**
-Postgres pods are [terminated
-forcefully](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods)
+Postgres pods are [terminated forcefully](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods)
after this timeout. The default is `5m`.
* **watched_namespace**
@@ -229,8 +230,9 @@ configuration they are grouped under the `kubernetes` key.
be defined in advance. Default is empty (use the default priority class).
* **spilo_fsgroup**
-the Persistent Volumes for the spilo pods in the StatefulSet will be owned and writable by the group ID specified.
-This is required to run Spilo as a non-root process, but requires a custom spilo image. Note the FSGroup of a Pod
+the Persistent Volumes for the spilo pods in the StatefulSet will be owned and
+writable by the group ID specified. This is required to run Spilo as a
+non-root process, but requires a custom spilo image. Note the FSGroup of a Pod
cannot be changed without recreating a new Pod.
* **spilo_privileged**
@@ -400,6 +402,26 @@ yet officially supported.
* **aws_region**
AWS region used to store EBS volumes. The default is `eu-central-1`.
+## Logical backup
+These parameters configure a k8s cron job managed by the operator to produce
+Postgres logical backups. In the CRD-based configuration those parameters are
+grouped under the `logical_backup` key.
+* **logical_backup_schedule**
+Backup schedule in the cron format. Please take [the reference schedule format](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#schedule) into account. Default: "30 00 \* \* \*"
+* **logical_backup_docker_image**
+An image for pods of the logical backup job. The [example image](../../docker/logical-backup/Dockerfile)
+runs `pg_dumpall` on a replica if possible and uploads compressed results to
+an S3 bucket under the key `/spilo/pg_cluster_name/cluster_k8s_uuid/logical_backups`.
+The default image is the same image built with the Zalando-internal CI
+pipeline. Default: "registry.opensource.zalan.do/acid/logical-backup"
+* **logical_backup_s3_bucket**
+S3 bucket to store backup results. The bucket has to be present and
+accessible by Postgres pods. Default: empty.
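In the CRD-based configuration the parameters above would be grouped under `logical_backup` roughly like this (a sketch; the bucket name is invented, the other values are the documented defaults):

```yaml
configuration:
  logical_backup:
    logical_backup_schedule: "30 00 * * *"
    logical_backup_docker_image: registry.opensource.zalan.do/acid/logical-backup
    logical_backup_s3_bucket: my-backup-bucket  # invented; must exist and be reachable from Postgres pods
```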
## Debugging the operator
Options to aid debugging of the operator itself. Grouped under the `debug` key.
@@ -514,24 +536,3 @@ scalyr sidecar. In the CRD-based configuration they are grouped under the
* **scalyr_memory_limit**
Memory limit value for the Scalyr sidecar. The default is `1Gi`.
-## Logical backup
-These parameters configure a k8s cron job managed by the operator to produce
-Postgres logical backups. In the CRD-based configuration those parameters are
-grouped under the `logical_backup` key.
-* **logical_backup_schedule**
-Backup schedule in the cron format. Please take [the reference schedule format](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#schedule) into account. Default: "30 00 \* \* \*"
-* **logical_backup_docker_image**
-An image for pods of the logical backup job. The [example image](../../docker/logical-backup/Dockerfile)
-runs `pg_dumpall` on a replica if possible and uploads compressed results to
-an S3 bucket under the key `/spilo/pg_cluster_name/cluster_k8s_uuid/logical_backups`.
-The default image is the same image built with the Zalando-internal CI
-pipeline. Default: "registry.opensource.zalan.do/acid/logical-backup"
-* **logical_backup_s3_bucket**
-S3 bucket to store backup results. The bucket has to be present and
-accessible by Postgres pods. Default: empty.