fix typos, headings and code alignment in docs

Felix Kunde 2019-07-10 14:50:45 +02:00
parent 7db4ce67ae
commit 82adf956ee
9 changed files with 153 additions and 140 deletions


@ -40,7 +40,7 @@ There is a browser-friendly version of this documentation at
The Postgres Operator made it to the [Google Summer of Code 2019](https://summerofcode.withgoogle.com/)!
As a brand new mentoring organization, we are now looking for our first mentees.
Check [our ideas](https://github.com/zalando/postgres-operator/blob/master/docs/gsoc-2019/ideas.md#google-summer-of-code-2019)
Check [our ideas](docs/gsoc-2019/ideas.md#google-summer-of-code-2019)
and start discussion in [the issue tracker](https://github.com/zalando/postgres-operator/issues).
And don't forget to spread the word about our GSoC participation to attract even
more students.


@ -16,5 +16,5 @@ maintainers:
- name: kimxogus
email: kgyoo8232@gmail.com
sources:
- https://github.com/zalando-incubator/postgres-operator
- https://github.com/zalando/postgres-operator
engine: gotpl


@ -12,8 +12,8 @@ the `test` namespace, run the following before deploying the operator's
manifests:
```bash
$ kubectl create namespace test
$ kubectl config set-context $(kubectl config current-context) --namespace=test
kubectl create namespace test
kubectl config set-context $(kubectl config current-context) --namespace=test
```
All subsequent `kubectl` commands will work with the `test` namespace. The
@ -24,16 +24,15 @@ needs to be adjusted to the non-default value.
### Specify the namespace to watch
Watching a namespace for an operator means tracking requests to change
Postgresql clusters in the namespace such as "increase the number of Postgresql
replicas to 5" and reacting to the requests, in this example by actually
scaling up.
Watching a namespace for an operator means tracking requests to change Postgres
clusters in the namespace such as "increase the number of Postgres replicas to
5" and reacting to the requests, in this example by actually scaling up.
By default, the operator watches the namespace it is deployed to. You can
change this by setting the `WATCHED_NAMESPACE` var in the `env` section of the
[operator deployment](../manifests/postgres-operator.yaml) manifest or by
altering the `watched_namespace` field in the operator
[ConfigMap](../manifests/configmap.yaml#L6).
[ConfigMap](../manifests/configmap.yaml#L79).
In the case both are set, the env var takes the precedence. To make the
operator listen to all namespaces, explicitly set the field/env var to "`*`".
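As a sketch (assuming the operator was deployed from the example manifests, where
the deployment is called `postgres-operator`), the env var could be set like this:

```bash
# watch only the "test" namespace
kubectl set env deployment/postgres-operator WATCHED_NAMESPACE=test
# or watch all namespaces
kubectl set env deployment/postgres-operator WATCHED_NAMESPACE='*'
```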
@ -80,10 +79,10 @@ to function under access control restrictions. To deploy the operator with this
RBAC policy use:
```bash
$ kubectl create -f manifests/configmap.yaml
$ kubectl create -f manifests/operator-service-account-rbac.yaml
$ kubectl create -f manifests/postgres-operator.yaml
$ kubectl create -f manifests/minimal-postgres-manifest.yaml
kubectl create -f manifests/configmap.yaml
kubectl create -f manifests/operator-service-account-rbac.yaml
kubectl create -f manifests/postgres-operator.yaml
kubectl create -f manifests/minimal-postgres-manifest.yaml
```
Note that the service account is named `zalando-postgres-operator`. You may have
@ -103,10 +102,10 @@ operator's ConfigMap to run Postgres clusters.
By default `postgresql` custom resources can only be listed and changed by
cluster admins. To allow read and/or write access to other human users apply
the `ùser-facing-clusterrole` manifest:
the `user-facing-clusterrole` manifest:
```bash
$ kubectl create -f manifests/user-facing-clusterroles.yaml
kubectl create -f manifests/user-facing-clusterroles.yaml
```
It creates zalando-postgres-operator:user:view, :edit and :admin clusterroles
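As an illustration, one of these roles could be granted to a (hypothetical) human
user with a cluster role binding:

```bash
# read-only access to postgresql resources for user jdoe (name is hypothetical)
kubectl create clusterrolebinding postgres-view-jdoe \
  --clusterrole=zalando-postgres-operator:user:view \
  --user=jdoe@example.com
```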
@ -121,7 +120,7 @@ and configure the required toleration in the operator ConfigMap.
As an example you can set the following node taint:
```bash
$ kubectl taint nodes <nodeName> postgres=:NoSchedule
kubectl taint nodes <nodeName> postgres=:NoSchedule
```
And configure the toleration for the Postgres pods by adding the following line
@ -191,7 +190,8 @@ The PDB is only relaxed in two scenarios:
The PDB is still in place having `MinAvailable` set to `0`. If enabled it will
be automatically set to `1` on scale up. Disabling PDBs helps avoiding blocking
Kubernetes upgrades in managed K8s environments at the cost of prolonged DB
downtime. See PR #384 for the use case.
downtime. See PR [#384](https://github.com/zalando/postgres-operator/pull/384)
for the use case.
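To check what the operator configured for a running cluster, you can inspect the
budget it created; a sketch, assuming the default PDB name format
`postgres-{cluster}-pdb`:

```bash
kubectl get poddisruptionbudgets
# show MinAvailable for the example cluster's PDB
kubectl get poddisruptionbudget postgres-acid-minimal-cluster-pdb \
  -o jsonpath='{.spec.minAvailable}'
```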
## Add cluster-specific labels
@ -294,7 +294,7 @@ instances. By default, both parameters are set to `-1`.
## Load balancers and allowed IP ranges
For any Postgresql/Spilo cluster, the operator creates two separate K8s
For any Postgres/Spilo cluster, the operator creates two separate K8s
services: one for the master pod and one for replica pods. To expose these
services to an outer network, one can attach load balancers to them by setting
`enableMasterLoadBalancer` and/or `enableReplicaLoadBalancer` to `true` in the
@ -303,7 +303,7 @@ manifest, the operator configmap's settings `enable_master_load_balancer` and
`enable_replica_load_balancer` apply. Note that the operator settings affect
all Postgresql services running in all namespaces watched by the operator.
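As a sketch, the master load balancer could be enabled on an existing cluster by
patching its manifest (the example cluster name `acid-minimal-cluster` is assumed):

```bash
kubectl patch postgresql acid-minimal-cluster --type merge \
  -p '{"spec": {"enableMasterLoadBalancer": true}}'
# the master service should eventually receive an external address
kubectl get svc -l application=spilo,spilo-role=master
```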
To limit the range of IP adresses that can reach a load balancer, specify the
To limit the range of IP addresses that can reach a load balancer, specify the
desired ranges in the `allowedSourceRanges` field (applies to both master and
replica load balancers). To prevent exposing load balancers to the entire
Internet, this field is set at cluster creation time to `127.0.0.1/32` unless
@ -393,14 +393,14 @@ of the backup cron job.
`cronjobs` resource from the `batch` API group for the operator service account.
See [example RBAC](../manifests/operator-service-account-rbac.yaml)
## Access to cloud resources from clusters in non cloud environment
## Access to cloud resources from clusters in non-cloud environment
To access cloud resources like S3 from a cluster in a bare metal setup you can
use `additional_secret_mount` and `additional_secret_mount_path` configuration
parameters. With this you can provision cloud credentials to the containers in
the pods of the StatefulSet. It works by mounting a volume from the given
secret into the pod, which can then be accessed in the container at the
configured mount path. Via [Custum Pod Environment Variables](#custom-pod-environment-variables)
configured mount path. Via [Custom Pod Environment Variables](#custom-pod-environment-variables)
you can then point the different cloud SDKs (AWS, GCP etc.) to this mounted
secret. With these credentials the cloud SDK can then access cloud resources to
upload logs etc.
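A sketch of how the two parameters could be set in the operator ConfigMap; the
ConfigMap name follows the example manifests and the mount path is just an
illustration:

```bash
kubectl patch configmap postgres-operator --type merge \
  -p '{"data": {"additional_secret_mount": "some-cloud-creds", "additional_secret_mount_path": "/meta/credentials"}}'
# restart the operator pod so it picks up the changed configuration
kubectl delete pod -l name=postgres-operator
```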
@ -410,4 +410,4 @@ A secret can be pre-provisioned in different ways:
* Generic secret created via `kubectl create secret generic some-cloud-creds --from-file=some-cloud-credentials-file.json`
* Automatically provisioned via a Controller like [kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller).
This controller would then also rotate the credentials. Please visit the
documention for more information.
documentation for more information.


@ -19,10 +19,10 @@ Given the schema above, the Postgres Operator source code located at
`~/go/src/github.com/zalando/postgres-operator`.
```bash
$ export GOPATH=~/go
$ mkdir -p ${GOPATH}/src/github.com/zalando/
$ cd ${GOPATH}/src/github.com/zalando/
$ git clone https://github.com/zalando/postgres-operator.git
export GOPATH=~/go
mkdir -p ${GOPATH}/src/github.com/zalando/
cd ${GOPATH}/src/github.com/zalando/
git clone https://github.com/zalando/postgres-operator.git
```
## Building the operator
@ -30,33 +30,33 @@ Given the schema above, the Postgres Operator source code located at
You need Glide to fetch all dependencies. Install it with:
```bash
$ make tools
make tools
```
Next, install dependencies with glide by issuing:
```bash
$ make deps
make deps
```
This would take a while to complete. You have to redo `make deps` every time
your dependency list changes, i.e. after adding a new library dependency.
Build the operator docker image and pushing it to Pier One:
Build the operator with the `make docker` command. You may define the TAG
variable to assign an explicit tag to your docker image and the IMAGE to set
the image name. By default, the tag is computed with
`git describe --tags --always --dirty` and the image is
`registry.opensource.zalan.do/acid/postgres-operator`
```bash
$ make docker push
export TAG=$(git describe --tags --always --dirty)
make docker
```
You may define the TAG variable to assign an explicit tag to your docker image
and the IMAGE to set the image name. By default, the tag is computed with
`git describe --tags --always --dirty` and the image is
`pierone.stups.zalan.do/acid/postgres-operator`
Building the operator binary (for testing the out-of-cluster option):
```bash
$ make
make
```
The binary will be placed into the build directory.
@ -64,20 +64,18 @@ The binary will be placed into the build directory.
## Deploying self build image
The fastest way to run and test your docker image locally is to reuse the docker
from [minikube]((https://github.com/kubernetes/minikube/releases)) or use the
from [minikube](https://github.com/kubernetes/minikube/releases) or use the
`load docker-image` from [kind](https://kind.sigs.k8s.io/). The following steps
will get you the docker image built and deployed.
```bash
# minikube
$ eval $(minikube docker-env)
$ export TAG=$(git describe --tags --always --dirty)
$ make docker
eval $(minikube docker-env)
make docker
# kind
$ export TAG=$(git describe --tags --always --dirty)
$ make docker
$ kind load docker-image <image> --name <kind-cluster-name>
make docker
kind load docker-image <image> --name <kind-cluster-name>
```
Then create a new Postgres Operator deployment. You can reuse the provided
@ -85,12 +83,12 @@ manifest but replace the version and tag. Don't forget to also apply
configuration and RBAC manifests first, e.g.:
```bash
$ kubectl create -f manifests/configmap.yaml
$ kubectl create -f manifests/operator-service-account-rbac.yaml
$ sed -e "s/\(image\:.*\:\).*$/\1$TAG/" manifests/postgres-operator.yaml | kubectl create -f -
kubectl create -f manifests/configmap.yaml
kubectl create -f manifests/operator-service-account-rbac.yaml
sed -e "s/\(image\:.*\:\).*$/\1$TAG/" manifests/postgres-operator.yaml | kubectl create -f -
# check if the operator is coming up
$ kubectl get pod -l name=postgres-operator
kubectl get pod -l name=postgres-operator
```
## Code generation
@ -124,15 +122,17 @@ the developer's laptop (and not in a docker container).
There is a web interface in the operator to observe its internal state. The
operator listens on port 8080. It is possible to expose it to the
localhost:8080 by doing:
`localhost:8080` by doing:
$ kubectl --context minikube port-forward $(kubectl --context minikube get pod -l name=postgres-operator -o jsonpath={.items..metadata.name}) 8080:8080
```bash
kubectl --context minikube port-forward $(kubectl --context minikube get pod -l name=postgres-operator -o jsonpath={.items..metadata.name}) 8080:8080
```
The inner 'query' gets the name of the Postgres Operator pod, and the outer
The inner query gets the name of the Postgres Operator pod, and the outer one
enables port forwarding. Afterwards, you can access the operator API with:
```
$ curl --location http://127.0.0.1:8080/$endpoint | jq .
```bash
curl --location http://127.0.0.1:8080/$endpoint | jq .
```
The available endpoints are listed below. Note that the worker ID is an integer
@ -165,15 +165,15 @@ The operator also supports pprof endpoints listed at the
* /debug/pprof/trace
It's possible to attach a debugger to troubleshoot postgres-operator inside a
docker container. It's possible with gdb and
[delve](https://github.com/derekparker/delve). Since the latter one is a
docker container. It's possible with [gdb](https://www.gnu.org/software/gdb/)
and [delve](https://github.com/derekparker/delve). Since the latter one is a
specialized debugger for golang, we will use it as an example. To use it you
need:
* Install delve locally
```
$ go get -u github.com/derekparker/delve/cmd/dlv
```bash
go get -u github.com/derekparker/delve/cmd/dlv
```
* Add following dependencies to the `Dockerfile`
@ -184,7 +184,8 @@ RUN go get github.com/derekparker/delve/cmd/dlv
```
* Update the `Makefile` to build the project with debugging symbols. For that
you need to add `gcflags` to a build target for corresponding OS (e.g. linux)
you need to add `gcflags` to a build target for corresponding OS (e.g.
GNU/Linux)
```
-gcflags "-N -l"
@ -199,50 +200,50 @@ CMD ["/root/go/bin/dlv", "--listen=:DLV_PORT", "--headless=true", "--api-version
* Forward the listening port
```
$ kubectl port-forward POD_NAME DLV_PORT:DLV_PORT
```bash
kubectl port-forward POD_NAME DLV_PORT:DLV_PORT
```
* Attach to it
```
$ dlv connect 127.0.0.1:DLV_PORT
```bash
dlv connect 127.0.0.1:DLV_PORT
```
## Unit tests
To run all unit tests, you can simply do:
```
$ go test ./...
```bash
go test ./...
```
For go 1.9 `vendor` directory would be excluded automatically. For previous
versions you can exclude it manually:
```
$ go test $(glide novendor)
```bash
go test $(glide novendor)
```
In case you need to debug your unit test, it's possible to use delve:
```
$ dlv test ./pkg/util/retryutil/
```bash
dlv test ./pkg/util/retryutil/
Type 'help' for list of commands.
(dlv) c
PASS
```
To test the multinamespace setup, you can use
To test the multi-namespace setup, you can use
```
$ ./run_operator_locally.sh --rebuild-operator
```bash
./run_operator_locally.sh --rebuild-operator
```
It will automatically create an `acid-minimal-cluster` in the namespace `test`.
Then you can for example check the Patroni logs:
```
$ kubectl logs acid-minimal-cluster-0
```bash
kubectl logs acid-minimal-cluster-0
```
## End-to-end tests
@ -260,10 +261,10 @@ End-to-end tests are executed automatically during builds:
```bash
# invoke them from the project's top directory
$ make e2e-run
make e2e-run
# install kind and build test image before first run
$ make e2e-tools e2e-build
make e2e-tools e2e-build
```
End-to-end tests are written in Python and use `flake8` for code quality.
@ -287,10 +288,7 @@ Note: If one option is defined in the operator configuration and in the cluster
[manifest](../manifests/complete-postgres-manifest.yaml), the latter takes
precedence.
So, first define the parameters in:
* the [ConfigMap](../manifests/configmap.yaml) manifest
* the CR's [default configuration](../manifests/postgresql-operator-default-configuration.yaml)
* the Helm chart [values](../charts/postgres-operator/values.yaml)
### Go code
Update the following Go files that obtain the configuration parameter from the
manifest files:
@ -298,12 +296,30 @@ manifest files:
* [operator_config.go](../pkg/controller/operator_config.go)
* [config.go](../pkg/util/config/config.go)
Postgres manifest parameters are defined in the [api package](../pkg/apis/acid.zalan.do/v1/postgresql_type.go).
The operator behavior has to be implemented at least in [k8sres.go](../pkg/cluster/k8sres.go).
Please, reflect your changes in tests, for example in:
* [config_test.go](../pkg/util/config/config_test.go)
* [k8sres_test.go](../pkg/cluster/k8sres_test.go)
* [util_test.go](../pkg/apis/acid.zalan.do/v1/util_test.go)
Finally, document the new configuration option(s) for the operator in its
[reference](reference/operator_parameters.md) document and explain the feature
in the [administrator docs](administrator.md).
### Updating manifest files
For the CRD-based configuration, please update the following files:
* the default [OperatorConfiguration](../manifests/postgresql-operator-default-configuration.yaml)
* the Helm chart's [values-crd file](../charts/postgres-operator/values.yaml)
Reflect the changes in the ConfigMap configuration as well (note that numeric
and boolean parameters have to use double quotes here):
* [ConfigMap](../manifests/configmap.yaml) manifest
* the Helm chart's default [values file](../charts/postgres-operator/values.yaml)
### Updating documentation
Finally, add a section for each new configuration option and/or cluster manifest
parameter in the reference documents:
* [config reference](reference/operator_parameters.md)
* [manifest reference](reference/cluster_manifest.md)
It also helps users if new features are explained with examples in the
[administrator docs](administrator.md).


@ -1,4 +1,4 @@
# Introduction
# Concepts
The Postgres [operator](https://coreos.com/blog/introducing-operators.html)
manages PostgreSQL clusters on Kubernetes (K8s):
@ -8,10 +8,10 @@ manages PostgreSQL clusters on Kubernetes (K8s):
user submits a new manifest, the operator fetches that manifest and spawns a
new Postgres cluster along with all necessary entities such as K8s
StatefulSets and Postgres roles. See this
[Postgres cluster manifest](https://github.com/zalando/postgres-operator/blob/master/manifests/complete-postgres-manifest.yaml)
[Postgres cluster manifest](../manifests/complete-postgres-manifest.yaml)
for settings that a manifest may contain.
2. The operator also watches updates to [its own configuration](https://github.com/zalando/postgres-operator/blob/master/manifests/configmap.yaml)
2. The operator also watches updates to [its own configuration](../manifests/configmap.yaml)
and alters running Postgres clusters if necessary. For instance, if a pod
docker image is changed, the operator carries out the rolling update. That
is, the operator re-spawns one-by-one pods of each StatefulSet it manages
@ -24,9 +24,7 @@ manages PostgreSQL clusters on Kubernetes (K8s):
manifests and its own config. This enables easy integration in automated
deploy pipelines with no access to K8s directly.
## Concepts
### Scope
## Scope
The scope of the Postgres Operator is on provisioning, modifying configuration
and cleaning up Postgres clusters that use Patroni, basically to make it easy
@ -56,8 +54,7 @@ cluster pod, so let's zoom in:
These two diagrams should help you to understand the basics of what kind of
functionality the operator provides.
### Status
## Status
This project is currently in active development. It is however already
[used internally by Zalando](https://jobs.zalando.com/tech/blog/postgresql-in-a-time-of-kubernetes/)
@ -79,4 +76,4 @@ Please, report any issues discovered to https://github.com/zalando/postgres-oper
4. "Blue elephant on-demand: Postgres + Kubernetes" talk by Oleksii Kliukin and Jan Mussler, FOSDEM 2018: [video](https://fosdem.org/2018/schedule/event/blue_elephant_on_demand_postgres_kubernetes/) | [slides (pdf)](https://www.postgresql.eu/events/fosdem2018/sessions/session/1735/slides/59/FOSDEM%202018_%20Blue_Elephant_On_Demand.pdf)
3. "Kube-Native Postgres" talk by Josh Berkus, KubeCon 2017: [video](https://www.youtube.com/watch?v=Zn1vd7sQ_bc)
5. "Kube-Native Postgres" talk by Josh Berkus, KubeCon 2017: [video](https://www.youtube.com/watch?v=Zn1vd7sQ_bc)


@ -15,21 +15,19 @@ For local tests we recommend to use one of the following solutions:
To interact with the K8s infrastructure, install its CLI runtime [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl).
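A quick sanity check that the tooling is in place might look like this:

```bash
kubectl version --client
minikube version   # or: kind version
```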
This quickstart assumes that you haved started minikube or created a local kind
cluster. Note that you can also use built-in K8s support in the Docker
Desktop for Mac to follow the steps of this tutorial. You would have to replace
This quickstart assumes that you have started minikube or created a local kind
cluster. Note that you can also use built-in K8s support in the Docker Desktop
for Mac to follow the steps of this tutorial. You would have to replace
`minikube start` and `minikube delete` with your launch actions for the Docker
built-in K8s support.
## Configuration Options
If you want to configure the Postgres Operator it must happen before deploying a
Postgres cluster. This can happen in two ways: Via a ConfigMap or a custom
If you want to configure the Postgres Operator it must happen before deploying
a Postgres cluster. This can work in two ways: Via a ConfigMap or a custom
`OperatorConfiguration` object. More details on configuration can be found
[here](reference/operator_parameters.md).
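To see which configuration source an existing installation uses, something like
the following can help (a sketch; resource names assume the example manifests):

```bash
# ConfigMap-based configuration
kubectl get configmap postgres-operator -o yaml
# CRD-based configuration, only present if an OperatorConfiguration object was created
kubectl get operatorconfigurations -o yaml
```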
## Deployment options
The Postgres Operator can be deployed in different ways:
@ -55,9 +53,14 @@ kubectl create -f manifests/operator-service-account-rbac.yaml # identity and p
kubectl create -f manifests/postgres-operator.yaml # deployment
```
For convenience, we have automated starting the operator and submitting the
`acid-minimal-cluster`. From inside the cloned repository execute the
`run_operator_locally` shell script.
When using kubectl 1.14 or newer, the mentioned manifests can also be
bundled in a [Kustomization](https://github.com/kubernetes-sigs/kustomize) so
that one command is enough.
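A minimal sketch of such a kustomization (the file itself is not part of the
repository; manifest names are taken from the commands above):

```bash
cat > manifests/kustomization.yaml <<EOF
resources:
- configmap.yaml
- operator-service-account-rbac.yaml
- postgres-operator.yaml
EOF
kubectl apply -k manifests/
```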
For convenience, we have automated starting the operator with minikube and
submitting the [`acid-minimal-cluster`](../manifests/minimal-postgres-manifest.yaml).
From inside the cloned repository execute the `run_operator_locally` shell
script.
```bash
./run_operator_locally.sh
@ -96,7 +99,6 @@ kubectl create -f https://operatorhub.io/install/postgres-operator.yaml
This installs the operator in the `operators` namespace. More information can be
found on [operatorhub.io](https://operatorhub.io/operator/postgres-operator).
## Create a Postgres cluster
Starting the operator may take a few seconds. Check if the operator pod is
@ -134,7 +136,6 @@ kubectl get pods -l application=spilo -L spilo-role
kubectl get svc -l application=spilo -L spilo-role
```
## Connect to the Postgres cluster via psql
You can create a port-forward on a database pod to connect to Postgres. See the
@ -155,7 +156,6 @@ export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials
psql -U postgres
```
## Delete a Postgres cluster
To delete a Postgres cluster simply delete the `postgresql` custom resource.
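For the example cluster used in this guide that boils down to:

```bash
kubectl delete postgresql acid-minimal-cluster
```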


@ -4,9 +4,9 @@ Individual Postgres clusters are described by the Kubernetes *cluster manifest*
that has the structure defined by the `postgresql` CRD (custom resource
definition). The following section describes the structure of the manifest and
the purpose of individual keys. You can take a look at the examples of the
[minimal](https://github.com/zalando/postgres-operator/blob/master/manifests/minimal-postgres-manifest.yaml)
[minimal](../manifests/minimal-postgres-manifest.yaml)
and the
[complete](https://github.com/zalando/postgres-operator/blob/master/manifests/complete-postgres-manifest.yaml)
[complete](../manifests/complete-postgres-manifest.yaml)
cluster manifests.
When Kubernetes resources, such as memory, CPU or volumes, are configured,


@ -10,12 +10,12 @@ configuration.
maps. String values containing ':' should be enclosed in quotes. The
configuration is flat, parameter group names below are not reflected in the
configuration structure. There is an
[example](https://github.com/zalando/postgres-operator/blob/master/manifests/configmap.yaml)
[example](../manifests/configmap.yaml)
* CRD-based configuration. The configuration is stored in a custom YAML
manifest. The manifest is an instance of the custom resource definition (CRD)
called `OperatorConfiguration`. The operator registers this CRD during the
start and uses it for configuration if the [operator deployment manifest ](https://github.com/zalando/postgres-operator/blob/master/manifests/postgres-operator.yaml#L21)
start and uses it for configuration if the [operator deployment manifest](../manifests/postgres-operator.yaml#L36)
sets the `POSTGRES_OPERATOR_CONFIGURATION_OBJECT` env variable to a non-empty
value. The variable should point to the `postgresql-operator-configuration`
object in the operator's namespace.
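Switching an existing deployment to the CRD-based configuration could then look
like this (a sketch; the deployment name is taken from the example manifests):

```bash
kubectl set env deployment/postgres-operator \
  POSTGRES_OPERATOR_CONFIGURATION_OBJECT=postgresql-operator-configuration
```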
@ -24,7 +24,7 @@ configuration.
simply represented in the usual YAML way. There are no default values built-in
in the operator, each parameter that is not supplied in the configuration
receives an empty value. In order to create your own configuration just copy
the [default one](https://github.com/zalando/postgres-operator/blob/master/manifests/postgresql-operator-default-configuration.yaml)
the [default one](../manifests/postgresql-operator-default-configuration.yaml)
and change it.
To test the CRD-based configuration locally, use the following
@ -58,11 +58,11 @@ parameters, those parameters have no effect and are replaced by the
`CRD_READY_WAIT_INTERVAL` and `CRD_READY_WAIT_TIMEOUT` environment variables.
They will be deprecated and removed in the future.
For the configmap operator configuration, the [default parameter values](https://github.com/zalando-incubator/postgres-operator/blob/master/pkg/util/config/config.go#L14)
For the configmap configuration, the [default parameter values](../pkg/util/config/config.go#L14)
mentioned here are likely to be overwritten in your local operator installation
via your local version of the operator configmap. In the case you use the
operator CRD, all the CRD defaults are provided in the
[operator's default configuration manifest](https://github.com/zalando-incubator/postgres-operator/blob/master/manifests/postgresql-operator-default-configuration.yaml)
[operator's default configuration manifest](../manifests/postgresql-operator-default-configuration.yaml)
Variable names are underscore-separated words.
@ -122,7 +122,7 @@ Those are top-level keys, containing both leaf keys and groups.
containers with high memory limits due to the lack of memory on Kubernetes
cluster nodes. This affects all containers created by the operator (Postgres,
Scalyr sidecar, and other sidecars); to set resources for the operator's own
container, change the [operator deployment manually](https://github.com/zalando/postgres-operator/blob/master/manifests/postgres-operator.yaml#L13).
container, change the [operator deployment manually](../manifests/postgres-operator.yaml#L20).
The default is `false`.
## Postgres users


@ -5,7 +5,7 @@ Learn how to work with the Postgres Operator in a Kubernetes (K8s) environment.
## Create a manifest for a new PostgreSQL cluster
Make sure you have [set up](quickstart.md) the operator. Then you can create a
new Postgres cluster by applying manifest like this [minimal example](https://github.com/zalando/postgres-operator/blob/master/manifests/minimal-postgres-manifest.yaml):
new Postgres cluster by applying manifest like this [minimal example](../manifests/minimal-postgres-manifest.yaml):
```yaml
apiVersion: "acid.zalan.do/v1"
@ -37,13 +37,13 @@ If you have clone the Postgres Operator [repository](https://github.com/zalando/
you can find this example also in the manifests folder:
```bash
$ kubectl create -f manifests/minimal-postgres-manifest.yaml
kubectl create -f manifests/minimal-postgres-manifest.yaml
```
## Watch pods being created
```bash
$ kubectl get pods -w --show-labels
kubectl get pods -w --show-labels
```
## Connect to PostgreSQL
@ -65,11 +65,11 @@ Open another CLI and connect to the database. Use the generated secret of the
in Minikube:
```bash
$ export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials -o 'jsonpath={.data.password}' | base64 -d)
$ psql -U postgres -p 6432
export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials -o 'jsonpath={.data.password}' | base64 -d)
psql -U postgres -p 6432
```
# Defining database roles in the operator
## Defining database roles in the operator
Postgres Operator allows defining roles to be created in the resulting database
cluster. It covers three use-cases:
@ -84,10 +84,10 @@ owning the database cluster.
In the next sections, we will cover those use cases in more details.
## Manifest roles
### Manifest roles
Manifest roles are defined directly in the cluster manifest. See
[minimal postgres manifest](https://github.com/zalando/postgres-operator/blob/master/manifests/minimal-postgres-manifest.yaml)
[minimal postgres manifest](../manifests/minimal-postgres-manifest.yaml)
for an example of `zalando` role, defined with `superuser` and `createdb` flags.
Manifest roles are defined as a dictionary, with a role name as a key and a
@ -113,7 +113,7 @@ from the secret, without ever sharing it outside of the cluster.
At the moment it is not possible to define membership of the manifest role in
other roles.
## Infrastructure roles
### Infrastructure roles
An infrastructure role is a role that should be present on every PostgreSQL
cluster managed by the operator. An example of such a role is a monitoring
@ -122,7 +122,7 @@ user. There are two ways to define them:
* With the infrastructure roles secret only
* With both the secret and the infrastructure role ConfigMap.
### Infrastructure roles secret
#### Infrastructure roles secret
The infrastructure roles secret is specified by the `infrastructure_roles_secret_name`
parameter. The role definition looks like this (values are base64 encoded):
@ -142,7 +142,7 @@ Note that with definitions that solely use the infrastructure roles secret
there is no way to specify role options (like superuser or nologin) or role
memberships. This is where the ConfigMap comes into play.
### Secret plus ConfigMap
#### Secret plus ConfigMap
A [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/)
allows for defining more details regarding the infrastructure roles. Therefore,
@ -178,8 +178,9 @@ Since an infrastructure role is created uniformly on all clusters managed by
the operator, it makes no sense to define it without the password. Such
definitions will be ignored with a prior warning.
See [infrastructure roles secret](https://github.com/zalando/postgres-operator/blob/master/manifests/infrastructure-roles.yaml)
and [infrastructure roles configmap](https://github.com/zalando/postgres-operator/blob/master/manifests/infrastructure-roles-configmap.yaml) for the examples.
See [infrastructure roles secret](../manifests/infrastructure-roles.yaml)
and [infrastructure roles configmap](../manifests/infrastructure-roles-configmap.yaml)
for the examples.
## Use taints and tolerations for dedicated PostgreSQL nodes
@ -206,7 +207,6 @@ You can spin up a new cluster as a clone of the existing one, using a clone
section in the spec. There are two options here:
* Clone directly from a source cluster using `pg_basebackup`
* Clone from an S3 bucket
### Clone directly
@ -351,7 +351,6 @@ variables are always passed to sidecars:
The PostgreSQL volume is shared with sidecars and is mounted at
`/home/postgres/pgdata`.
## InitContainers Support
Each cluster can specify arbitrary init containers to run. These containers can
@ -376,7 +375,6 @@ spec:
`initContainers` accepts full `v1.Container` definition.
## Increase volume size
PostgreSQL operator supports statefulset volume resize if you're using the
@ -414,13 +412,15 @@ size of volumes that correspond to the previously running pods is not changed.
## Logical backups
If you add
You can enable logical backups from the cluster manifest by adding the following
parameter in the spec section:
```
enableLogicalBackup: true
```
to the cluster manifest, the operator will create and sync a k8s cron job to do
periodic logical backups of this particular Postgres cluster. Due to the
[limitation of K8s cron jobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations)
The operator will create and sync a K8s cron job to do periodic logical backups
of this particular Postgres cluster. Due to the [limitation of K8s cron jobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations)
it is highly advisable to set up additional monitoring for this feature; such
monitoring is outside of the scope of operator responsibilities. See
[configuration reference](reference/cluster_manifest.md) and