reflect comments from Rafia

Felix Kunde 2019-07-11 13:31:21 +02:00
parent 1b2ce33d69
commit a87b6edc48
7 changed files with 50 additions and 53 deletions

View File

@@ -243,7 +243,7 @@ serviceAccount:
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
-# When relying solely on the OperatorConfiguration CRD, set this value to "operator"
+# When relying solely on the OperatorConfiguration CRD, this value has to be "operator"
# Otherwise, the operator tries to use the "default" service account which is forbidden
name: operator

View File

@@ -224,8 +224,6 @@ serviceAccount:
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
-# When relying solely on the OperatorConfiguration CRD, set this value to "operator"
-# Otherwise, the operator tries to use the "default" service account which is forbidden
name:
priorityClassName: ""

View File

@@ -65,9 +65,9 @@ namespace. The operator performs **no** further syncing of this account.
## Non-default cluster domain
-If your cluster uses a different DNS domain than `cluster.local`, this needs
-to be set in the operator configuration (`cluster_domain` variable). This is
-used by the operator to connect to the clusters after creation.
+If your cluster uses a DNS domain other than the default `cluster.local`, this
+needs to be set in the operator configuration (`cluster_domain` variable). This
+is used by the operator to connect to the clusters after creation.
## Role-based access control for the operator
@@ -93,14 +93,14 @@ default `operator` account. In the future the operator should ideally be run
under the `zalando-postgres-operator` service account.
The service account defined in `operator-service-account-rbac.yaml` acquires
-some privileges not really used by the operator (i.e. we only need `list` and
-`watch` on `configmaps` resources), this is also done intentionally to avoid
-breaking things if someone decides to configure the same service account in the
+some privileges not used by the operator (i.e. we only need `list` and `watch`
+on `configmaps` resources). This is also done intentionally to avoid breaking
+things if someone decides to configure the same service account in the
operator's ConfigMap to run Postgres clusters.
### Give K8S users access to create/list `postgresqls`
-By default `postgresql` custom resources can only by listed and changed by
+By default `postgresql` custom resources can only be listed and changed by
cluster admins. To allow read and/or write access to other human users apply
the `user-facing-clusterrole` manifest:
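For example, assuming the manifest ships in the repository's `manifests` folder under that name (a sketch, adjust the path to your checkout):

```bash
# Hypothetical path to the user-facing cluster role manifest
kubectl create -f manifests/user-facing-clusterrole.yaml
```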
@@ -337,7 +337,7 @@ creating such roles to Patroni and only establishes relevant secrets.
* **Infrastructure roles** are roles for processes originating from external
systems, e.g. monitoring robots. The operator creates such roles in all Postgres
-clusters it manages assuming that K8s secrets with the relevant
+clusters it manages, assuming that K8s secrets with the relevant
credentials exist beforehand.
* **Per-cluster robot users** are also roles for processes originating from
@@ -395,19 +395,17 @@ See [example RBAC](../manifests/operator-service-account-rbac.yaml)
## Access to cloud resources from clusters in non-cloud environment
-To access cloud resources like S3 from a cluster in a bare metal setup you can
-use `additional_secret_mount` and `additional_secret_mount_path` configuration
-parameters. With this you can provision cloud credentials to the containers in
-the pods of the StatefulSet. This works this way that it mounts a volume from
-the given secret in the pod and this can then accessed in the container over the
-configured mount path. Via [Custom Pod Environment Variables](#custom-pod-environment-variables)
-you can then point the different cloud SDK's (AWS, GCP etc.) to this mounted
-secret. With this credentials the cloud SDK can then access cloud resources to
-upload logs etc.
+To access cloud resources like S3 from a cluster on bare metal you can use
+`additional_secret_mount` and `additional_secret_mount_path` configuration
+parameters. The cloud credentials will be provisioned in the Postgres containers
+by mounting an additional volume from the given secret to database pods. They
+can then be accessed over the configured mount path. Via
+[Custom Pod Environment Variables](#custom-pod-environment-variables) you can
+point different cloud SDKs (AWS, GCP etc.) to this mounted secret, e.g. to
+access cloud resources for uploading logs etc.
A secret can be pre-provisioned in different ways:
* Generic secret created via `kubectl create secret generic some-cloud-creds --from-file=some-cloud-credentials-file.json`
-* Automatically provisioned via a Controller like [kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller).
-This controller would then also rotate the credentials. Please visit the
-documentation for more information.
+* Automatically provisioned via a custom K8s controller like
+[kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller)
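As a minimal sketch, reusing the `some-cloud-creds` secret from the first bullet point, the two parameters could look like this in the `data` section of the operator ConfigMap (the mount path is just an example, not a required default):

```yaml
# Excerpt from the data section of the operator ConfigMap (sketch):
# mount the pre-provisioned secret into all database pods
additional_secret_mount: "some-cloud-creds"
additional_secret_mount_path: "/meta/credentials"  # example path, pick your own
```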

View File

@@ -167,8 +167,7 @@ The operator also supports pprof endpoints listed at the
It's possible to attach a debugger to troubleshoot postgres-operator inside a
docker container. It's possible with [gdb](https://www.gnu.org/software/gdb/)
and [delve](https://github.com/derekparker/delve). Since the latter one is a
-specialized debugger for golang, we will use it as an example. To use it you
-need:
+specialized debugger for Go, we will use it as an example. To use it you need:
* Install delve locally
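One way to do that, assuming a working Go toolchain (check the delve README for the current instructions):

```bash
# Fetch and build the dlv binary into $GOPATH/bin
# (pre-Go-modules style, matching the repository linked above)
go get -u github.com/derekparker/delve/cmd/dlv
```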

View File

@@ -12,17 +12,17 @@ manages PostgreSQL clusters on Kubernetes (K8s):
for settings that a manifest may contain.
2. The operator also watches updates to [its own configuration](../manifests/configmap.yaml)
-and alters running Postgres clusters if necessary. For instance, if a pod
-docker image is changed, the operator carries out the rolling update. That
-is, the operator re-spawns one-by-one pods of each StatefulSet it manages
+and alters running Postgres clusters if necessary. For instance, if the
+Docker image in a pod is changed, the operator carries out the rolling
+update, which means it re-spawns pods of each managed StatefulSet one-by-one
with the new Docker image.
3. Finally, the operator periodically synchronizes the actual state of each
Postgres cluster with the desired state defined in the cluster's manifest.
-4. The operator aims to be hands free and configuration happens only via
-manifests and its own config. This enables easy integration in automated
-deploy pipelines with no access to K8s directly.
+4. The operator aims to be hands-free as configuration works only via manifests.
+This enables easy integration in automated deploy pipelines with no access to
+K8s directly.
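To illustrate point 2: a sketch of triggering such a rolling update by hand, assuming the default ConfigMap name `postgres-operator`, its `docker_image` key and a purely hypothetical image tag:

```bash
# Change the pod Docker image in the operator ConfigMap; the operator
# then re-spawns the pods of each managed StatefulSet one by one
kubectl patch configmap postgres-operator \
  --patch '{"data": {"docker_image": "registry.example.com/spilo:next"}}'
```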
## Scope
@@ -36,8 +36,10 @@ the cluster bootstrap and high availability. The operator is however involved
in some overarching orchestration, like rolling updates to improve the user
experience.
-Monitoring of clusters is not in scope, for this good tools already exist from
-ZMON to Prometheus and more Postgres specific options.
+Monitoring or tuning Postgres is not in scope of the operator in its current
+state. Other tools like [ZMON](https://opensource.zalando.com/zmon/),
+[Prometheus](https://prometheus.io/) or more Postgres-specific options can be
+used to complement it.
## Overview of involved entities

View File

@@ -5,8 +5,9 @@ Operator on a local Kubernetes environment.
## Prerequisites
-The Postgres Operator runs on Kubernetes (K8s) which you have to setup first.
-For local tests we recommend to use one of the following solutions:
+Since the Postgres Operator is designed for the Kubernetes (K8s) framework,
+set it up first. For local tests we recommend using one of the following
+solutions:
* [minikube](https://github.com/kubernetes/minikube/releases), which creates a
single-node K8s cluster inside a VM (requires KVM or VirtualBox),
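With minikube installed, the smallest possible setup is a single command (flags depend on your VM driver):

```bash
# Spin up a local single-node K8s cluster
minikube start
```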
@@ -23,14 +24,14 @@ built-in K8s support.
## Configuration Options
-If you want to configure the Postgres Operator it must happen before deploying
-a Postgres cluster. This can work in two ways: Via a ConfigMap or a custom
+Configuring the Postgres Operator is only possible before deploying a new
+Postgres cluster. This can work in two ways: via a ConfigMap or a custom
`OperatorConfiguration` object. More details on configuration can be found
[here](reference/operator_parameters.md).
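For the ConfigMap-based way this means applying the configuration manifest before (re)deploying the operator, e.g. with the file referenced earlier in these docs:

```bash
# Operator configuration must be in place before the operator
# creates any Postgres cluster
kubectl create -f manifests/configmap.yaml
```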
## Deployment options
-The Postgres Operator can be deployed in different ways:
+The Postgres Operator can be deployed in the following ways:
* Manual deployment
* Helm chart
@@ -54,13 +55,12 @@ kubectl create -f manifests/postgres-operator.yaml # deployment
```
When using kubectl 1.14 or newer the mentioned manifests could also be
-bundled in a [Kustomization](https://github.com/kubernetes-sigs/kustomize) so
-that one command is enough.
+bundled in one [Kustomization](https://github.com/kubernetes-sigs/kustomize)
+manifest.
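A sketch of such a Kustomization manifest, using the manifest file names referenced in these docs (the actual list may differ):

```yaml
# kustomization.yaml: bundle the operator manifests so a single
# `kubectl apply -k .` deploys everything at once
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - configmap.yaml
  - operator-service-account-rbac.yaml
  - postgres-operator.yaml
```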
-For convenience, we have automated starting the operator with minikube and
-submitting the [`acid-minimal-cluster`](../manifests/minimal-postgres-manifest).
-From inside the cloned repository execute the `run_operator_locally` shell
-script.
+For convenience, we have automated starting the operator with minikube using the
+`run_operator_locally` script. It applies the [`acid-minimal-cluster`](../manifests/minimal-postgres-manifest)
+manifest.
```bash
./run_operator_locally.sh
@@ -69,14 +69,14 @@ script.
### Helm chart
Alternatively, the operator can be installed by using the provided [Helm](https://helm.sh/)
-chart which saves you the manual steps. Therefore, you would need to install
-the helm CLI on your machine. After initializing helm (and its server component
-Tiller) in your local cluster you can install the operator chart. You can define
-a release name that is prepended to the operator resource's names.
+chart which saves you the manual steps. For this, install the helm CLI on your
+machine. After initializing helm (and its server component Tiller) in your local
+cluster you can install the operator chart. You can define a release name that
+is prepended to the operator resources' names.
Use `--name zalando` to match with the default service account name as older
operator versions do not support custom names for service accounts. To use
-CRD-based configuration use the [values-crd yaml file](../charts/values-crd.yaml).
+CRD-based configuration you need to specify the [values-crd yaml file](../charts/values-crd.yaml).
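Combining both hints, a hypothetical Helm 2 invocation could look like this (paths as referenced above):

```bash
# Release name "zalando" plus the CRD-based configuration values
helm install --name zalando -f ./charts/values-crd.yaml ./charts/postgres-operator
```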
```bash
# 1) initialize helm
@@ -88,9 +88,9 @@ helm install --name zalando ./charts/postgres-operator
### Operator Lifecycle Manager (OLM)
The [Operator Lifecycle Manager (OLM)](https://github.com/operator-framework/operator-lifecycle-manager)
-has been designed to facilitate the management of K8s operators. You have to
-install it in your K8s environment. When OLM is set up you can simply download
-and deploy the Postgres Operator with the following command:
+has been designed to facilitate management of K8s operators. It has to be
+installed in your K8s environment. When OLM is set up simply download and deploy
+the Postgres Operator with the following command:
```bash
kubectl create -f https://operatorhub.io/install/postgres-operator.yaml

View File

@@ -196,7 +196,7 @@ configuration they are grouped under the `kubernetes` key.
`{username}.{cluster}.credentials.{tprkind}.{tprgroup}`.
* **cluster_domain**
-defines the default dns domain for the kubernetes cluster the operator is
+defines the default DNS domain for the kubernetes cluster the operator is
running in. The default is `cluster.local`. Used by the operator to connect
to the Postgres clusters after creation.
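For example, in a ConfigMap-based setup (the domain is a placeholder; it must match the DNS domain your cluster actually uses, i.e. the kubelet's `clusterDomain`):

```yaml
# Excerpt from the data section of the operator ConfigMap (sketch)
cluster_domain: cluster.example.org
```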