use Spilo 2.0-p4 and update docs

This commit is contained in:
Felix Kunde 2021-02-18 11:30:03 +01:00
parent b7ec5d8d97
commit 77f8c72565
13 changed files with 62 additions and 38 deletions

View File

@@ -65,7 +65,7 @@ We introduce the major version into the backup path to smoothen the [major versi
 The new operator configuration can set a compatibility flag *enable_spilo_wal_path_compat* to make Spilo look for wal segments in the current path but also old format paths.
 This comes at potential performance costs and should be disabled after a few days.
-The newest Spilo 13 image is: `registry.opensource.zalan.do/acid/spilo-13:2.0-p3`
+The newest Spilo 13 image is: `registry.opensource.zalan.do/acid/spilo-13:2.0-p4`
 The last Spilo 12 image is: `registry.opensource.zalan.do/acid/spilo-12:1.6-p5`
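The compatibility flag changed in this hunk is an operator configuration option. A minimal sketch of how it could be set via the operator's ConfigMap (the key name comes from the text above; the manifest structure and the string value format are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator   # name assumed
data:
  # let Spilo also probe the old (pre-2.0) WAL paths during the migration;
  # disable again after a few days to avoid the performance penalty
  enable_spilo_wal_path_compat: "true"
```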

View File

@@ -65,7 +65,7 @@ spec:
       properties:
         docker_image:
           type: string
-          default: "registry.opensource.zalan.do/acid/spilo-13:2.0-p3"
+          default: "registry.opensource.zalan.do/acid/spilo-13:2.0-p4"
         enable_crd_validation:
           type: boolean
           default: true

View File

@@ -32,7 +32,7 @@ configGeneral:
   # Select if setup uses endpoints (default), or configmaps to manage leader (DCS=k8s)
   # kubernetes_use_configmaps: false
   # Spilo docker image
-  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p3
+  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p4
   # max number of instances in Postgres cluster. -1 = no limit
   min_instances: -1
   # min number of instances in Postgres cluster. -1 = no limit

View File

@@ -35,7 +35,7 @@ configGeneral:
   # Select if setup uses endpoints (default), or configmaps to manage leader (DCS=k8s)
   # kubernetes_use_configmaps: "false"
   # Spilo docker image
-  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p3
+  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p4
   # max number of instances in Postgres cluster. -1 = no limit
   min_instances: "-1"
   # min number of instances in Postgres cluster. -1 = no limit

View File

@@ -618,38 +618,35 @@ A secret can be pre-provisioned in different ways:
 * Automatically provisioned via a custom K8s controller like
   [kube-aws-iam-controller](https://github.com/mikkeloscar/kube-aws-iam-controller)
-## WAL archiving and basebackups
+## WAL archiving and physical basebackups
 Spilo is shipped with [WAL-E](https://github.com/wal-e/wal-e) and its successor
 [WAL-G](https://github.com/wal-g/wal-g) to perform WAL archiving. By default,
-WAL-E is used because it is more battle-tested. Additionally to the continuous
-backup stream a [basebackup](https://www.postgresql.org/docs/13/app-pgbasebackup.html)
-is initialized every night and 1am UTC.
+WAL-E is used for backups because it is more battle-tested. In addition to the
+continuous backup stream WAL-E/G pushes a physical base backup every night at
+01:00 am UTC.
 These are the pre-configured settings in the docker image:
-```bash
-BACKUP_NUM_TO_RETAIN: 5
-BACKUP_SCHEDULE: '00 01 * * *'
-USE_WALG_BACKUP: false (true for Azure and SSH)
-USE_WALG_RESTORE: false (true for S3, Azure and SSH)
-```
-Within Postgres you can check the pre-configured commands for archiving and
-restoring WAL files. You can find the log files to the respective commands
-under `$HOME/pgdata/pgroot/pg_log/postgres-?.log`.
 ```bash
 archive_command: `envdir "{WALE_ENV_DIR}" {WALE_BINARY} wal-push "%p"`
 restore_command: `envdir "{{WALE_ENV_DIR}}" /scripts/restore_command.sh "%f" "%p"`
+AWS_ENDPOINT: 'https://s3.AWS_REGION.amazonaws.com:443'
+BACKUP_NUM_TO_RETAIN: 5
+BACKUP_SCHEDULE: '00 01 * * *'
+USE_WALG_BACKUP: false (not set)
+USE_WALG_RESTORE: false (not set)
+WALE_S3_ENDPOINT: 'https+path://s3.AWS_REGION.amazonaws.com:443'
+WALE_S3_PREFIX: 's3://bucket-name/very/long/path'
 ```
-If the prefix is not specified Spilo will generate it from `WAL_S3_BUCKET`.
-When the `AWS_REGION` is set you `AWS_ENDPOINT` and `WALE_S3_ENDPOINT` are
-generated automatically.
-The backup path has to be specified in the operator configuration. You have to
-make sure that Postgres is allowed to send compressed WAL files to the backup
-location, e.g. an S3 bucket. If you want to change some settings you have to
-overwrite Spilo's [environment variables](https://github.com/zalando/spilo/blob/master/ENVIRONMENT.rst)
-using an [extra configmap or secret](#custom-pod-environment-variables).
+Depending on the cloud storage provider different [environment variables](https://github.com/zalando/spilo/blob/master/ENVIRONMENT.rst)
+have to be set for Spilo. Not all of them are generated automatically by the
+operator by changing its configuration. In this case you have to use an
+[extra configmap or secret](#custom-pod-environment-variables).
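Such an extra configmap could look roughly like this (a sketch; the object name and namespace are illustrative, and the keys are Spilo environment variables documented in the linked ENVIRONMENT.rst):

```yaml
# Hypothetical ConfigMap referenced by the operator's pod environment
# setting; each key/value pair becomes an environment variable in the
# Spilo pods and overrides the image's pre-configured default.
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-env-overrides            # name assumed
  namespace: postgres-operator-system
data:
  # switch backups from WAL-E to WAL-G (string values assumed)
  USE_WALG_BACKUP: "true"
  USE_WALG_RESTORE: "true"
  # keep one week of basebackups instead of the default 5
  BACKUP_NUM_TO_RETAIN: "7"
```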
 ### Using AWS S3 or compliant services
@@ -683,7 +680,8 @@ configuration:
 wal_s3_bucket: your-backup-path
 ```
-The referenced IAM role should contain the following privileges:
+The referenced IAM role should contain the following privileges to make sure
+Postgres can send compressed WAL files to the given S3 bucket:
 ```yaml
 PostgresPodRole:
@@ -703,6 +701,21 @@ The referenced IAM role should contain the following privileges:
 - "arn:aws:s3:::your-backup-path/*"
 ```
+This should produce the following settings for the essential environment
+variables:
+```bash
+AWS_ENDPOINT: 'https://s3.eu-central-1.amazonaws.com:443'
+WAL_S3_BUCKET: '/spilo/{WAL_BUCKET_SCOPE_PREFIX}{SCOPE}{WAL_BUCKET_SCOPE_SUFFIX}/wal/{PGVERSION}'
+WALE_S3_ENDPOINT: 'https+path://s3.eu-central-1.amazonaws.com:443'
+WALE_S3_PREFIX: 's3://your-backup-path'
+WALG_S3_PREFIX: like WALE_S3_PREFIX
+```
+If the prefix is not specified Spilo will generate it from `WAL_S3_BUCKET`.
+When `AWS_REGION` is set, `AWS_ENDPOINT` and `WALE_S3_ENDPOINT` are
+generated automatically. `SCOPE` is the Postgres cluster name.
 ### Google Cloud Platform setup
 To configure the operator on GCP, these prerequisites are needed:
@@ -772,6 +785,15 @@ pod_environment_configmap: "postgres-operator-system/pod-env-overrides"
 ...
 ```
+### Restoring physical backups
+If cluster members have to be (re)initialized, restoring physical backups
+happens automatically either from the backup location or by running
+[pg_basebackup](https://www.postgresql.org/docs/13/app-pgbasebackup.html)
+on one of the other running instances (preferably replicas if they do not lag
+behind). You can test restoring backups by [cloning](user.md#how-to-clone-an-existing-postgresql-cluster)
+clusters.
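Such a restore test via cloning could be sketched with a manifest like the following (all names, sizes and the timestamp are invented; with a `timestamp` in the clone section the data is fetched from the backup location up to that point in time):

```yaml
# Hypothetical cluster manifest that restores a copy of acid-test-cluster
# from its WAL archive up to the given timestamp (point-in-time recovery).
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-test-restore
spec:
  teamId: "acid"
  numberOfInstances: 1
  postgresql:
    version: "13"
  volume:
    size: 5Gi
  clone:
    cluster: "acid-test-cluster"
    # timestamp must carry a timezone offset; value is an example
    timestamp: "2021-02-17T12:00:00+01:00"
```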
 ## Logical backups
 The operator can manage K8s cron jobs to run logical backups (SQL dumps) of
@@ -792,11 +814,12 @@ spec:
 There are a few things to consider when using logical backups:
-1. Logical backups should not seen as a proper alternative to basebackups and WAL
-archiving which are described above. At the moment, the operator cannot restore
-logical backups automatically and you do not get point-in-time recovery but only
-snapshots of your data. In its current state, see logical backups as a way to
-quickly create SQL dumps that you can easily restore in an empty test cluster.
+1. Logical backups should not be seen as a proper alternative to basebackups
+and WAL archiving which are described above. At the moment, the operator cannot
+restore logical backups automatically and you do not get point-in-time recovery
+but only snapshots of your data. In its current state, see logical backups as a
+way to quickly create SQL dumps that you can easily restore in an empty test
+cluster.
 2. The [example image](../docker/logical-backup/Dockerfile) implements the backup
 via `pg_dumpall` and upload of compressed and encrypted results to an S3 bucket.
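A sketch of how logical backups might be enabled per cluster (field names follow the `postgresql` manifest; the cron schedule value is an invented example):

```yaml
# Hypothetical excerpt of a cluster manifest: the operator creates a K8s
# cron job that runs the logical-backup image on the given schedule.
spec:
  enableLogicalBackup: true
  # standard cron syntax; here: a dump at 00:30 every night (example value)
  logicalBackupSchedule: "30 00 * * *"
```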

View File

@@ -706,7 +706,8 @@ spec:
 ### Clone directly
-Another way to get a fresh copy of your source DB cluster is via basebackup. To
+Another way to get a fresh copy of your source DB cluster is via
+[pg_basebackup](https://www.postgresql.org/docs/13/app-pgbasebackup.html). To
 use this feature simply leave out the timestamp field from the clone section.
 The operator will connect to the service of the source cluster by name. If the
 cluster is called test, then the connection string will look like host=test
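A minimal sketch of such a clone section without a timestamp (the cluster name "test" is taken from the text above, the rest of the manifest is omitted), which makes the operator stream a basebackup directly from the running source cluster:

```yaml
# Hypothetical clone section: no timestamp, so pg_basebackup is run
# against the source cluster's service instead of the backup location.
spec:
  clone:
    cluster: "test"
```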

View File

@@ -9,7 +9,7 @@ metadata:
   # "delete-date": "2020-08-31"  # can only be deleted on that day if "delete-date" key is configured
   # "delete-clustername": "acid-test-cluster"  # can only be deleted when name matches if "delete-clustername" key is configured
 spec:
-  dockerImage: registry.opensource.zalan.do/acid/spilo-13:2.0-p3
+  dockerImage: registry.opensource.zalan.do/acid/spilo-13:2.0-p4
   teamId: "acid"
   numberOfInstances: 2
   users:  # Application/Robot users

View File

@@ -32,7 +32,7 @@ data:
   # default_memory_request: 100Mi
   # delete_annotation_date_key: delete-date
   # delete_annotation_name_key: delete-clustername
-  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p3
+  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p4
   # downscaler_annotations: "deployment-time,downscaler/*"
   # enable_admin_role_for_users: "true"
   # enable_crd_validation: "true"

View File

@@ -61,7 +61,7 @@ spec:
       properties:
         docker_image:
           type: string
-          default: "registry.opensource.zalan.do/acid/spilo-13:2.0-p3"
+          default: "registry.opensource.zalan.do/acid/spilo-13:2.0-p4"
        enable_crd_validation:
           type: boolean
           default: true

View File

@@ -3,7 +3,7 @@ kind: OperatorConfiguration
 metadata:
   name: postgresql-operator-default-configuration
 configuration:
-  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p3
+  docker_image: registry.opensource.zalan.do/acid/spilo-13:2.0-p4
   # enable_crd_validation: true
   # enable_lazy_spilo_upgrade: false
   enable_pgversion_env_var: true

View File

@@ -39,7 +39,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
 	result.EnableSpiloWalPathCompat = fromCRD.EnableSpiloWalPathCompat
 	result.EtcdHost = fromCRD.EtcdHost
 	result.KubernetesUseConfigMaps = fromCRD.KubernetesUseConfigMaps
-	result.DockerImage = util.Coalesce(fromCRD.DockerImage, "registry.opensource.zalan.do/acid/spilo-13:2.0-p3")
+	result.DockerImage = util.Coalesce(fromCRD.DockerImage, "registry.opensource.zalan.do/acid/spilo-13:2.0-p4")
 	result.Workers = util.CoalesceUInt32(fromCRD.Workers, 8)
 	result.MinInstances = fromCRD.MinInstances
 	result.MaxInstances = fromCRD.MaxInstances

View File

@@ -151,7 +151,7 @@ type Config struct {
 	WatchedNamespace        string            `name:"watched_namespace"` // special values: "*" means 'watch all namespaces', the empty string "" means 'watch a namespace where operator is deployed to'
 	KubernetesUseConfigMaps bool              `name:"kubernetes_use_configmaps" default:"false"`
 	EtcdHost                string            `name:"etcd_host" default:""` // special values: the empty string "" means Patroni will use K8s as a DCS
-	DockerImage             string            `name:"docker_image" default:"registry.opensource.zalan.do/acid/spilo-13:2.0-p3"`
+	DockerImage             string            `name:"docker_image" default:"registry.opensource.zalan.do/acid/spilo-13:2.0-p4"`
 	SidecarImages           map[string]string `name:"sidecar_docker_images"` // deprecated in favour of SidecarContainers
 	SidecarContainers       []v1.Container    `name:"sidecars"`
 	PodServiceAccountName   string            `name:"pod_service_account_name" default:"postgres-pod"`