Merge branch 'master' into gh-pages

Felix Kunde 2022-05-20 20:17:46 +02:00
commit 42020a473e
44 changed files with 451 additions and 221 deletions

View File

@@ -9,7 +9,7 @@ assignees: ''
 Please, answer some short questions which should help us to understand your problem / question better?
-- **Which image of the operator are you using?** e.g. registry.opensource.zalan.do/acid/postgres-operator:v1.8.0
+- **Which image of the operator are you using?** e.g. registry.opensource.zalan.do/acid/postgres-operator:v1.8.1
 - **Where do you run it - cloud or metal? Kubernetes or OpenShift?** [AWS K8s | GCP ... | Bare Metal K8s]
 - **Are you running Postgres Operator in production?** [yes | no]
 - **Type of issue?** [Bug report, question, feature request, etc.]

View File

@@ -1,2 +1,2 @@
 # global owners
-* @sdudoladov @Jan-M @CyberDem0n @FxKu @jopadi
+* @sdudoladov @Jan-M @CyberDem0n @FxKu @jopadi @idanovinda

View File

@@ -1,4 +1,5 @@
 Sergey Dudoladov <sergey.dudoladov@zalando.de>
 Felix Kunde <felix.kunde@zalando.de>
 Jan Mussler <jan.mussler@zalando.de>
 Jociele Padilha <jociele.padilha@zalando.de>
+Ida Novindasari <ida.novindasari@zalando.de>

View File

@@ -1,7 +1,7 @@
-apiVersion: v1
+apiVersion: v2
 name: postgres-operator-ui
-version: 1.8.0
-appVersion: 1.8.0
+version: 1.8.1
+appVersion: 1.8.1
 home: https://github.com/zalando/postgres-operator
 description: Postgres Operator UI provides a graphical interface for a convenient database-as-a-service user experience
 keywords:

View File

@@ -1,9 +1,32 @@
 apiVersion: v1
 entries:
 postgres-operator-ui:
+- apiVersion: v2
+appVersion: 1.8.1
+created: "2022-05-19T16:03:34.70846034+02:00"
+description: Postgres Operator UI provides a graphical interface for a convenient
+database-as-a-service user experience
+digest: d26342e385ea51a0fbfbe23477999863e9489664ae803ea5c56da8897db84d24
+home: https://github.com/zalando/postgres-operator
+keywords:
+- postgres
+- operator
+- ui
+- cloud-native
+- patroni
+- spilo
+maintainers:
+- email: opensource@zalando.de
+name: Zalando
+name: postgres-operator-ui
+sources:
+- https://github.com/zalando/postgres-operator
+urls:
+- postgres-operator-ui-1.8.1.tgz
+version: 1.8.1
 - apiVersion: v1
 appVersion: 1.8.0
-created: "2022-04-20T15:39:16.094386569+02:00"
+created: "2022-05-19T16:03:34.707925712+02:00"
 description: Postgres Operator UI provides a graphical interface for a convenient
 database-as-a-service user experience
 digest: d4a7b40c23fd167841cc28342afdbd5ecc809181913a5c31061c83139187f148
@@ -26,7 +49,7 @@ entries:
 version: 1.8.0
 - apiVersion: v1
 appVersion: 1.7.1
-created: "2022-04-20T15:39:16.093853803+02:00"
+created: "2022-05-19T16:03:34.707388723+02:00"
 description: Postgres Operator UI provides a graphical interface for a convenient
 database-as-a-service user experience
 digest: 97aed1a1d37cd5f8441eea9522f38e56cc829786ad2134c437a5e6a15c995869
@@ -49,7 +72,7 @@ entries:
 version: 1.7.1
 - apiVersion: v1
 appVersion: 1.7.0
-created: "2022-04-20T15:39:16.093334397+02:00"
+created: "2022-05-19T16:03:34.706864701+02:00"
 description: Postgres Operator UI provides a graphical interface for a convenient
 database-as-a-service user experience
 digest: 37fba1968347daad393dbd1c6ee6e5b6a24d1095f972c0102197531c62dcada8
@@ -72,7 +95,7 @@ entries:
 version: 1.7.0
 - apiVersion: v1
 appVersion: 1.6.3
-created: "2022-04-20T15:39:16.092419178+02:00"
+created: "2022-05-19T16:03:34.705931681+02:00"
 description: Postgres Operator UI provides a graphical interface for a convenient
 database-as-a-service user experience
 digest: 08b810aa632dcc719e4785ef184e391267f7c460caa99677f2d00719075aac78
@@ -95,7 +118,7 @@ entries:
 version: 1.6.3
 - apiVersion: v1
 appVersion: 1.6.2
-created: "2022-04-20T15:39:16.091945123+02:00"
+created: "2022-05-19T16:03:34.705441492+02:00"
 description: Postgres Operator UI provides a graphical interface for a convenient
 database-as-a-service user experience
 digest: 14d1559bb0bd1e1e828f2daaaa6f6ac9ffc268d79824592c3589b55dd39241f6
@@ -118,7 +141,7 @@ entries:
 version: 1.6.2
 - apiVersion: v1
 appVersion: 1.6.1
-created: "2022-04-20T15:39:16.0914401+02:00"
+created: "2022-05-19T16:03:34.704908895+02:00"
 description: Postgres Operator UI provides a graphical interface for a convenient
 database-as-a-service user experience
 digest: 3d321352f2f1e7bb7450aa8876e3d818aa9f9da9bd4250507386f0490f2c1969
@@ -141,7 +164,7 @@ entries:
 version: 1.6.1
 - apiVersion: v1
 appVersion: 1.6.0
-created: "2022-04-20T15:39:16.090887513+02:00"
+created: "2022-05-19T16:03:34.704432119+02:00"
 description: Postgres Operator UI provides a graphical interface for a convenient
 database-as-a-service user experience
 digest: 1e0aa1e7db3c1daa96927ffbf6fdbcdb434562f961833cb5241ddbe132220ee4
@@ -162,4 +185,4 @@ entries:
 urls:
 - postgres-operator-ui-1.6.0.tgz
 version: 1.6.0
-generated: "2022-04-20T15:39:16.0877032+02:00"
+generated: "2022-05-19T16:03:34.70375145+02:00"

View File

@@ -8,7 +8,7 @@ replicaCount: 1
 image:
 registry: registry.opensource.zalan.do
 repository: acid/postgres-operator-ui
-tag: v1.8.0
+tag: v1.8.1
 pullPolicy: "IfNotPresent"
 # Optionally specify an array of imagePullSecrets.

View File

@@ -1,7 +1,7 @@
-apiVersion: v1
+apiVersion: v2
 name: postgres-operator
-version: 1.8.0
-appVersion: 1.8.0
+version: 1.8.1
+appVersion: 1.8.1
 home: https://github.com/zalando/postgres-operator
 description: Postgres Operator creates and manages PostgreSQL clusters running in Kubernetes
 keywords:

View File

@@ -450,7 +450,7 @@ spec:
 properties:
 logical_backup_docker_image:
 type: string
-default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
+default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
 logical_backup_google_application_credentials:
 type: string
 logical_backup_job_prefix:

View File

@@ -479,7 +479,6 @@ spec:
 - standby_host
 streams:
 type: array
-nullable: true
 items:
 type: object
 required:
@@ -588,12 +587,12 @@ spec:
 - SUPERUSER
 - nosuperuser
 - NOSUPERUSER
-usersWithPasswordRotation:
+usersWithInPlaceSecretRotation:
 type: array
 nullable: true
 items:
 type: string
-usersWithInPlacePasswordRotation:
+usersWithSecretRotation:
 type: array
 nullable: true
 items:
@@ -612,17 +611,26 @@ spec:
 type: array
 items:
 type: object
+required:
+- key
+- operator
 properties:
 key:
 type: string
 operator:
 type: string
+enum:
+- DoesNotExists
+- Exists
+- In
+- NotIn
 values:
 type: array
 items:
 type: string
 matchLabels:
 type: object
+x-kubernetes-preserve-unknown-fields: true
 size:
 type: string
 pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'

View File

@@ -1,9 +1,31 @@
 apiVersion: v1
 entries:
 postgres-operator:
+- apiVersion: v2
+appVersion: 1.8.1
+created: "2022-05-19T16:01:17.868770557+02:00"
+description: Postgres Operator creates and manages PostgreSQL clusters running
+in Kubernetes
+digest: ee0c3bb6ba72fa4289ba3b1c6060e5b312dd023faba2a61b4cb7d9e5e2cc57a5
+home: https://github.com/zalando/postgres-operator
+keywords:
+- postgres
+- operator
+- cloud-native
+- patroni
+- spilo
+maintainers:
+- email: opensource@zalando.de
+name: Zalando
+name: postgres-operator
+sources:
+- https://github.com/zalando/postgres-operator
+urls:
+- postgres-operator-1.8.1.tgz
+version: 1.8.1
 - apiVersion: v1
 appVersion: 1.8.0
-created: "2022-04-14T15:02:07.818613578+02:00"
+created: "2022-05-19T16:01:17.866519324+02:00"
 description: Postgres Operator creates and manages PostgreSQL clusters running
 in Kubernetes
 digest: 3ae232cf009e09aa2ad11c171484cd2f1b72e63c59735e58fbe2b6eb842f4c86
@@ -25,7 +47,7 @@ entries:
 version: 1.8.0
 - apiVersion: v1
 appVersion: 1.7.1
-created: "2022-04-14T15:02:07.817076674+02:00"
+created: "2022-05-19T16:01:17.863939923+02:00"
 description: Postgres Operator creates and manages PostgreSQL clusters running
 in Kubernetes
 digest: 7262563bec0b058e669ae6bcff0226e33fa9ece9c41ac46a53274046afe7700c
@@ -47,7 +69,7 @@ entries:
 version: 1.7.1
 - apiVersion: v1
 appVersion: 1.7.0
-created: "2022-04-14T15:02:07.815161671+02:00"
+created: "2022-05-19T16:01:17.861563817+02:00"
 description: Postgres Operator creates and manages PostgreSQL clusters running
 in Kubernetes
 digest: c3e99fb94305f81484b8b1af18eefb78681f3b5d057d5ad10565e4afb7c65ffe
@@ -69,7 +91,7 @@ entries:
 version: 1.7.0
 - apiVersion: v1
 appVersion: 1.6.3
-created: "2022-04-14T15:02:07.813087244+02:00"
+created: "2022-05-19T16:01:17.857400801+02:00"
 description: Postgres Operator creates and manages PostgreSQL clusters running
 in Kubernetes
 digest: ea08f991bf23c9ad114bca98ebcbe3e2fa15beab163061399394905eaee89b35
@@ -91,7 +113,7 @@ entries:
 version: 1.6.3
 - apiVersion: v1
 appVersion: 1.6.2
-created: "2022-04-14T15:02:07.8114121+02:00"
+created: "2022-05-19T16:01:17.853990686+02:00"
 description: Postgres Operator creates and manages PostgreSQL clusters running
 in Kubernetes
 digest: d886f8a0879ca07d1e5246ee7bc55710e1c872f3977280fe495db6fc2057a7f4
@@ -113,7 +135,7 @@ entries:
 version: 1.6.2
 - apiVersion: v1
 appVersion: 1.6.1
-created: "2022-04-14T15:02:07.809829808+02:00"
+created: "2022-05-19T16:01:17.851310112+02:00"
 description: Postgres Operator creates and manages PostgreSQL clusters running
 in Kubernetes
 digest: 4ba5972cd486dcaa2d11c5613a6f97f6b7b831822e610fe9e10a57ea1db23556
@@ -135,7 +157,7 @@ entries:
 version: 1.6.1
 - apiVersion: v1
 appVersion: 1.6.0
-created: "2022-04-14T15:02:07.808307624+02:00"
+created: "2022-05-19T16:01:17.848853103+02:00"
 description: Postgres Operator creates and manages PostgreSQL clusters running
 in Kubernetes
 digest: f52149718ea364f46b4b9eec9a65f6253ad182bb78df541d14cd5277b9c8a8c3
@@ -155,4 +177,4 @@ entries:
 urls:
 - postgres-operator-1.6.0.tgz
 version: 1.6.0
-generated: "2022-04-14T15:02:07.806370532+02:00"
+generated: "2022-05-19T16:01:17.843701398+02:00"

Binary file not shown.

View File

@@ -1,7 +1,7 @@
 image:
 registry: registry.opensource.zalan.do
 repository: acid/postgres-operator
-tag: v1.8.0
+tag: v1.8.1
 pullPolicy: "IfNotPresent"
 # Optionally specify an array of imagePullSecrets.

View File

@@ -1286,7 +1286,7 @@ make docker
 # build in image in minikube docker env
 eval $(minikube docker-env)
-docker build -t registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.0 .
+docker build -t registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.1 .
 # apply UI manifests next to a running Postgres Operator
 kubectl apply -f manifests/

View File

@@ -178,13 +178,18 @@ under the `users` key.
 `standby`.
 * **additional_owner_roles**
-Specifies database roles that will become members of all database owners.
-Then owners can use `SET ROLE` to obtain privileges of these roles to e.g.
-create/update functionality from extensions as part of a migration script.
-Note, that roles listed here should be preconfigured in the docker image
-and already exist in the database cluster on startup. One such role can be
-`cron_admin` which is provided by the Spilo docker image to set up cron
-jobs inside the `postgres` database. Default is `empty`.
+Specifies database roles that will be granted to all database owners. Owners
+can then use `SET ROLE` to obtain privileges of these roles to e.g. create
+or update functionality from extensions as part of a migration script. One
+such role can be `cron_admin` which is provided by the Spilo docker image to
+set up cron jobs inside the `postgres` database. In general, roles listed
+here should be preconfigured in the docker image and already exist in the
+database cluster on startup. Otherwise, syncing roles will return an error
+on each cluster sync process. Alternatively, you have to create the role and
+do the GRANT manually. Note, the operator will not allow additional owner
+roles to be members of database owners because it should be vice versa. If
+the operator cannot set up the correct membership it tries to revoke all
+additional owner roles from database owners. Default is `empty`.
 * **enable_password_rotation**
 For all `LOGIN` roles that are not database owners the operator can rotate
@@ -679,7 +684,7 @@ grouped under the `logical_backup` key.
 runs `pg_dumpall` on a replica if possible and uploads compressed results to
 an S3 bucket under the key `/spilo/pg_cluster_name/cluster_k8s_uuid/logical_backups`.
 The default image is the same image built with the Zalando-internal CI
-pipeline. Default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
+pipeline. Default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
 * **logical_backup_google_application_credentials**
 Specifies the path of the google cloud service account json file. Default is empty.
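Not part of the commit: a minimal Go sketch of the SQL this additional_owner_roles behaviour reduces to, assuming the `grantToUserSQL` and `revokeFromUserSQL` templates that appear in the users.go hunk further below; the owner and role names are made up for illustration and quoting of role lists is simplified.

```go
package main

import "fmt"

// Templates as shown in the users.go hunk below; real code quotes member lists.
const (
	grantToUserSQL    = `GRANT %s TO "%s"`
	revokeFromUserSQL = `REVOKE "%s" FROM "%s"`
)

func main() {
	owners := []string{"foo_owner", "bar_owner"} // database owners from the manifest
	additionalOwnerRole := "cron_admin"          // configured via additional_owner_roles

	// sync: each additional owner role is granted to every database owner
	for _, owner := range owners {
		fmt.Printf(grantToUserSQL+"\n", additionalOwnerRole, owner)
	}

	// repair: if the membership was created the other way round, the owner is
	// revoked from the additional owner role so the GRANT can succeed on retry
	fmt.Printf(revokeFromUserSQL+"\n", "bar_owner", additionalOwnerRole)
}
```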

View File

@@ -12,8 +12,8 @@ from kubernetes import client
 from tests.k8s_api import K8s
 from kubernetes.client.rest import ApiException
-SPILO_CURRENT = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.1"
-SPILO_LAZY = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.2"
+SPILO_CURRENT = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.3"
+SPILO_LAZY = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.4"
 def to_selector(labels):
@@ -161,10 +161,21 @@ class EndToEndTestCase(unittest.TestCase):
 @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
 def test_additional_owner_roles(self):
 '''
-Test adding additional member roles to existing database owner roles
+Test granting additional roles to existing database owners
 '''
 k8s = self.k8s
+# first test - wait for the operator to get in sync and set everything up
+self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"},
+"Operator does not get in sync")
+leader = k8s.get_cluster_leader_pod()
+# produce wrong membership for cron_admin
+grant_dbowner = """
+GRANT bar_owner TO cron_admin;
+"""
+self.query_database(leader.metadata.name, "postgres", grant_dbowner)
 # enable PostgresTeam CRD and lower resync
 owner_roles = {
 "data": {
@@ -175,16 +186,15 @@ class EndToEndTestCase(unittest.TestCase):
 self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"},
 "Operator does not get in sync")
-leader = k8s.get_cluster_leader_pod()
 owner_query = """
 SELECT a2.rolname
 FROM pg_catalog.pg_authid a
 JOIN pg_catalog.pg_auth_members am
 ON a.oid = am.member
-AND a.rolname = 'cron_admin'
+AND a.rolname IN ('zalando', 'bar_owner', 'bar_data_owner')
 JOIN pg_catalog.pg_authid a2
 ON a2.oid = am.roleid
-WHERE a2.rolname IN ('zalando', 'bar_owner', 'bar_data_owner');
+WHERE a2.rolname = 'cron_admin';
 """
 self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", owner_query)), 3,
 "Not all additional users found in database", 10, 5)

View File

@@ -5,7 +5,7 @@ go 1.17
 require (
 github.com/spf13/cobra v1.2.1
 github.com/spf13/viper v1.9.0
-github.com/zalando/postgres-operator v1.8.0
+github.com/zalando/postgres-operator v1.8.1
 k8s.io/api v0.22.4
 k8s.io/apiextensions-apiserver v0.22.4
 k8s.io/apimachinery v0.22.4

View File

@@ -71,7 +71,7 @@ data:
 # kube_iam_role: ""
 # kubernetes_use_configmaps: "false"
 # log_s3_bucket: ""
-logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
+logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
 # logical_backup_google_application_credentials: ""
 logical_backup_job_prefix: "logical-backup-"
 logical_backup_provider: "s3"

View File

@@ -448,7 +448,7 @@ spec:
 properties:
 logical_backup_docker_image:
 type: string
-default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
+default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
 logical_backup_google_application_credentials:
 type: string
 logical_backup_job_prefix:

View File

@@ -19,7 +19,7 @@ spec:
 serviceAccountName: postgres-operator
 containers:
 - name: postgres-operator
-image: registry.opensource.zalan.do/acid/postgres-operator:v1.8.0
+image: registry.opensource.zalan.do/acid/postgres-operator:v1.8.1
 imagePullPolicy: IfNotPresent
 resources:
 requests:

View File

@@ -144,7 +144,7 @@ configuration:
 # wal_gs_bucket: ""
 # wal_s3_bucket: ""
 logical_backup:
-logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
+logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
 # logical_backup_google_application_credentials: ""
 logical_backup_job_prefix: "logical-backup-"
 logical_backup_provider: "s3"

View File

@@ -477,7 +477,6 @@ spec:
 - standby_host
 streams:
 type: array
-nullable: true
 items:
 type: object
 required:
@@ -586,12 +585,12 @@ spec:
 - SUPERUSER
 - nosuperuser
 - NOSUPERUSER
-usersWithPasswordRotation:
+usersWithInPlaceSecretRotation:
 type: array
 nullable: true
 items:
 type: string
-usersWithInPlacePasswordRotation:
+usersWithSecretRotation:
 type: array
 nullable: true
 items:
@@ -610,17 +609,26 @@ spec:
 type: array
 items:
 type: object
+required:
+- key
+- operator
 properties:
 key:
 type: string
 operator:
 type: string
+enum:
+- DoesNotExists
+- Exists
+- In
+- NotIn
 values:
 type: array
 items:
 type: string
 matchLabels:
 type: object
+x-kubernetes-preserve-unknown-fields: true
 size:
 type: string
 pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'

View File

@@ -957,7 +957,7 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
 },
 },
 },
-"usersWithSecretRotation": {
+"usersWithInPlaceSecretRotation": {
 Type: "array",
 Nullable: true,
 Items: &apiextv1.JSONSchemaPropsOrArray{
@@ -966,7 +966,7 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
 },
 },
 },
-"usersWithInPlaceSecretRotation": {
+"usersWithSecretRotation": {
 Type: "array",
 Nullable: true,
 Items: &apiextv1.JSONSchemaPropsOrArray{
@@ -990,7 +990,7 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
 Items: &apiextv1.JSONSchemaPropsOrArray{
 Schema: &apiextv1.JSONSchemaProps{
 Type: "object",
-Required: []string{"key", "operator", "values"},
+Required: []string{"key", "operator"},
 Properties: map[string]apiextv1.JSONSchemaProps{
 "key": {
 Type: "string",
@@ -999,16 +999,16 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
 Type: "string",
 Enum: []apiextv1.JSON{
 {
-Raw: []byte(`"In"`),
-},
-{
-Raw: []byte(`"NotIn"`),
+Raw: []byte(`"DoesNotExist"`),
 },
 {
 Raw: []byte(`"Exists"`),
 },
 {
-Raw: []byte(`"DoesNotExist"`),
+Raw: []byte(`"In"`),
+},
+{
+Raw: []byte(`"NotIn"`),
 },
 },
 },

View File

@@ -238,6 +238,7 @@ type ConnectionPooler struct {
 *Resources `json:"resources,omitempty"`
 }
+// Stream defines properties for creating FabricEventStream resources
 type Stream struct {
 ApplicationId string `json:"applicationId"`
 Database string `json:"database"`
@@ -246,6 +247,7 @@ type Stream struct {
 BatchSize *uint32 `json:"batchSize,omitempty"`
 }
+// StreamTable defines properties of outbox tables for FabricEventStreams
 type StreamTable struct {
 EventType string `json:"eventType"`
 IdColumn *string `json:"idColumn,omitempty"`

View File

@@ -133,8 +133,10 @@ func New(cfg Config, kubeClient k8sutil.KubernetesClient, pgSpec acidv1.Postgres
 Services: make(map[PostgresRole]*v1.Service),
 Endpoints: make(map[PostgresRole]*v1.Endpoints)},
 userSyncStrategy: users.DefaultUserSyncStrategy{
 PasswordEncryption: passwordEncryption,
-RoleDeletionSuffix: cfg.OpConfig.RoleDeletionSuffix},
+RoleDeletionSuffix: cfg.OpConfig.RoleDeletionSuffix,
+AdditionalOwnerRoles: cfg.OpConfig.AdditionalOwnerRoles,
+},
 deleteOptions: metav1.DeleteOptions{PropagationPolicy: &deletePropagationPolicy},
 podEventsQueue: podEventsQueue,
 KubeClient: kubeClient,
@@ -1030,12 +1032,20 @@ func (c *Cluster) processPodEvent(obj interface{}) error {
 return fmt.Errorf("could not cast to PodEvent")
 }
+// can only take lock when (un)registerPodSubscriber is finshed
 c.podSubscribersMu.RLock()
 subscriber, ok := c.podSubscribers[spec.NamespacedName(event.PodName)]
-c.podSubscribersMu.RUnlock()
 if ok {
-subscriber <- event
+select {
+case subscriber <- event:
+default:
+// ending up here when there is no receiver on the channel (i.e. waitForPodLabel finished)
+// avoids blocking channel: https://gobyexample.com/non-blocking-channel-operations
+}
 }
+// hold lock for the time of processing the event to avoid race condition
+// with unregisterPodSubscriber closing the channel (see #1876)
+c.podSubscribersMu.RUnlock()
 return nil
 }
@@ -1308,28 +1318,15 @@ func (c *Cluster) initRobotUsers() error {
 }
 func (c *Cluster) initAdditionalOwnerRoles() {
-for _, additionalOwner := range c.OpConfig.AdditionalOwnerRoles {
-// fetch all database owners the additional should become a member of
-memberOf := make([]string, 0)
-for username, pgUser := range c.pgUsers {
-if pgUser.IsDbOwner {
-memberOf = append(memberOf, username)
-}
-}
-if len(memberOf) > 0 {
-namespace := c.Namespace
-additionalOwnerPgUser := spec.PgUser{
-Origin: spec.RoleOriginSpilo,
-MemberOf: memberOf,
-Name: additionalOwner,
-Namespace: namespace,
-}
-if currentRole, present := c.pgUsers[additionalOwner]; present {
-c.pgUsers[additionalOwner] = c.resolveNameConflict(&currentRole, &additionalOwnerPgUser)
-} else {
-c.pgUsers[additionalOwner] = additionalOwnerPgUser
-}
+if len(c.OpConfig.AdditionalOwnerRoles) == 0 {
+return
+}
+// fetch database owners and assign additional owner roles
+for username, pgUser := range c.pgUsers {
+if pgUser.IsDbOwner {
+pgUser.MemberOf = append(pgUser.MemberOf, c.OpConfig.AdditionalOwnerRoles...)
+c.pgUsers[username] = pgUser
 }
 }
 }
@@ -1512,34 +1509,16 @@ func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) e
 var err error
 c.logger.Debugf("switching over from %q to %q", curMaster.Name, candidate)
 c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switching over from %q to %q", curMaster.Name, candidate)
-var wg sync.WaitGroup
-podLabelErr := make(chan error)
 stopCh := make(chan struct{})
+ch := c.registerPodSubscriber(candidate)
+defer c.unregisterPodSubscriber(candidate)
+defer close(stopCh)
-wg.Add(1)
-go func() {
-defer wg.Done()
-ch := c.registerPodSubscriber(candidate)
-defer c.unregisterPodSubscriber(candidate)
-role := Master
-select {
-case <-stopCh:
-case podLabelErr <- func() (err2 error) {
-_, err2 = c.waitForPodLabel(ch, stopCh, &role)
-return
-}():
-}
-}()
 if err = c.patroni.Switchover(curMaster, candidate.Name); err == nil {
 c.logger.Debugf("successfully switched over from %q to %q", curMaster.Name, candidate)
 c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Successfully switched over from %q to %q", curMaster.Name, candidate)
-if err = <-podLabelErr; err != nil {
+_, err = c.waitForPodLabel(ch, stopCh, nil)
+if err != nil {
 err = fmt.Errorf("could not get master pod label: %v", err)
 }
 } else {
@@ -1547,14 +1526,6 @@ func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) e
 c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switchover from %q to %q FAILED: %v", curMaster.Name, candidate, err)
 }
-// signal the role label waiting goroutine to close the shop and go home
-close(stopCh)
-// wait until the goroutine terminates, since unregisterPodSubscriber
-// must be called before the outer return; otherwise we risk subscribing to the same pod twice.
-wg.Wait()
-// close the label waiting channel no sooner than the waiting goroutine terminates.
-close(podLabelErr)
 return err
 }
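As context for the processPodEvent change above, a standalone sketch of the pattern it adopts: the read lock is held for the whole delivery and the send is non-blocking. The types here are simplified stand-ins, not the operator's own.

```go
package main

import (
	"fmt"
	"sync"
)

// PodEvent and cluster are simplified stand-ins for the operator's types.
type PodEvent struct{ Name string }

type cluster struct {
	mu          sync.RWMutex
	subscribers map[string]chan PodEvent
}

// deliver holds the read lock for the whole delivery so a concurrent
// unregister cannot close the channel mid-send, and uses select/default so the
// send never blocks when nobody is receiving anymore.
func (c *cluster) deliver(pod string, ev PodEvent) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if ch, ok := c.subscribers[pod]; ok {
		select {
		case ch <- ev:
		default: // no receiver (e.g. the label waiter already returned): drop instead of blocking
		}
	}
}

func main() {
	c := &cluster{subscribers: map[string]chan PodEvent{"demo-0": make(chan PodEvent, 1)}}
	c.deliver("demo-0", PodEvent{Name: "demo-0"})
	fmt.Println(<-c.subscribers["demo-0"])
}
```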

View File

@@ -148,11 +148,9 @@ func TestInitAdditionalOwnerRoles(t *testing.T) {
 manifestUsers := map[string]acidv1.UserFlags{"foo_owner": {}, "bar_owner": {}, "app_user": {}}
 expectedUsers := map[string]spec.PgUser{
-"foo_owner": {Origin: spec.RoleOriginManifest, Name: "foo_owner", Namespace: cl.Namespace, Password: "f123", Flags: []string{"LOGIN"}, IsDbOwner: true},
-"bar_owner": {Origin: spec.RoleOriginManifest, Name: "bar_owner", Namespace: cl.Namespace, Password: "b123", Flags: []string{"LOGIN"}, IsDbOwner: true},
+"foo_owner": {Origin: spec.RoleOriginManifest, Name: "foo_owner", Namespace: cl.Namespace, Password: "f123", Flags: []string{"LOGIN"}, IsDbOwner: true, MemberOf: []string{"cron_admin", "part_man"}},
+"bar_owner": {Origin: spec.RoleOriginManifest, Name: "bar_owner", Namespace: cl.Namespace, Password: "b123", Flags: []string{"LOGIN"}, IsDbOwner: true, MemberOf: []string{"cron_admin", "part_man"}},
 "app_user": {Origin: spec.RoleOriginManifest, Name: "app_user", Namespace: cl.Namespace, Password: "a123", Flags: []string{"LOGIN"}, IsDbOwner: false},
-"cron_admin": {Origin: spec.RoleOriginSpilo, Name: "cron_admin", Namespace: cl.Namespace, MemberOf: []string{"foo_owner", "bar_owner"}},
-"part_man": {Origin: spec.RoleOriginSpilo, Name: "part_man", Namespace: cl.Namespace, MemberOf: []string{"foo_owner", "bar_owner"}},
 }
 cl.Spec.Databases = map[string]string{"foo_db": "foo_owner", "bar_db": "bar_owner"}
@@ -163,24 +161,15 @@ func TestInitAdditionalOwnerRoles(t *testing.T) {
 t.Errorf("%s could not init manifest users", testName)
 }
-// update passwords to compare with result
-for manifestUser := range manifestUsers {
-pgUser := cl.pgUsers[manifestUser]
-pgUser.Password = manifestUser[0:1] + "123"
-cl.pgUsers[manifestUser] = pgUser
-}
+// now assign additional roles to owners
 cl.initAdditionalOwnerRoles()
-for _, additionalOwnerRole := range cl.Config.OpConfig.AdditionalOwnerRoles {
-expectedPgUser := expectedUsers[additionalOwnerRole]
-existingPgUser, exists := cl.pgUsers[additionalOwnerRole]
-if !exists {
-t.Errorf("%s additional owner role %q not initilaized", testName, additionalOwnerRole)
-}
+// update passwords to compare with result
+for username, existingPgUser := range cl.pgUsers {
+expectedPgUser := expectedUsers[username]
 if !util.IsEqualIgnoreOrder(expectedPgUser.MemberOf, existingPgUser.MemberOf) {
-t.Errorf("%s unexpected membership of additional owner role %q: expected member of %#v, got member of %#v",
-testName, additionalOwnerRole, expectedPgUser.MemberOf, existingPgUser.MemberOf)
+t.Errorf("%s unexpected membership of user %q: expected member of %#v, got member of %#v",
+testName, username, expectedPgUser.MemberOf, existingPgUser.MemberOf)
 }
 }
 }

View File

@@ -42,6 +42,7 @@ const (
 logicalBackupContainerName = "logical-backup"
 connectionPoolerContainer = "connection-pooler"
 pgPort = 5432
+operatorPort = 8080
 )
 type pgUser struct {
@@ -567,7 +568,7 @@ func generateContainer(
 Protocol: v1.ProtocolTCP,
 },
 {
-ContainerPort: patroni.ApiPort,
+ContainerPort: operatorPort,
 Protocol: v1.ProtocolTCP,
 },
 },
@@ -939,7 +940,6 @@ func (c *Cluster) generateSpiloPodEnvVars(
 func appendEnvVars(envs []v1.EnvVar, appEnv ...v1.EnvVar) []v1.EnvVar {
 collectedEnvs := envs
 for _, env := range appEnv {
-env.Name = strings.ToUpper(env.Name)
 if !isEnvVarPresent(collectedEnvs, env.Name) {
 collectedEnvs = append(collectedEnvs, env)
 }
@@ -949,7 +949,7 @@ func appendEnvVars(envs []v1.EnvVar, appEnv ...v1.EnvVar) []v1.EnvVar {
 func isEnvVarPresent(envs []v1.EnvVar, key string) bool {
 for _, env := range envs {
-if env.Name == key {
+if strings.EqualFold(env.Name, key) {
 return true
 }
 }
@@ -1649,7 +1649,7 @@ func (c *Cluster) generateUserSecrets() map[string]*v1.Secret {
 func (c *Cluster) generateSingleUserSecret(namespace string, pgUser spec.PgUser) *v1.Secret {
 //Skip users with no password i.e. human users (they'll be authenticated using pam)
 if pgUser.Password == "" {
-if pgUser.Origin != spec.RoleOriginTeamsAPI && pgUser.Origin != spec.RoleOriginSpilo {
+if pgUser.Origin != spec.RoleOriginTeamsAPI {
 c.logger.Warningf("could not generate secret for a non-teamsAPI role %q: role has no password",
 pgUser.Name)
 }
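A self-contained sketch of the case-insensitive duplicate check that appendEnvVars/isEnvVarPresent switch to above; envVar is a stand-in for v1.EnvVar so the snippet runs without Kubernetes dependencies.

```go
package main

import (
	"fmt"
	"strings"
)

// envVar stands in for v1.EnvVar; only Name and Value matter here.
type envVar struct{ Name, Value string }

// isPresent mirrors the changed isEnvVarPresent: names are compared
// case-insensitively, so a manifest-provided "wal_s3_bucket" blocks a later
// operator-generated "WAL_S3_BUCKET" instead of producing a duplicate.
func isPresent(envs []envVar, key string) bool {
	for _, e := range envs {
		if strings.EqualFold(e.Name, key) {
			return true
		}
	}
	return false
}

func appendEnvVars(envs []envVar, extra ...envVar) []envVar {
	out := envs
	for _, e := range extra {
		if !isPresent(out, e.Name) {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	base := []envVar{{Name: "wal_s3_bucket", Value: "user-bucket"}}
	merged := appendEnvVars(base, envVar{Name: "WAL_S3_BUCKET", Value: "operator-bucket"})
	fmt.Println(merged) // the earlier, user-provided variable wins
}
```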

View File

@@ -504,7 +504,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
 expectedS3BucketConfigMap := []ExpectedValue{
 {
 envIndex: 17,
-envVarConstant: "WAL_S3_BUCKET",
+envVarConstant: "wal_s3_bucket",
 envVarValue: "global-s3-bucket-configmap",
 },
 }
@@ -518,7 +518,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
 expectedCustomVariableSecret := []ExpectedValue{
 {
 envIndex: 16,
-envVarConstant: "CUSTOM_VARIABLE",
+envVarConstant: "custom_variable",
 envVarValueRef: &v1.EnvVarSource{
 SecretKeyRef: &v1.SecretKeySelector{
 LocalObjectReference: v1.LocalObjectReference{
@@ -532,7 +532,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
 expectedCustomVariableConfigMap := []ExpectedValue{
 {
 envIndex: 16,
-envVarConstant: "CUSTOM_VARIABLE",
+envVarConstant: "custom_variable",
 envVarValue: "configmap-test",
 },
 }
@@ -573,14 +573,14 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
 },
 {
 envIndex: 20,
-envVarConstant: "CLONE_AWS_ENDPOINT",
+envVarConstant: "clone_aws_endpoint",
 envVarValue: "s3.eu-west-1.amazonaws.com",
 },
 }
 expectedCloneEnvSecret := []ExpectedValue{
 {
 envIndex: 20,
-envVarConstant: "CLONE_AWS_ACCESS_KEY_ID",
+envVarConstant: "clone_aws_access_key_id",
 envVarValueRef: &v1.EnvVarSource{
 SecretKeyRef: &v1.SecretKeySelector{
 LocalObjectReference: v1.LocalObjectReference{
@@ -599,7 +599,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
 },
 {
 envIndex: 20,
-envVarConstant: "STANDBY_GOOGLE_APPLICATION_CREDENTIALS",
+envVarConstant: "standby_google_application_credentials",
 envVarValueRef: &v1.EnvVarSource{
 SecretKeyRef: &v1.SecretKeySelector{
 LocalObjectReference: v1.LocalObjectReference{

View File

@@ -67,7 +67,7 @@ func (c *Cluster) markRollingUpdateFlagForPod(pod *v1.Pod, msg string) error {
 return fmt.Errorf("could not form patch for pod's rolling update flag: %v", err)
 }
-err = retryutil.Retry(c.OpConfig.PatroniAPICheckInterval, c.OpConfig.PatroniAPICheckTimeout,
+err = retryutil.Retry(1*time.Second, 5*time.Second,
 func() (bool, error) {
 _, err2 := c.KubeClient.Pods(pod.Namespace).Patch(
 context.TODO(),
@@ -151,12 +151,13 @@ func (c *Cluster) unregisterPodSubscriber(podName spec.NamespacedName) {
 c.podSubscribersMu.Lock()
 defer c.podSubscribersMu.Unlock()
-if _, ok := c.podSubscribers[podName]; !ok {
+ch, ok := c.podSubscribers[podName]
+if !ok {
 panic("subscriber for pod '" + podName.String() + "' is not found")
 }
-close(c.podSubscribers[podName])
 delete(c.podSubscribers, podName)
+close(ch)
 }
 func (c *Cluster) registerPodSubscriber(podName spec.NamespacedName) chan PodEvent {
@@ -399,11 +400,12 @@ func (c *Cluster) getPatroniMemberData(pod *v1.Pod) (patroni.MemberData, error)
 }
 func (c *Cluster) recreatePod(podName spec.NamespacedName) (*v1.Pod, error) {
+stopCh := make(chan struct{})
 ch := c.registerPodSubscriber(podName)
 defer c.unregisterPodSubscriber(podName)
-stopChan := make(chan struct{})
+defer close(stopCh)
-err := retryutil.Retry(c.OpConfig.PatroniAPICheckInterval, c.OpConfig.PatroniAPICheckTimeout,
+err := retryutil.Retry(1*time.Second, 5*time.Second,
 func() (bool, error) {
 err2 := c.KubeClient.Pods(podName.Namespace).Delete(
 context.TODO(),
@@ -421,7 +423,7 @@ func (c *Cluster) recreatePod(podName spec.NamespacedName) (*v1.Pod, error) {
 if err := c.waitForPodDeletion(ch); err != nil {
 return nil, err
 }
-pod, err := c.waitForPodLabel(ch, stopChan, nil)
+pod, err := c.waitForPodLabel(ch, stopCh, nil)
 if err != nil {
 return nil, err
 }
@@ -446,7 +448,7 @@ func (c *Cluster) recreatePods(pods []v1.Pod, switchoverCandidates []spec.Namesp
 continue
 }
-podName := util.NameFromMeta(pod.ObjectMeta)
+podName := util.NameFromMeta(pods[i].ObjectMeta)
 newPod, err := c.recreatePod(podName)
 if err != nil {
 return fmt.Errorf("could not recreate replica pod %q: %v", util.NameFromMeta(pod.ObjectMeta), err)
@@ -520,13 +522,13 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
 // if sync_standby replicas were found assume synchronous_mode is enabled and ignore other candidates list
 if len(syncCandidates) > 0 {
 sort.Slice(syncCandidates, func(i, j int) bool {
-return util.IntFromIntStr(syncCandidates[i].Lag) < util.IntFromIntStr(syncCandidates[j].Lag)
+return syncCandidates[i].Lag < syncCandidates[j].Lag
 })
 return spec.NamespacedName{Namespace: master.Namespace, Name: syncCandidates[0].Name}, nil
 }
 if len(candidates) > 0 {
 sort.Slice(candidates, func(i, j int) bool {
-return util.IntFromIntStr(candidates[i].Lag) < util.IntFromIntStr(candidates[j].Lag)
+return candidates[i].Lag < candidates[j].Lag
 })
 return spec.NamespacedName{Namespace: master.Namespace, Name: candidates[0].Name}, nil
 }
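The getSwitchoverCandidate hunk above now compares lag values as plain unsigned integers. A small sketch of that ordering, assuming the ReplicationLag decoding introduced in the patroni.go hunk further below (string lag such as "unknown" becomes MaxUint64 and therefore sorts last); member is a stand-in for patroni.ClusterMember.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// member is a stand-in for patroni.ClusterMember with the uint64-based lag.
type member struct {
	Name string
	Lag  uint64 // "unknown" lag decodes to math.MaxUint64, so it sorts last
}

func main() {
	candidates := []member{
		{Name: "acid-test-cluster-2", Lag: math.MaxUint64},
		{Name: "acid-test-cluster-3", Lag: 3000000000},
		{Name: "acid-test-cluster-1", Lag: 0},
	}
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].Lag < candidates[j].Lag
	})
	fmt.Println(candidates[0].Name) // the least-lagging replica becomes the switchover target
}
```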

View File

@@ -316,7 +316,7 @@ func (c *Cluster) annotationsSet(annotations map[string]string) map[string]strin
 return nil
 }
-func (c *Cluster) waitForPodLabel(podEvents chan PodEvent, stopChan chan struct{}, role *PostgresRole) (*v1.Pod, error) {
+func (c *Cluster) waitForPodLabel(podEvents chan PodEvent, stopCh chan struct{}, role *PostgresRole) (*v1.Pod, error) {
 timeout := time.After(c.OpConfig.PodLabelWaitTimeout)
 for {
 select {
@@ -332,7 +332,7 @@ func (c *Cluster) waitForPodLabel(podEvents chan PodEvent, stopChan chan struct{
 }
 case <-timeout:
 return nil, fmt.Errorf("pod label wait timeout")
-case <-stopChan:
+case <-stopCh:
 return nil, fmt.Errorf("pod label wait cancelled")
 }
 }

View File

@@ -451,7 +451,7 @@ func (c *Controller) Run(stopCh <-chan struct{}, wg *sync.WaitGroup) {
 panic("could not acquire initial list of clusters")
 }
-wg.Add(5)
+wg.Add(5 + util.Bool2Int(c.opConfig.EnablePostgresTeamCRD))
 go c.runPodInformer(stopCh, wg)
 go c.runPostgresqlInformer(stopCh, wg)
 go c.clusterResync(stopCh, wg)

View File

@@ -165,7 +165,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
 // logical backup config
 result.LogicalBackupSchedule = util.Coalesce(fromCRD.LogicalBackup.Schedule, "30 00 * * *")
-result.LogicalBackupDockerImage = util.Coalesce(fromCRD.LogicalBackup.DockerImage, "registry.opensource.zalan.do/acid/logical-backup:v1.8.0")
+result.LogicalBackupDockerImage = util.Coalesce(fromCRD.LogicalBackup.DockerImage, "registry.opensource.zalan.do/acid/logical-backup:v1.8.1")
 result.LogicalBackupProvider = util.Coalesce(fromCRD.LogicalBackup.BackupProvider, "s3")
 result.LogicalBackupS3Bucket = fromCRD.LogicalBackup.S3Bucket
 result.LogicalBackupS3Region = fromCRD.LogicalBackup.S3Region

View File

@@ -225,7 +225,7 @@ func (c *Controller) processEvent(event ClusterEvent) {
 switch event.EventType {
 case EventAdd:
 if clusterFound {
-lg.Infof("recieved add event for already existing Postgres cluster")
+lg.Infof("received add event for already existing Postgres cluster")
 return
 }

View File

@@ -30,7 +30,6 @@ const (
 RoleOriginManifest
 RoleOriginInfrastructure
 RoleOriginTeamsAPI
-RoleOriginSpilo
 RoleOriginSystem
 RoleOriginBootstrap
 RoleConnectionPooler

View File

@@ -122,7 +122,7 @@ type Scalyr struct {
 // LogicalBackup defines configuration for logical backup
 type LogicalBackup struct {
 LogicalBackupSchedule string `name:"logical_backup_schedule" default:"30 00 * * *"`
-LogicalBackupDockerImage string `name:"logical_backup_docker_image" default:"registry.opensource.zalan.do/acid/logical-backup:v1.8.0"`
+LogicalBackupDockerImage string `name:"logical_backup_docker_image" default:"registry.opensource.zalan.do/acid/logical-backup:v1.8.1"`
 LogicalBackupProvider string `name:"logical_backup_provider" default:"s3"`
 LogicalBackupS3Bucket string `name:"logical_backup_s3_bucket" default:""`
 LogicalBackupS3Region string `name:"logical_backup_s3_region" default:""`

View File

@@ -5,6 +5,7 @@ import (
 "encoding/json"
 "fmt"
 "io/ioutil"
+"math"
 "net"
 "net/http"
 "strconv"
@@ -16,7 +17,6 @@ import (
 "github.com/sirupsen/logrus"
 acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
 v1 "k8s.io/api/core/v1"
-"k8s.io/apimachinery/pkg/util/intstr"
 )
 const (
@@ -185,11 +185,27 @@ type ClusterMembers struct {
 // ClusterMember cluster member data from Patroni API
 type ClusterMember struct {
 Name string `json:"name"`
 Role string `json:"role"`
 State string `json:"state"`
 Timeline int `json:"timeline"`
-Lag intstr.IntOrString `json:"lag,omitempty"`
+Lag ReplicationLag `json:"lag,omitempty"`
+}
+type ReplicationLag uint64
+// UnmarshalJSON converts member lag (can be int or string) into uint64
+func (rl *ReplicationLag) UnmarshalJSON(data []byte) error {
+var lagUInt64 uint64
+if data[0] == '"' {
+*rl = math.MaxUint64
+return nil
+}
+if err := json.Unmarshal(data, &lagUInt64); err != nil {
+return err
+}
+*rl = ReplicationLag(lagUInt64)
+return nil
 }
 // MemberDataPatroni child element
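A runnable illustration of the ReplicationLag decoder added above: numeric lag values unmarshal normally, while Patroni's string lag (e.g. "unknown") is mapped to MaxUint64. The member struct is a trimmed stand-in for ClusterMember.

```go
package main

import (
	"encoding/json"
	"fmt"
	"math"
)

type ReplicationLag uint64

// UnmarshalJSON accepts both numeric lag values and string lag ("unknown"),
// mapping the latter to MaxUint64, mirroring the hunk above.
func (rl *ReplicationLag) UnmarshalJSON(data []byte) error {
	if data[0] == '"' {
		*rl = math.MaxUint64
		return nil
	}
	var v uint64
	if err := json.Unmarshal(data, &v); err != nil {
		return err
	}
	*rl = ReplicationLag(v)
	return nil
}

// member is a trimmed stand-in for patroni.ClusterMember.
type member struct {
	Name string         `json:"name"`
	Lag  ReplicationLag `json:"lag,omitempty"`
}

func main() {
	var m []member
	_ = json.Unmarshal([]byte(`[{"name":"a","lag":0},{"name":"b","lag":"unknown"}]`), &m)
	fmt.Println(m[1].Lag == math.MaxUint64) // true: unknown lag sorts after any numeric lag
}
```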

View File

@@ -5,6 +5,7 @@ import (
 "errors"
 "fmt"
 "io/ioutil"
+"math"
 "net/http"
 "reflect"
 "testing"
@@ -15,7 +16,6 @@ import (
 acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
 v1 "k8s.io/api/core/v1"
-"k8s.io/apimachinery/pkg/util/intstr"
 )
 var logger = logrus.New().WithField("test", "patroni")
@@ -101,16 +101,27 @@ func TestGetClusterMembers(t *testing.T) {
 Role: "sync_standby",
 State: "running",
 Timeline: 1,
-Lag: intstr.IntOrString{IntVal: 0},
+Lag: 0,
 }, {
 Name: "acid-test-cluster-2",
 Role: "replica",
 State: "running",
 Timeline: 1,
-Lag: intstr.IntOrString{Type: 1, StrVal: "unknown"},
+Lag: math.MaxUint64,
+}, {
+Name: "acid-test-cluster-3",
+Role: "replica",
+State: "running",
+Timeline: 1,
+Lag: 3000000000,
 }}
-json := `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 1}, {"name": "acid-test-cluster-1", "role": "sync_standby", "state": "running", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 1, "lag": 0}, {"name": "acid-test-cluster-2", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": "unknown"}]}`
+json := `{"members": [
+{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 1},
+{"name": "acid-test-cluster-1", "role": "sync_standby", "state": "running", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 1, "lag": 0},
+{"name": "acid-test-cluster-2", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": "unknown"},
+{"name": "acid-test-cluster-3", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": 3000000000}
+]}`
 r := ioutil.NopCloser(bytes.NewReader([]byte(json)))
 response := http.Response{

View File

@ -20,6 +20,7 @@ const (
alterRoleSetSQL = `ALTER ROLE "%s" SET %s TO %s` alterRoleSetSQL = `ALTER ROLE "%s" SET %s TO %s`
dropUserSQL = `SET LOCAL synchronous_commit = 'local'; DROP ROLE "%s";` dropUserSQL = `SET LOCAL synchronous_commit = 'local'; DROP ROLE "%s";`
grantToUserSQL = `GRANT %s TO "%s"` grantToUserSQL = `GRANT %s TO "%s"`
revokeFromUserSQL = `REVOKE "%s" FROM "%s"`
doBlockStmt = `SET LOCAL synchronous_commit = 'local'; DO $$ BEGIN %s; END;$$;` doBlockStmt = `SET LOCAL synchronous_commit = 'local'; DO $$ BEGIN %s; END;$$;`
passwordTemplate = "ENCRYPTED PASSWORD '%s'" passwordTemplate = "ENCRYPTED PASSWORD '%s'"
inRoleTemplate = `IN ROLE %s` inRoleTemplate = `IN ROLE %s`
@ -31,8 +32,9 @@ const (
// an existing roles of another role membership, nor it removes the already assigned flag // an existing roles of another role membership, nor it removes the already assigned flag
// (except for the NOLOGIN). TODO: process other NOflags, i.e. NOSUPERUSER correctly. // (except for the NOLOGIN). TODO: process other NOflags, i.e. NOSUPERUSER correctly.
type DefaultUserSyncStrategy struct { type DefaultUserSyncStrategy struct {
PasswordEncryption string PasswordEncryption string
RoleDeletionSuffix string RoleDeletionSuffix string
AdditionalOwnerRoles []string
} }
// ProduceSyncRequests figures out the types of changes that need to happen with the given users. // ProduceSyncRequests figures out the types of changes that need to happen with the given users.
@ -53,30 +55,27 @@ func (strategy DefaultUserSyncStrategy) ProduceSyncRequests(dbUsers spec.PgUserM
} }
} else { } else {
r := spec.PgSyncUserRequest{} r := spec.PgSyncUserRequest{}
r.User = dbUser
newMD5Password := util.NewEncryptor(strategy.PasswordEncryption).PGUserPassword(newUser) newMD5Password := util.NewEncryptor(strategy.PasswordEncryption).PGUserPassword(newUser)
// do not compare for roles coming from docker image // do not compare for roles coming from docker image
if newUser.Origin != spec.RoleOriginSpilo { if dbUser.Password != newMD5Password {
if dbUser.Password != newMD5Password { r.User.Password = newMD5Password
r.User.Password = newMD5Password r.Kind = spec.PGsyncUserAlter
r.Kind = spec.PGsyncUserAlter
}
if addNewFlags, equal := util.SubstractStringSlices(newUser.Flags, dbUser.Flags); !equal {
r.User.Flags = addNewFlags
r.Kind = spec.PGsyncUserAlter
}
} }
if addNewRoles, equal := util.SubstractStringSlices(newUser.MemberOf, dbUser.MemberOf); !equal { if addNewRoles, equal := util.SubstractStringSlices(newUser.MemberOf, dbUser.MemberOf); !equal {
r.User.MemberOf = addNewRoles r.User.MemberOf = addNewRoles
r.User.IsDbOwner = newUser.IsDbOwner
r.Kind = spec.PGsyncUserAlter
}
if addNewFlags, equal := util.SubstractStringSlices(newUser.Flags, dbUser.Flags); !equal {
r.User.Flags = addNewFlags
r.Kind = spec.PGsyncUserAlter r.Kind = spec.PGsyncUserAlter
} }
if r.Kind == spec.PGsyncUserAlter { if r.Kind == spec.PGsyncUserAlter {
r.User.Name = newUser.Name r.User.Name = newUser.Name
reqs = append(reqs, r) reqs = append(reqs, r)
} }
if newUser.Origin != spec.RoleOriginSpilo && if len(newUser.Parameters) > 0 &&
len(newUser.Parameters) > 0 &&
!reflect.DeepEqual(dbUser.Parameters, newUser.Parameters) { !reflect.DeepEqual(dbUser.Parameters, newUser.Parameters) {
reqs = append(reqs, spec.PgSyncUserRequest{Kind: spec.PGSyncAlterSet, User: newUser}) reqs = append(reqs, spec.PgSyncUserRequest{Kind: spec.PGSyncAlterSet, User: newUser})
} }
@ -120,6 +119,15 @@ func (strategy DefaultUserSyncStrategy) ExecuteSyncRequests(requests []spec.PgSy
if err := strategy.alterPgUser(request.User, db); err != nil { if err := strategy.alterPgUser(request.User, db); err != nil {
reqretries = append(reqretries, request) reqretries = append(reqretries, request)
errors = append(errors, fmt.Sprintf("could not alter user %q: %v", request.User.Name, err)) errors = append(errors, fmt.Sprintf("could not alter user %q: %v", request.User.Name, err))
// XXX: we do not allow additional owner roles to be members of database owners
// if ALTER fails it could be because of the wrong membership (check #1862 for details)
// so in any case try to revoke the database owner from the additional owner roles
// the initial ALTER statement will be retried once and should work then
if request.User.IsDbOwner && len(strategy.AdditionalOwnerRoles) > 0 {
if err := resolveOwnerMembership(request.User, strategy.AdditionalOwnerRoles, db); err != nil {
errors = append(errors, fmt.Sprintf("could not resolve owner membership for %q: %v", request.User.Name, err))
}
}
} }
case spec.PGSyncAlterSet: case spec.PGSyncAlterSet:
if err := strategy.alterPgUserSet(request.User, db); err != nil { if err := strategy.alterPgUserSet(request.User, db); err != nil {
@ -152,6 +160,21 @@ func (strategy DefaultUserSyncStrategy) ExecuteSyncRequests(requests []spec.PgSy
return nil return nil
} }
func resolveOwnerMembership(dbOwner spec.PgUser, additionalOwners []string, db *sql.DB) error {
errors := make([]string, 0)
for _, additionalOwner := range additionalOwners {
if err := revokeRole(dbOwner.Name, additionalOwner, db); err != nil {
errors = append(errors, fmt.Sprintf("could not revoke %q from %q: %v", dbOwner.Name, additionalOwner, err))
}
}
if len(errors) > 0 {
return fmt.Errorf("could not resolve membership between %q and additional owner roles: %v", dbOwner.Name, strings.Join(errors, `', '`))
}
return nil
}
func (strategy DefaultUserSyncStrategy) alterPgUserSet(user spec.PgUser, db *sql.DB) error { func (strategy DefaultUserSyncStrategy) alterPgUserSet(user spec.PgUser, db *sql.DB) error {
queries := produceAlterRoleSetStmts(user) queries := produceAlterRoleSetStmts(user)
query := fmt.Sprintf(doBlockStmt, strings.Join(queries, ";")) query := fmt.Sprintf(doBlockStmt, strings.Join(queries, ";"))
@ -272,6 +295,16 @@ func quoteMemberList(user spec.PgUser) string {
return strings.Join(memberof, ",") return strings.Join(memberof, ",")
} }
func revokeRole(groupRole, role string, db *sql.DB) error {
revokeStmt := fmt.Sprintf(revokeFromUserSQL, groupRole, role)
if _, err := db.Exec(fmt.Sprintf(doBlockStmt, revokeStmt)); err != nil {
return err
}
return nil
}
// quoteParameterValue quotes values to be used at ALTER ROLE SET param = value if necessary // quoteParameterValue quotes values to be used at ALTER ROLE SET param = value if necessary
func quoteParameterValue(name, val string) string { func quoteParameterValue(name, val string) string {
start := val[0] start := val[0]

View File

@ -8,7 +8,6 @@ import (
"encoding/base64" "encoding/base64"
"encoding/hex" "encoding/hex"
"fmt" "fmt"
"math"
"math/big" "math/big"
"math/rand" "math/rand"
"reflect" "reflect"
@ -324,18 +323,18 @@ func testNil(values ...*int32) bool {
return false return false
} }
// Convert int to IntOrString type // ToIntStr converts int to IntOrString type
func ToIntStr(val int) *intstr.IntOrString { func ToIntStr(val int) *intstr.IntOrString {
b := intstr.FromInt(val) b := intstr.FromInt(val)
return &b return &b
} }
// Get int from IntOrString and return max int if string // Bool2Int converts bool to int
func IntFromIntStr(intOrStr intstr.IntOrString) int { func Bool2Int(flag bool) int {
if intOrStr.Type == 1 { if flag {
return math.MaxInt return 1
} }
return intOrStr.IntValue() return 0
} }
// MaxInt32 : Return maximum of two integers provided via pointers. If one value // MaxInt32 : Return maximum of two integers provided via pointers. If one value

View File

@ -1,6 +1,6 @@
{ {
"name": "postgres-operator-ui", "name": "postgres-operator-ui",
"version": "1.8.0", "version": "1.8.1",
"description": "PostgreSQL Operator UI", "description": "PostgreSQL Operator UI",
"main": "src/app.js", "main": "src/app.js",
"config": { "config": {

View File

@ -51,7 +51,23 @@ postgresqls
th(style='width: 140px') CPU th(style='width: 140px') CPU
th(style='width: 130px') Memory th(style='width: 130px') Memory
th(style='width: 100px') Size th(style='width: 100px') Size
th(style='width: 120px') Cost/Month th(style='width: 100px') IOPS
th(style='width: 100px') Throughput
th(style='width: 120px')
.tooltip(style='width: 120px')
| Cost/Month
.tooltiptext
strong Cost = MAX(CPU, Memory) + rest
br
| 1 CPU core: 42.09$
br
| 1GB memory: 10.5225$
br
| 1GB volume: 0.0952$
br
| IOPS (-3000 baseline): 0.006$
br
| Throughput (-125 baseline): 0.0476$
th(style='width: 120px') th(style='width: 120px')
tbody tbody
@ -69,6 +85,8 @@ postgresqls
td { cpu } / { cpu_limit } td { cpu } / { cpu_limit }
td { memory } / { memory_limit } td { memory } / { memory_limit }
td { volume_size } td { volume_size }
td { iops }
td { throughput }
td { calcCosts(nodes, cpu, memory, volume_size, iops, throughput) }$ td { calcCosts(nodes, cpu, memory, volume_size, iops, throughput) }$
td td
@ -132,7 +150,23 @@ postgresqls
th(style='width: 140px') CPU th(style='width: 140px') CPU
th(style='width: 130px') Memory th(style='width: 130px') Memory
th(style='width: 100px') Size th(style='width: 100px') Size
th(style='width: 120px') Cost/Month th(style='width: 100px') IOPS
th(style='width: 100px') Throughput
th(style='width: 120px')
.tooltip(style='width: 120px')
| Cost/Month
.tooltiptext
strong Cost = MAX(CPU, Memory) + rest
br
| 1 CPU core: 42.09$
br
| 1GB memory: 10.5225$
br
| 1GB volume: 0.0952$
br
| IOPS (-3000 baseline): 0.006$
br
| Throughput (-125 baseline): 0.0476$
th(style='width: 120px') th(style='width: 120px')
tbody tbody
@ -152,6 +186,8 @@ postgresqls
td { cpu } / { cpu_limit } td { cpu } / { cpu_limit }
td { memory } / { memory_limit } td { memory } / { memory_limit }
td { volume_size } td { volume_size }
td { iops }
td { throughput }
td { calcCosts(nodes, cpu, memory, volume_size, iops, throughput) }$ td { calcCosts(nodes, cpu, memory, volume_size, iops, throughput) }$
td td
@ -229,28 +265,44 @@ postgresqls
const calcCosts = this.calcCosts = (nodes, cpu, memory, disk, iops, throughput) => { const calcCosts = this.calcCosts = (nodes, cpu, memory, disk, iops, throughput) => {
podcount = Math.max(nodes, opts.config.min_pods) podcount = Math.max(nodes, opts.config.min_pods)
corecost = toCores(cpu) * opts.config.cost_core corecost = toCores(cpu) * opts.config.cost_core * 30.5 * 24
memorycost = toMemory(memory) * opts.config.cost_memory memorycost = toMemory(memory) * opts.config.cost_memory * 30.5 * 24
diskcost = toDisk(disk) * opts.config.cost_ebs diskcost = toDisk(disk) * opts.config.cost_ebs
iopscost = 0 iopscost = 0
if (iops !== undefined && iops > 3000) { if (iops !== undefined && iops > opts.config.free_iops) {
iopscost = (iops - 3000) * opts.config.cost_iops if (iops > opts.config.limit_iops) {
iops = opts.config.limit_iops
}
iopscost = (iops - opts.config.free_iops) * opts.config.cost_iops
} }
throughputcost = 0 throughputcost = 0
if (throughput !== undefined && throughput > 125) { if (throughput !== undefined && throughput > opts.config.free_throughput) {
throughputcost = (throughput - 125) * opts.config.cost_throughput if (throughput > opts.config.limit_throughput) {
throughput = opts.config.limit_throughput
}
throughputcost = (throughput - opts.config.free_throughput) * opts.config.cost_throughput
} }
costs = podcount * (corecost + memorycost + diskcost + iopscost + throughputcost) costs = podcount * (Math.max(corecost, memorycost) + diskcost + iopscost + throughputcost)
return costs.toFixed(2) return costs.toFixed(2)
} }
const toDisk = this.toDisk = value => { const toDisk = this.toDisk = value => {
if(value.endsWith("Gi")) { if(value.endsWith("Mi")) {
value = value.substring(0, value.length-2)
value = Number(value) / 1000.
return value
}
else if(value.endsWith("Gi")) {
value = value.substring(0, value.length-2) value = value.substring(0, value.length-2)
value = Number(value) value = Number(value)
return value return value
} }
else if(value.endsWith("Ti")) {
value = value.substring(0, value.length-2)
value = Number(value) * 1000
return value
}
return value return value
} }
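Put together, the reworked cost formula is pods * (MAX(core, memory) + volume + paid IOPS + paid throughput), with core and memory billed per hour and converted to a month (30.5 * 24), and toDisk normalizing volume sizes to GB (Mi divided by 1000, Gi as-is, Ti times 1000). A small self-contained Go sketch of the same arithmetic, using made-up cluster resources and the default prices configured below:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Illustrative cluster: 2 pods, 1 CPU core, 4Gi memory, 100Gi volume,
        // 4000 IOPS, 200 MB/s throughput. Prices are the UI defaults.
        const (
            costCore       = 0.0575   // $ per core hour
            costMemory     = 0.014375 // $ per GB hour
            costEBS        = 0.0952   // $ per GB month
            costIOPS       = 0.006    // $ per IOPS month above the baseline
            costThroughput = 0.0476   // $ per MB/s month above the baseline
            freeIOPS       = 3000
            freeThroughput = 125
            hoursPerMonth  = 30.5 * 24
        )

        pods := 2.0
        coreCost := 1.0 * costCore * hoursPerMonth        // ≈ 42.09 $
        memoryCost := 4.0 * costMemory * hoursPerMonth    // ≈ 42.09 $
        diskCost := 100.0 * costEBS                       // 9.52 $
        iopsCost := (4000 - freeIOPS) * costIOPS          // 6.00 $
        tpCost := (200 - freeThroughput) * costThroughput // 3.57 $

        total := pods * (math.Max(coreCost, memoryCost) + diskCost + iopsCost + tpCost)
        fmt.Printf("%.2f$\n", total) // ≈ 122.36 $ per month
    }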

View File

@ -18,7 +18,7 @@ spec:
serviceAccountName: postgres-operator-ui serviceAccountName: postgres-operator-ui
containers: containers:
- name: "service" - name: "service"
image: registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.0 image: registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.1
ports: ports:
- containerPort: 8081 - containerPort: 8081
protocol: "TCP" protocol: "TCP"
@ -67,6 +67,10 @@ spec:
"cost_throughput": 0.0476, "cost_throughput": 0.0476,
"cost_core": 0.0575, "cost_core": 0.0575,
"cost_memory": 0.014375, "cost_memory": 0.014375,
"free_iops": 3000,
"free_throughput": 125,
"limit_iops": 16000,
"limit_throughput": 1000,
"postgresql_versions": [ "postgresql_versions": [
"14", "14",
"13", "13",

View File

@ -82,12 +82,16 @@ OPERATOR_CLUSTER_NAME_LABEL = getenv('OPERATOR_CLUSTER_NAME_LABEL', 'cluster-nam
OPERATOR_UI_CONFIG = getenv('OPERATOR_UI_CONFIG', '{}') OPERATOR_UI_CONFIG = getenv('OPERATOR_UI_CONFIG', '{}')
OPERATOR_UI_MAINTENANCE_CHECK = getenv('OPERATOR_UI_MAINTENANCE_CHECK', '{}') OPERATOR_UI_MAINTENANCE_CHECK = getenv('OPERATOR_UI_MAINTENANCE_CHECK', '{}')
READ_ONLY_MODE = getenv('READ_ONLY_MODE', False) in [True, 'true'] READ_ONLY_MODE = getenv('READ_ONLY_MODE', False) in [True, 'true']
RESOURCES_VISIBLE = getenv('RESOURCES_VISIBLE', True)
SPILO_S3_BACKUP_PREFIX = getenv('SPILO_S3_BACKUP_PREFIX', 'spilo/') SPILO_S3_BACKUP_PREFIX = getenv('SPILO_S3_BACKUP_PREFIX', 'spilo/')
SUPERUSER_TEAM = getenv('SUPERUSER_TEAM', 'acid') SUPERUSER_TEAM = getenv('SUPERUSER_TEAM', 'acid')
TARGET_NAMESPACE = getenv('TARGET_NAMESPACE') TARGET_NAMESPACE = getenv('TARGET_NAMESPACE')
GOOGLE_ANALYTICS = getenv('GOOGLE_ANALYTICS', False) GOOGLE_ANALYTICS = getenv('GOOGLE_ANALYTICS', False)
MIN_PODS = getenv('MIN_PODS', 2) MIN_PODS = getenv('MIN_PODS', 2)
RESOURCES_VISIBLE = getenv('RESOURCES_VISIBLE', True)
CUSTOM_MESSAGE_RED = getenv('CUSTOM_MESSAGE_RED', '')
APPLICATION_DEPLOYMENT_DOCS = getenv('APPLICATION_DEPLOYMENT_DOCS', '')
CONNECTION_DOCS = getenv('CONNECTION_DOCS', '')
# storage pricing, i.e. https://aws.amazon.com/ebs/pricing/ (e.g. Europe - Frankfurt) # storage pricing, i.e. https://aws.amazon.com/ebs/pricing/ (e.g. Europe - Frankfurt)
COST_EBS = float(getenv('COST_EBS', 0.0952)) # GB per month COST_EBS = float(getenv('COST_EBS', 0.0952)) # GB per month
@ -95,8 +99,19 @@ COST_IOPS = float(getenv('COST_IOPS', 0.006)) # IOPS per month above 3000 basel
COST_THROUGHPUT = float(getenv('COST_THROUGHPUT', 0.0476)) # MB/s per month above 125 MB/s baseline COST_THROUGHPUT = float(getenv('COST_THROUGHPUT', 0.0476)) # MB/s per month above 125 MB/s baseline
# compute costs, i.e. https://www.ec2instances.info/?region=eu-central-1&selected=m5.2xlarge # compute costs, i.e. https://www.ec2instances.info/?region=eu-central-1&selected=m5.2xlarge
COST_CORE = 30.5 * 24 * float(getenv('COST_CORE', 0.0575)) # Core per hour m5.2xlarge / 8. COST_CORE = float(getenv('COST_CORE', 0.0575)) # Core per hour m5.2xlarge / 8.
COST_MEMORY = 30.5 * 24 * float(getenv('COST_MEMORY', 0.014375)) # Memory GB m5.2xlarge / 32. COST_MEMORY = float(getenv('COST_MEMORY', 0.014375)) # Memory GB m5.2xlarge / 32.
# free baseline and upper limits for IOPS and throughput
FREE_IOPS = float(getenv('FREE_IOPS', 3000))
LIMIT_IOPS = float(getenv('LIMIT_IOPS', 16000))
FREE_THROUGHPUT = float(getenv('FREE_THROUGHPUT', 125))
LIMIT_THROUGHPUT = float(getenv('LIMIT_THROUGHPUT', 1000))
# default values for CPU and memory requests and limits
DEFAULT_MEMORY = getenv('DEFAULT_MEMORY', '300Mi')
DEFAULT_MEMORY_LIMIT = getenv('DEFAULT_MEMORY_LIMIT', '300Mi')
DEFAULT_CPU = getenv('DEFAULT_CPU', '10m')
DEFAULT_CPU_LIMIT = getenv('DEFAULT_CPU_LIMIT', '300m')
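Note that COST_CORE and COST_MEMORY are now kept as hourly rates and the conversion to a month happens in the UI (the 30.5 * 24 factor in calcCosts above): 0.0575 * 24 * 30.5 ≈ 42.09 $ per core and month and 0.014375 * 24 * 30.5 ≈ 10.5225 $ per GB and month, which are exactly the figures quoted in the cost tooltip.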
WALE_S3_ENDPOINT = getenv( WALE_S3_ENDPOINT = getenv(
'WALE_S3_ENDPOINT', 'WALE_S3_ENDPOINT',
@ -304,29 +319,34 @@ DEFAULT_UI_CONFIG = {
'nat_gateways_visible': True, 'nat_gateways_visible': True,
'users_visible': True, 'users_visible': True,
'databases_visible': True, 'databases_visible': True,
'resources_visible': True, 'resources_visible': RESOURCES_VISIBLE,
'postgresql_versions': ['11','12','13'], 'postgresql_versions': ['11','12','13','14'],
'dns_format_string': '{0}.{1}.{2}', 'dns_format_string': '{0}.{1}.{2}',
'pgui_link': '', 'pgui_link': '',
'static_network_whitelist': {}, 'static_network_whitelist': {},
'read_only_mode': READ_ONLY_MODE,
'superuser_team': SUPERUSER_TEAM,
'target_namespace': TARGET_NAMESPACE,
'connection_docs': CONNECTION_DOCS,
'application_deployment_docs': APPLICATION_DEPLOYMENT_DOCS,
'cost_ebs': COST_EBS, 'cost_ebs': COST_EBS,
'cost_iops': COST_IOPS, 'cost_iops': COST_IOPS,
'cost_throughput': COST_THROUGHPUT, 'cost_throughput': COST_THROUGHPUT,
'cost_core': COST_CORE, 'cost_core': COST_CORE,
'cost_memory': COST_MEMORY, 'cost_memory': COST_MEMORY,
'min_pods': MIN_PODS 'min_pods': MIN_PODS,
'free_iops': FREE_IOPS,
'free_throughput': FREE_THROUGHPUT,
'limit_iops': LIMIT_IOPS,
'limit_throughput': LIMIT_THROUGHPUT
} }
@app.route('/config') @app.route('/config')
@authorize @authorize
def get_config(): def get_config():
config = loads(OPERATOR_UI_CONFIG) or DEFAULT_UI_CONFIG config = DEFAULT_UI_CONFIG.copy()
config['read_only_mode'] = READ_ONLY_MODE config.update(loads(OPERATOR_UI_CONFIG))
config['resources_visible'] = RESOURCES_VISIBLE
config['superuser_team'] = SUPERUSER_TEAM
config['target_namespace'] = TARGET_NAMESPACE
config['min_pods'] = MIN_PODS
config['namespaces'] = ( config['namespaces'] = (
[TARGET_NAMESPACE] [TARGET_NAMESPACE]
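With this change a non-empty OPERATOR_UI_CONFIG is merged over DEFAULT_UI_CONFIG instead of replacing it: for example (made-up value), OPERATOR_UI_CONFIG='{"min_pods": 3}' now overrides only min_pods while every other key keeps its default, whereas before it replaced the whole config and only the handful of explicitly re-set keys such as read_only_mode and superuser_team were applied on top.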
@ -961,11 +981,13 @@ def get_operator_get_logs(worker: int):
@app.route('/operator/clusters/<namespace>/<cluster>/logs') @app.route('/operator/clusters/<namespace>/<cluster>/logs')
@authorize @authorize
def get_operator_get_logs_per_cluster(namespace: str, cluster: str): def get_operator_get_logs_per_cluster(namespace: str, cluster: str):
team, cluster_name = cluster.split('-', 1)
# team id might contain hyphens, try to find correct team name
user_teams = get_teams_for_user(session.get('user_name', '')) user_teams = get_teams_for_user(session.get('user_name', ''))
for user_team in user_teams: for user_team in user_teams:
if cluster.find(user_team) == 0: if cluster.find(user_team + '-') == 0:
team = cluster[:len(user_team)] team = cluster[:len(user_team)]
cluster_name = cluster[len(user_team)+1:] cluster_name = cluster[len(user_team + '-'):]
break break
return proxy_operator(f'/clusters/{team}/{namespace}/{cluster_name}/logs/') return proxy_operator(f'/clusters/{team}/{namespace}/{cluster_name}/logs/')
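The log endpoint no longer derives the team from cluster.split('-', 1), which broke for team ids containing hyphens; it now matches the cluster name against the caller's teams and requires a trailing '-' after the team id. A rough Go rendition of that prefix rule, with made-up names, just to illustrate the matching:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Hypothetical data: the user's team id contains a hyphen.
        userTeams := []string{"acid-team"}
        cluster := "acid-team-db13"

        var team, clusterName string
        for _, t := range userTeams {
            // Match "<team>-" as a prefix instead of splitting on the first "-",
            // so hyphenated team ids resolve to the correct cluster name.
            if strings.HasPrefix(cluster, t+"-") {
                team = t
                clusterName = strings.TrimPrefix(cluster, t+"-")
                break
            }
        }
        fmt.Println(team, clusterName) // acid-team db13
    }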

View File

@ -64,3 +64,56 @@ label {
td { td {
vertical-align: middle !important; vertical-align: middle !important;
} }
.tooltip {
position: relative;
display: inline-block;
opacity: 1;
font-size: 14px;
font-weight: bold;
}
.tooltip:after {
content: '?';
display: inline-block;
font-family: sans-serif;
font-weight: bold;
text-align: center;
width: 16px;
height: 16px;
font-size: 12px;
line-height: 16px;
border-radius: 12px;
padding: 0px;
color: white;
background: black;
border: 1px solid black;
}
.tooltip .tooltiptext {
visibility: hidden;
width: 250px;
background-color: white;
color: #000;
text-align: justify;
border-radius: 6px;
padding: 10px 10px;
position: absolute;
z-index: 1;
bottom: 150%;
left: 50%;
margin-left: -120px;
border: 1px solid black;
font-weight: normal;
}
.tooltip .tooltiptext::after {
content: "";
position: absolute;
top: 100%;
left: 50%;
margin-left: -5px;
border-width: 5px;
border-style: solid;
border-color: black transparent transparent transparent;
}
.tooltip:hover .tooltiptext {
visibility: visible;
}