Merge branch 'master' into gh-pages
This commit is contained in: commit 42020a473e
@@ -9,7 +9,7 @@ assignees: ''

Please, answer some short questions which should help us to understand your problem / question better?

- **Which image of the operator are you using?** e.g. registry.opensource.zalan.do/acid/postgres-operator:v1.8.0
- **Which image of the operator are you using?** e.g. registry.opensource.zalan.do/acid/postgres-operator:v1.8.1
- **Where do you run it - cloud or metal? Kubernetes or OpenShift?** [AWS K8s | GCP ... | Bare Metal K8s]
- **Are you running Postgres Operator in production?** [yes | no]
- **Type of issue?** [Bug report, question, feature request, etc.]
@@ -1,2 +1,2 @@
# global owners
* @sdudoladov @Jan-M @CyberDem0n @FxKu @jopadi
* @sdudoladov @Jan-M @CyberDem0n @FxKu @jopadi @idanovinda
@@ -2,3 +2,4 @@ Sergey Dudoladov <sergey.dudoladov@zalando.de>
Felix Kunde <felix.kunde@zalando.de>
Jan Mussler <jan.mussler@zalando.de>
Jociele Padilha <jociele.padilha@zalando.de>
Ida Novindasari <ida.novindasari@zalando.de>
@@ -1,7 +1,7 @@
apiVersion: v1
apiVersion: v2
name: postgres-operator-ui
version: 1.8.0
appVersion: 1.8.0
version: 1.8.1
appVersion: 1.8.1
home: https://github.com/zalando/postgres-operator
description: Postgres Operator UI provides a graphical interface for a convenient database-as-a-service user experience
keywords:
@ -1,9 +1,32 @@
|
|||
apiVersion: v1
|
||||
entries:
|
||||
postgres-operator-ui:
|
||||
- apiVersion: v2
|
||||
appVersion: 1.8.1
|
||||
created: "2022-05-19T16:03:34.70846034+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: d26342e385ea51a0fbfbe23477999863e9489664ae803ea5c56da8897db84d24
|
||||
home: https://github.com/zalando/postgres-operator
|
||||
keywords:
|
||||
- postgres
|
||||
- operator
|
||||
- ui
|
||||
- cloud-native
|
||||
- patroni
|
||||
- spilo
|
||||
maintainers:
|
||||
- email: opensource@zalando.de
|
||||
name: Zalando
|
||||
name: postgres-operator-ui
|
||||
sources:
|
||||
- https://github.com/zalando/postgres-operator
|
||||
urls:
|
||||
- postgres-operator-ui-1.8.1.tgz
|
||||
version: 1.8.1
|
||||
- apiVersion: v1
|
||||
appVersion: 1.8.0
|
||||
created: "2022-04-20T15:39:16.094386569+02:00"
|
||||
created: "2022-05-19T16:03:34.707925712+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: d4a7b40c23fd167841cc28342afdbd5ecc809181913a5c31061c83139187f148
|
||||
|
|
@ -26,7 +49,7 @@ entries:
|
|||
version: 1.8.0
|
||||
- apiVersion: v1
|
||||
appVersion: 1.7.1
|
||||
created: "2022-04-20T15:39:16.093853803+02:00"
|
||||
created: "2022-05-19T16:03:34.707388723+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: 97aed1a1d37cd5f8441eea9522f38e56cc829786ad2134c437a5e6a15c995869
|
||||
|
|
@ -49,7 +72,7 @@ entries:
|
|||
version: 1.7.1
|
||||
- apiVersion: v1
|
||||
appVersion: 1.7.0
|
||||
created: "2022-04-20T15:39:16.093334397+02:00"
|
||||
created: "2022-05-19T16:03:34.706864701+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: 37fba1968347daad393dbd1c6ee6e5b6a24d1095f972c0102197531c62dcada8
|
||||
|
|
@ -72,7 +95,7 @@ entries:
|
|||
version: 1.7.0
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.3
|
||||
created: "2022-04-20T15:39:16.092419178+02:00"
|
||||
created: "2022-05-19T16:03:34.705931681+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: 08b810aa632dcc719e4785ef184e391267f7c460caa99677f2d00719075aac78
|
||||
|
|
@ -95,7 +118,7 @@ entries:
|
|||
version: 1.6.3
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.2
|
||||
created: "2022-04-20T15:39:16.091945123+02:00"
|
||||
created: "2022-05-19T16:03:34.705441492+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: 14d1559bb0bd1e1e828f2daaaa6f6ac9ffc268d79824592c3589b55dd39241f6
|
||||
|
|
@ -118,7 +141,7 @@ entries:
|
|||
version: 1.6.2
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.1
|
||||
created: "2022-04-20T15:39:16.0914401+02:00"
|
||||
created: "2022-05-19T16:03:34.704908895+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: 3d321352f2f1e7bb7450aa8876e3d818aa9f9da9bd4250507386f0490f2c1969
|
||||
|
|
@ -141,7 +164,7 @@ entries:
|
|||
version: 1.6.1
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.0
|
||||
created: "2022-04-20T15:39:16.090887513+02:00"
|
||||
created: "2022-05-19T16:03:34.704432119+02:00"
|
||||
description: Postgres Operator UI provides a graphical interface for a convenient
|
||||
database-as-a-service user experience
|
||||
digest: 1e0aa1e7db3c1daa96927ffbf6fdbcdb434562f961833cb5241ddbe132220ee4
|
||||
|
|
@ -162,4 +185,4 @@ entries:
|
|||
urls:
|
||||
- postgres-operator-ui-1.6.0.tgz
|
||||
version: 1.6.0
|
||||
generated: "2022-04-20T15:39:16.0877032+02:00"
|
||||
generated: "2022-05-19T16:03:34.70375145+02:00"
|
||||
|
|
|
|||
Binary file not shown.
|
|
@@ -8,7 +8,7 @@ replicaCount: 1
image:
registry: registry.opensource.zalan.do
repository: acid/postgres-operator-ui
tag: v1.8.0
tag: v1.8.1
pullPolicy: "IfNotPresent"

# Optionally specify an array of imagePullSecrets.
@@ -1,7 +1,7 @@
apiVersion: v1
apiVersion: v2
name: postgres-operator
version: 1.8.0
appVersion: 1.8.0
version: 1.8.1
appVersion: 1.8.1
home: https://github.com/zalando/postgres-operator
description: Postgres Operator creates and manages PostgreSQL clusters running in Kubernetes
keywords:
@@ -450,7 +450,7 @@ spec:
properties:
logical_backup_docker_image:
type: string
default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
logical_backup_google_application_credentials:
type: string
logical_backup_job_prefix:
@@ -479,7 +479,6 @@ spec:
- standby_host
streams:
type: array
nullable: true
items:
type: object
required:

@@ -588,12 +587,12 @@ spec:
- SUPERUSER
- nosuperuser
- NOSUPERUSER
usersWithPasswordRotation:
usersWithInPlaceSecretRotation:
type: array
nullable: true
items:
type: string
usersWithInPlacePasswordRotation:
usersWithSecretRotation:
type: array
nullable: true
items:

@@ -612,17 +611,26 @@ spec:
type: array
items:
type: object
required:
- key
- operator
properties:
key:
type: string
operator:
type: string
enum:
- DoesNotExists
- Exists
- In
- NotIn
values:
type: array
items:
type: string
matchLabels:
type: object
x-kubernetes-preserve-unknown-fields: true
size:
type: string
pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
@ -1,9 +1,31 @@
|
|||
apiVersion: v1
|
||||
entries:
|
||||
postgres-operator:
|
||||
- apiVersion: v2
|
||||
appVersion: 1.8.1
|
||||
created: "2022-05-19T16:01:17.868770557+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: ee0c3bb6ba72fa4289ba3b1c6060e5b312dd023faba2a61b4cb7d9e5e2cc57a5
|
||||
home: https://github.com/zalando/postgres-operator
|
||||
keywords:
|
||||
- postgres
|
||||
- operator
|
||||
- cloud-native
|
||||
- patroni
|
||||
- spilo
|
||||
maintainers:
|
||||
- email: opensource@zalando.de
|
||||
name: Zalando
|
||||
name: postgres-operator
|
||||
sources:
|
||||
- https://github.com/zalando/postgres-operator
|
||||
urls:
|
||||
- postgres-operator-1.8.1.tgz
|
||||
version: 1.8.1
|
||||
- apiVersion: v1
|
||||
appVersion: 1.8.0
|
||||
created: "2022-04-14T15:02:07.818613578+02:00"
|
||||
created: "2022-05-19T16:01:17.866519324+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: 3ae232cf009e09aa2ad11c171484cd2f1b72e63c59735e58fbe2b6eb842f4c86
|
||||
|
|
@ -25,7 +47,7 @@ entries:
|
|||
version: 1.8.0
|
||||
- apiVersion: v1
|
||||
appVersion: 1.7.1
|
||||
created: "2022-04-14T15:02:07.817076674+02:00"
|
||||
created: "2022-05-19T16:01:17.863939923+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: 7262563bec0b058e669ae6bcff0226e33fa9ece9c41ac46a53274046afe7700c
|
||||
|
|
@ -47,7 +69,7 @@ entries:
|
|||
version: 1.7.1
|
||||
- apiVersion: v1
|
||||
appVersion: 1.7.0
|
||||
created: "2022-04-14T15:02:07.815161671+02:00"
|
||||
created: "2022-05-19T16:01:17.861563817+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: c3e99fb94305f81484b8b1af18eefb78681f3b5d057d5ad10565e4afb7c65ffe
|
||||
|
|
@ -69,7 +91,7 @@ entries:
|
|||
version: 1.7.0
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.3
|
||||
created: "2022-04-14T15:02:07.813087244+02:00"
|
||||
created: "2022-05-19T16:01:17.857400801+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: ea08f991bf23c9ad114bca98ebcbe3e2fa15beab163061399394905eaee89b35
|
||||
|
|
@ -91,7 +113,7 @@ entries:
|
|||
version: 1.6.3
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.2
|
||||
created: "2022-04-14T15:02:07.8114121+02:00"
|
||||
created: "2022-05-19T16:01:17.853990686+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: d886f8a0879ca07d1e5246ee7bc55710e1c872f3977280fe495db6fc2057a7f4
|
||||
|
|
@ -113,7 +135,7 @@ entries:
|
|||
version: 1.6.2
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.1
|
||||
created: "2022-04-14T15:02:07.809829808+02:00"
|
||||
created: "2022-05-19T16:01:17.851310112+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: 4ba5972cd486dcaa2d11c5613a6f97f6b7b831822e610fe9e10a57ea1db23556
|
||||
|
|
@ -135,7 +157,7 @@ entries:
|
|||
version: 1.6.1
|
||||
- apiVersion: v1
|
||||
appVersion: 1.6.0
|
||||
created: "2022-04-14T15:02:07.808307624+02:00"
|
||||
created: "2022-05-19T16:01:17.848853103+02:00"
|
||||
description: Postgres Operator creates and manages PostgreSQL clusters running
|
||||
in Kubernetes
|
||||
digest: f52149718ea364f46b4b9eec9a65f6253ad182bb78df541d14cd5277b9c8a8c3
|
||||
|
|
@ -155,4 +177,4 @@ entries:
|
|||
urls:
|
||||
- postgres-operator-1.6.0.tgz
|
||||
version: 1.6.0
|
||||
generated: "2022-04-14T15:02:07.806370532+02:00"
|
||||
generated: "2022-05-19T16:01:17.843701398+02:00"
|
||||
|
|
|
|||
Binary file not shown.
|
|
@@ -1,7 +1,7 @@
image:
registry: registry.opensource.zalan.do
repository: acid/postgres-operator
tag: v1.8.0
tag: v1.8.1
pullPolicy: "IfNotPresent"

# Optionally specify an array of imagePullSecrets.
@@ -1286,7 +1286,7 @@ make docker

# build in image in minikube docker env
eval $(minikube docker-env)
docker build -t registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.0 .
docker build -t registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.1 .

# apply UI manifests next to a running Postgres Operator
kubectl apply -f manifests/
@@ -178,13 +178,18 @@ under the `users` key.
`standby`.

* **additional_owner_roles**
Specifies database roles that will become members of all database owners.
Then owners can use `SET ROLE` to obtain privileges of these roles to e.g.
create/update functionality from extensions as part of a migration script.
Note, that roles listed here should be preconfigured in the docker image
and already exist in the database cluster on startup. One such role can be
`cron_admin` which is provided by the Spilo docker image to set up cron
jobs inside the `postgres` database. Default is `empty`.
Specifies database roles that will be granted to all database owners. Owners
can then use `SET ROLE` to obtain privileges of these roles to e.g. create
or update functionality from extensions as part of a migration script. One
such role can be `cron_admin` which is provided by the Spilo docker image to
set up cron jobs inside the `postgres` database. In general, roles listed
here should be preconfigured in the docker image and already exist in the
database cluster on startup. Otherwise, syncing roles will return an error
on each cluster sync process. Alternatively, you have to create the role and
do the GRANT manually. Note, the operator will not allow additional owner
roles to be members of database owners because it should be vice versa. If
the operator cannot set up the correct membership it tries to revoke all
additional owner roles from database owners. Default is `empty`.

* **enable_password_rotation**
For all `LOGIN` roles that are not database owners the operator can rotate

@@ -679,7 +684,7 @@ grouped under the `logical_backup` key.
runs `pg_dumpall` on a replica if possible and uploads compressed results to
an S3 bucket under the key `/spilo/pg_cluster_name/cluster_k8s_uuid/logical_backups`.
The default image is the same image built with the Zalando-internal CI
pipeline. Default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
pipeline. Default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"

* **logical_backup_google_application_credentials**
Specifies the path of the google cloud service account json file. Default is empty.
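The reworded `additional_owner_roles` entry above describes the intended flow: the operator grants the configured roles (e.g. `cron_admin`) to every database owner, and the owner then switches to them with `SET ROLE` inside a migration. A minimal sketch of such a migration, assuming a hypothetical owner role `foo_owner`, the Spilo-provided `cron_admin` role, the `pg_cron` extension, and an illustrative connection string (lib/pq is just one possible driver):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // any PostgreSQL driver works; lib/pq is used only for illustration
)

func main() {
	// Connect as the database owner; all DSN values are placeholders.
	db, err := sql.Open("postgres", "host=acid-minimal-cluster dbname=foo user=foo_owner sslmode=require")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Run the migration in one transaction so SET ROLE stays on a single connection.
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	// The operator granted cron_admin to foo_owner, so the owner may switch to it
	// and use its privileges, e.g. to register a cron job (assumes pg_cron is installed).
	if _, err := tx.Exec(`SET ROLE cron_admin`); err != nil {
		log.Fatal(err)
	}
	if _, err := tx.Exec(`SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM')`); err != nil {
		log.Fatal(err)
	}
	if _, err := tx.Exec(`RESET ROLE`); err != nil {
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```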
@@ -12,8 +12,8 @@ from kubernetes import client
from tests.k8s_api import K8s
from kubernetes.client.rest import ApiException

SPILO_CURRENT = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.1"
SPILO_LAZY = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.2"
SPILO_CURRENT = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.3"
SPILO_LAZY = "registry.opensource.zalan.do/acid/spilo-14-e2e:0.4"


def to_selector(labels):

@@ -161,10 +161,21 @@ class EndToEndTestCase(unittest.TestCase):
@timeout_decorator.timeout(TEST_TIMEOUT_SEC)
def test_additional_owner_roles(self):
'''
Test adding additional member roles to existing database owner roles
Test granting additional roles to existing database owners
'''
k8s = self.k8s

# first test - wait for the operator to get in sync and set everything up
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"},
"Operator does not get in sync")
leader = k8s.get_cluster_leader_pod()

# produce wrong membership for cron_admin
grant_dbowner = """
GRANT bar_owner TO cron_admin;
"""
self.query_database(leader.metadata.name, "postgres", grant_dbowner)

# enable PostgresTeam CRD and lower resync
owner_roles = {
"data": {

@@ -175,16 +186,15 @@ class EndToEndTestCase(unittest.TestCase):
self.eventuallyEqual(lambda: k8s.get_operator_state(), {"0": "idle"},
"Operator does not get in sync")

leader = k8s.get_cluster_leader_pod()
owner_query = """
SELECT a2.rolname
FROM pg_catalog.pg_authid a
JOIN pg_catalog.pg_auth_members am
ON a.oid = am.member
AND a.rolname = 'cron_admin'
AND a.rolname IN ('zalando', 'bar_owner', 'bar_data_owner')
JOIN pg_catalog.pg_authid a2
ON a2.oid = am.roleid
WHERE a2.rolname IN ('zalando', 'bar_owner', 'bar_data_owner');
WHERE a2.rolname = 'cron_admin';
"""
self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", owner_query)), 3,
"Not all additional users found in database", 10, 5)
@@ -5,7 +5,7 @@ go 1.17
require (
github.com/spf13/cobra v1.2.1
github.com/spf13/viper v1.9.0
github.com/zalando/postgres-operator v1.8.0
github.com/zalando/postgres-operator v1.8.1
k8s.io/api v0.22.4
k8s.io/apiextensions-apiserver v0.22.4
k8s.io/apimachinery v0.22.4
@@ -71,7 +71,7 @@ data:
# kube_iam_role: ""
# kubernetes_use_configmaps: "false"
# log_s3_bucket: ""
logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
# logical_backup_google_application_credentials: ""
logical_backup_job_prefix: "logical-backup-"
logical_backup_provider: "s3"
@@ -448,7 +448,7 @@ spec:
properties:
logical_backup_docker_image:
type: string
default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
default: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
logical_backup_google_application_credentials:
type: string
logical_backup_job_prefix:
@@ -19,7 +19,7 @@ spec:
serviceAccountName: postgres-operator
containers:
- name: postgres-operator
image: registry.opensource.zalan.do/acid/postgres-operator:v1.8.0
image: registry.opensource.zalan.do/acid/postgres-operator:v1.8.1
imagePullPolicy: IfNotPresent
resources:
requests:
@@ -144,7 +144,7 @@ configuration:
# wal_gs_bucket: ""
# wal_s3_bucket: ""
logical_backup:
logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.1"
# logical_backup_google_application_credentials: ""
logical_backup_job_prefix: "logical-backup-"
logical_backup_provider: "s3"
@ -477,7 +477,6 @@ spec:
|
|||
- standby_host
|
||||
streams:
|
||||
type: array
|
||||
nullable: true
|
||||
items:
|
||||
type: object
|
||||
required:
|
||||
|
|
@ -586,12 +585,12 @@ spec:
|
|||
- SUPERUSER
|
||||
- nosuperuser
|
||||
- NOSUPERUSER
|
||||
usersWithPasswordRotation:
|
||||
usersWithInPlaceSecretRotation:
|
||||
type: array
|
||||
nullable: true
|
||||
items:
|
||||
type: string
|
||||
usersWithInPlacePasswordRotation:
|
||||
usersWithSecretRotation:
|
||||
type: array
|
||||
nullable: true
|
||||
items:
|
||||
|
|
@ -610,17 +609,26 @@ spec:
|
|||
type: array
|
||||
items:
|
||||
type: object
|
||||
required:
|
||||
- key
|
||||
- operator
|
||||
properties:
|
||||
key:
|
||||
type: string
|
||||
operator:
|
||||
type: string
|
||||
enum:
|
||||
- DoesNotExists
|
||||
- Exists
|
||||
- In
|
||||
- NotIn
|
||||
values:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
matchLabels:
|
||||
type: object
|
||||
x-kubernetes-preserve-unknown-fields: true
|
||||
size:
|
||||
type: string
|
||||
pattern: '^(\d+(e\d+)?|\d+(\.\d+)?(e\d+)?[EPTGMK]i?)$'
|
||||
|
|
|
|||
|
|
@ -957,7 +957,7 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
|
|||
},
|
||||
},
|
||||
},
|
||||
"usersWithSecretRotation": {
|
||||
"usersWithInPlaceSecretRotation": {
|
||||
Type: "array",
|
||||
Nullable: true,
|
||||
Items: &apiextv1.JSONSchemaPropsOrArray{
|
||||
|
|
@ -966,7 +966,7 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
|
|||
},
|
||||
},
|
||||
},
|
||||
"usersWithInPlaceSecretRotation": {
|
||||
"usersWithSecretRotation": {
|
||||
Type: "array",
|
||||
Nullable: true,
|
||||
Items: &apiextv1.JSONSchemaPropsOrArray{
|
||||
|
|
@ -990,7 +990,7 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
|
|||
Items: &apiextv1.JSONSchemaPropsOrArray{
|
||||
Schema: &apiextv1.JSONSchemaProps{
|
||||
Type: "object",
|
||||
Required: []string{"key", "operator", "values"},
|
||||
Required: []string{"key", "operator"},
|
||||
Properties: map[string]apiextv1.JSONSchemaProps{
|
||||
"key": {
|
||||
Type: "string",
|
||||
|
|
@ -999,16 +999,16 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
|
|||
Type: "string",
|
||||
Enum: []apiextv1.JSON{
|
||||
{
|
||||
Raw: []byte(`"In"`),
|
||||
},
|
||||
{
|
||||
Raw: []byte(`"NotIn"`),
|
||||
Raw: []byte(`"DoesNotExist"`),
|
||||
},
|
||||
{
|
||||
Raw: []byte(`"Exists"`),
|
||||
},
|
||||
{
|
||||
Raw: []byte(`"DoesNotExist"`),
|
||||
Raw: []byte(`"In"`),
|
||||
},
|
||||
{
|
||||
Raw: []byte(`"NotIn"`),
|
||||
},
|
||||
},
|
||||
},
|
||||
|
|
|
|||
|
|
@@ -238,6 +238,7 @@ type ConnectionPooler struct {
*Resources `json:"resources,omitempty"`
}

// Stream defines properties for creating FabricEventStream resources
type Stream struct {
ApplicationId string `json:"applicationId"`
Database string `json:"database"`

@@ -246,6 +247,7 @@ type Stream struct {
BatchSize *uint32 `json:"batchSize,omitempty"`
}

// StreamTable defines properties of outbox tables for FabricEventStreams
type StreamTable struct {
EventType string `json:"eventType"`
IdColumn *string `json:"idColumn,omitempty"`
@@ -134,7 +134,9 @@ func New(cfg Config, kubeClient k8sutil.KubernetesClient, pgSpec acidv1.Postgres
Endpoints: make(map[PostgresRole]*v1.Endpoints)},
userSyncStrategy: users.DefaultUserSyncStrategy{
PasswordEncryption: passwordEncryption,
RoleDeletionSuffix: cfg.OpConfig.RoleDeletionSuffix},
RoleDeletionSuffix: cfg.OpConfig.RoleDeletionSuffix,
AdditionalOwnerRoles: cfg.OpConfig.AdditionalOwnerRoles,
},
deleteOptions: metav1.DeleteOptions{PropagationPolicy: &deletePropagationPolicy},
podEventsQueue: podEventsQueue,
KubeClient: kubeClient,

@@ -1030,12 +1032,20 @@ func (c *Cluster) processPodEvent(obj interface{}) error {
return fmt.Errorf("could not cast to PodEvent")
}

// can only take lock when (un)registerPodSubscriber is finshed
c.podSubscribersMu.RLock()
subscriber, ok := c.podSubscribers[spec.NamespacedName(event.PodName)]
c.podSubscribersMu.RUnlock()
if ok {
subscriber <- event
select {
case subscriber <- event:
default:
// ending up here when there is no receiver on the channel (i.e. waitForPodLabel finished)
// avoids blocking channel: https://gobyexample.com/non-blocking-channel-operations
}
}
// hold lock for the time of processing the event to avoid race condition
// with unregisterPodSubscriber closing the channel (see #1876)
c.podSubscribersMu.RUnlock()

return nil
}

@@ -1308,28 +1318,15 @@ func (c *Cluster) initRobotUsers() error {
}

func (c *Cluster) initAdditionalOwnerRoles() {
for _, additionalOwner := range c.OpConfig.AdditionalOwnerRoles {
// fetch all database owners the additional should become a member of
memberOf := make([]string, 0)
for username, pgUser := range c.pgUsers {
if pgUser.IsDbOwner {
memberOf = append(memberOf, username)
}
if len(c.OpConfig.AdditionalOwnerRoles) == 0 {
return
}

if len(memberOf) > 0 {
namespace := c.Namespace
additionalOwnerPgUser := spec.PgUser{
Origin: spec.RoleOriginSpilo,
MemberOf: memberOf,
Name: additionalOwner,
Namespace: namespace,
}
if currentRole, present := c.pgUsers[additionalOwner]; present {
c.pgUsers[additionalOwner] = c.resolveNameConflict(&currentRole, &additionalOwnerPgUser)
} else {
c.pgUsers[additionalOwner] = additionalOwnerPgUser
}
// fetch database owners and assign additional owner roles
for username, pgUser := range c.pgUsers {
if pgUser.IsDbOwner {
pgUser.MemberOf = append(pgUser.MemberOf, c.OpConfig.AdditionalOwnerRoles...)
c.pgUsers[username] = pgUser
}
}
}

@@ -1512,34 +1509,16 @@ func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) e
var err error
c.logger.Debugf("switching over from %q to %q", curMaster.Name, candidate)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switching over from %q to %q", curMaster.Name, candidate)

var wg sync.WaitGroup

podLabelErr := make(chan error)
stopCh := make(chan struct{})

wg.Add(1)

go func() {
defer wg.Done()
ch := c.registerPodSubscriber(candidate)
defer c.unregisterPodSubscriber(candidate)

role := Master

select {
case <-stopCh:
case podLabelErr <- func() (err2 error) {
_, err2 = c.waitForPodLabel(ch, stopCh, &role)
return
}():
}
}()
defer close(stopCh)

if err = c.patroni.Switchover(curMaster, candidate.Name); err == nil {
c.logger.Debugf("successfully switched over from %q to %q", curMaster.Name, candidate)
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Successfully switched over from %q to %q", curMaster.Name, candidate)
if err = <-podLabelErr; err != nil {
_, err = c.waitForPodLabel(ch, stopCh, nil)
if err != nil {
err = fmt.Errorf("could not get master pod label: %v", err)
}
} else {

@@ -1547,14 +1526,6 @@ func (c *Cluster) Switchover(curMaster *v1.Pod, candidate spec.NamespacedName) e
c.eventRecorder.Eventf(c.GetReference(), v1.EventTypeNormal, "Switchover", "Switchover from %q to %q FAILED: %v", curMaster.Name, candidate, err)
}

// signal the role label waiting goroutine to close the shop and go home
close(stopCh)
// wait until the goroutine terminates, since unregisterPodSubscriber
// must be called before the outer return; otherwise we risk subscribing to the same pod twice.
wg.Wait()
// close the label waiting channel no sooner than the waiting goroutine terminates.
close(podLabelErr)

return err
}
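The reworked `processPodEvent` above relies on a non-blocking channel send so that an event with no active receiver no longer blocks while the subscriber lock is held. A minimal, standalone sketch of that pattern (channel name and messages are illustrative):

```go
package main

import "fmt"

func main() {
	events := make(chan string) // unbuffered, like a per-pod subscriber channel

	// Non-blocking send: if no goroutine is currently receiving (e.g. the waiter
	// already returned), the default branch is taken and the sender moves on
	// instead of blocking forever.
	select {
	case events <- "pod-update":
		fmt.Println("event delivered")
	default:
		fmt.Println("no receiver, event dropped")
	}
}
```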
@ -148,11 +148,9 @@ func TestInitAdditionalOwnerRoles(t *testing.T) {
|
|||
|
||||
manifestUsers := map[string]acidv1.UserFlags{"foo_owner": {}, "bar_owner": {}, "app_user": {}}
|
||||
expectedUsers := map[string]spec.PgUser{
|
||||
"foo_owner": {Origin: spec.RoleOriginManifest, Name: "foo_owner", Namespace: cl.Namespace, Password: "f123", Flags: []string{"LOGIN"}, IsDbOwner: true},
|
||||
"bar_owner": {Origin: spec.RoleOriginManifest, Name: "bar_owner", Namespace: cl.Namespace, Password: "b123", Flags: []string{"LOGIN"}, IsDbOwner: true},
|
||||
"foo_owner": {Origin: spec.RoleOriginManifest, Name: "foo_owner", Namespace: cl.Namespace, Password: "f123", Flags: []string{"LOGIN"}, IsDbOwner: true, MemberOf: []string{"cron_admin", "part_man"}},
|
||||
"bar_owner": {Origin: spec.RoleOriginManifest, Name: "bar_owner", Namespace: cl.Namespace, Password: "b123", Flags: []string{"LOGIN"}, IsDbOwner: true, MemberOf: []string{"cron_admin", "part_man"}},
|
||||
"app_user": {Origin: spec.RoleOriginManifest, Name: "app_user", Namespace: cl.Namespace, Password: "a123", Flags: []string{"LOGIN"}, IsDbOwner: false},
|
||||
"cron_admin": {Origin: spec.RoleOriginSpilo, Name: "cron_admin", Namespace: cl.Namespace, MemberOf: []string{"foo_owner", "bar_owner"}},
|
||||
"part_man": {Origin: spec.RoleOriginSpilo, Name: "part_man", Namespace: cl.Namespace, MemberOf: []string{"foo_owner", "bar_owner"}},
|
||||
}
|
||||
|
||||
cl.Spec.Databases = map[string]string{"foo_db": "foo_owner", "bar_db": "bar_owner"}
|
||||
|
|
@ -163,24 +161,15 @@ func TestInitAdditionalOwnerRoles(t *testing.T) {
|
|||
t.Errorf("%s could not init manifest users", testName)
|
||||
}
|
||||
|
||||
// update passwords to compare with result
|
||||
for manifestUser := range manifestUsers {
|
||||
pgUser := cl.pgUsers[manifestUser]
|
||||
pgUser.Password = manifestUser[0:1] + "123"
|
||||
cl.pgUsers[manifestUser] = pgUser
|
||||
}
|
||||
|
||||
// now assign additional roles to owners
|
||||
cl.initAdditionalOwnerRoles()
|
||||
|
||||
for _, additionalOwnerRole := range cl.Config.OpConfig.AdditionalOwnerRoles {
|
||||
expectedPgUser := expectedUsers[additionalOwnerRole]
|
||||
existingPgUser, exists := cl.pgUsers[additionalOwnerRole]
|
||||
if !exists {
|
||||
t.Errorf("%s additional owner role %q not initilaized", testName, additionalOwnerRole)
|
||||
}
|
||||
// update passwords to compare with result
|
||||
for username, existingPgUser := range cl.pgUsers {
|
||||
expectedPgUser := expectedUsers[username]
|
||||
if !util.IsEqualIgnoreOrder(expectedPgUser.MemberOf, existingPgUser.MemberOf) {
|
||||
t.Errorf("%s unexpected membership of additional owner role %q: expected member of %#v, got member of %#v",
|
||||
testName, additionalOwnerRole, expectedPgUser.MemberOf, existingPgUser.MemberOf)
|
||||
t.Errorf("%s unexpected membership of user %q: expected member of %#v, got member of %#v",
|
||||
testName, username, expectedPgUser.MemberOf, existingPgUser.MemberOf)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -42,6 +42,7 @@ const (
|
|||
logicalBackupContainerName = "logical-backup"
|
||||
connectionPoolerContainer = "connection-pooler"
|
||||
pgPort = 5432
|
||||
operatorPort = 8080
|
||||
)
|
||||
|
||||
type pgUser struct {
|
||||
|
|
@ -567,7 +568,7 @@ func generateContainer(
|
|||
Protocol: v1.ProtocolTCP,
|
||||
},
|
||||
{
|
||||
ContainerPort: patroni.ApiPort,
|
||||
ContainerPort: operatorPort,
|
||||
Protocol: v1.ProtocolTCP,
|
||||
},
|
||||
},
|
||||
|
|
@ -939,7 +940,6 @@ func (c *Cluster) generateSpiloPodEnvVars(
|
|||
func appendEnvVars(envs []v1.EnvVar, appEnv ...v1.EnvVar) []v1.EnvVar {
|
||||
collectedEnvs := envs
|
||||
for _, env := range appEnv {
|
||||
env.Name = strings.ToUpper(env.Name)
|
||||
if !isEnvVarPresent(collectedEnvs, env.Name) {
|
||||
collectedEnvs = append(collectedEnvs, env)
|
||||
}
|
||||
|
|
@ -949,7 +949,7 @@ func appendEnvVars(envs []v1.EnvVar, appEnv ...v1.EnvVar) []v1.EnvVar {
|
|||
|
||||
func isEnvVarPresent(envs []v1.EnvVar, key string) bool {
|
||||
for _, env := range envs {
|
||||
if env.Name == key {
|
||||
if strings.EqualFold(env.Name, key) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
|
@ -1649,7 +1649,7 @@ func (c *Cluster) generateUserSecrets() map[string]*v1.Secret {
|
|||
func (c *Cluster) generateSingleUserSecret(namespace string, pgUser spec.PgUser) *v1.Secret {
|
||||
//Skip users with no password i.e. human users (they'll be authenticated using pam)
|
||||
if pgUser.Password == "" {
|
||||
if pgUser.Origin != spec.RoleOriginTeamsAPI && pgUser.Origin != spec.RoleOriginSpilo {
|
||||
if pgUser.Origin != spec.RoleOriginTeamsAPI {
|
||||
c.logger.Warningf("could not generate secret for a non-teamsAPI role %q: role has no password",
|
||||
pgUser.Name)
|
||||
}
|
||||
|
|
|
|||
|
|
@ -504,7 +504,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
|
|||
expectedS3BucketConfigMap := []ExpectedValue{
|
||||
{
|
||||
envIndex: 17,
|
||||
envVarConstant: "WAL_S3_BUCKET",
|
||||
envVarConstant: "wal_s3_bucket",
|
||||
envVarValue: "global-s3-bucket-configmap",
|
||||
},
|
||||
}
|
||||
|
|
@ -518,7 +518,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
|
|||
expectedCustomVariableSecret := []ExpectedValue{
|
||||
{
|
||||
envIndex: 16,
|
||||
envVarConstant: "CUSTOM_VARIABLE",
|
||||
envVarConstant: "custom_variable",
|
||||
envVarValueRef: &v1.EnvVarSource{
|
||||
SecretKeyRef: &v1.SecretKeySelector{
|
||||
LocalObjectReference: v1.LocalObjectReference{
|
||||
|
|
@ -532,7 +532,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
|
|||
expectedCustomVariableConfigMap := []ExpectedValue{
|
||||
{
|
||||
envIndex: 16,
|
||||
envVarConstant: "CUSTOM_VARIABLE",
|
||||
envVarConstant: "custom_variable",
|
||||
envVarValue: "configmap-test",
|
||||
},
|
||||
}
|
||||
|
|
@ -573,14 +573,14 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
|
|||
},
|
||||
{
|
||||
envIndex: 20,
|
||||
envVarConstant: "CLONE_AWS_ENDPOINT",
|
||||
envVarConstant: "clone_aws_endpoint",
|
||||
envVarValue: "s3.eu-west-1.amazonaws.com",
|
||||
},
|
||||
}
|
||||
expectedCloneEnvSecret := []ExpectedValue{
|
||||
{
|
||||
envIndex: 20,
|
||||
envVarConstant: "CLONE_AWS_ACCESS_KEY_ID",
|
||||
envVarConstant: "clone_aws_access_key_id",
|
||||
envVarValueRef: &v1.EnvVarSource{
|
||||
SecretKeyRef: &v1.SecretKeySelector{
|
||||
LocalObjectReference: v1.LocalObjectReference{
|
||||
|
|
@ -599,7 +599,7 @@ func TestGenerateSpiloPodEnvVars(t *testing.T) {
|
|||
},
|
||||
{
|
||||
envIndex: 20,
|
||||
envVarConstant: "STANDBY_GOOGLE_APPLICATION_CREDENTIALS",
|
||||
envVarConstant: "standby_google_application_credentials",
|
||||
envVarValueRef: &v1.EnvVarSource{
|
||||
SecretKeyRef: &v1.SecretKeySelector{
|
||||
LocalObjectReference: v1.LocalObjectReference{
|
||||
|
|
|
|||
|
|
@ -67,7 +67,7 @@ func (c *Cluster) markRollingUpdateFlagForPod(pod *v1.Pod, msg string) error {
|
|||
return fmt.Errorf("could not form patch for pod's rolling update flag: %v", err)
|
||||
}
|
||||
|
||||
err = retryutil.Retry(c.OpConfig.PatroniAPICheckInterval, c.OpConfig.PatroniAPICheckTimeout,
|
||||
err = retryutil.Retry(1*time.Second, 5*time.Second,
|
||||
func() (bool, error) {
|
||||
_, err2 := c.KubeClient.Pods(pod.Namespace).Patch(
|
||||
context.TODO(),
|
||||
|
|
@ -151,12 +151,13 @@ func (c *Cluster) unregisterPodSubscriber(podName spec.NamespacedName) {
|
|||
c.podSubscribersMu.Lock()
|
||||
defer c.podSubscribersMu.Unlock()
|
||||
|
||||
if _, ok := c.podSubscribers[podName]; !ok {
|
||||
ch, ok := c.podSubscribers[podName]
|
||||
if !ok {
|
||||
panic("subscriber for pod '" + podName.String() + "' is not found")
|
||||
}
|
||||
|
||||
close(c.podSubscribers[podName])
|
||||
delete(c.podSubscribers, podName)
|
||||
close(ch)
|
||||
}
|
||||
|
||||
func (c *Cluster) registerPodSubscriber(podName spec.NamespacedName) chan PodEvent {
|
||||
|
|
@ -399,11 +400,12 @@ func (c *Cluster) getPatroniMemberData(pod *v1.Pod) (patroni.MemberData, error)
|
|||
}
|
||||
|
||||
func (c *Cluster) recreatePod(podName spec.NamespacedName) (*v1.Pod, error) {
|
||||
stopCh := make(chan struct{})
|
||||
ch := c.registerPodSubscriber(podName)
|
||||
defer c.unregisterPodSubscriber(podName)
|
||||
stopChan := make(chan struct{})
|
||||
defer close(stopCh)
|
||||
|
||||
err := retryutil.Retry(c.OpConfig.PatroniAPICheckInterval, c.OpConfig.PatroniAPICheckTimeout,
|
||||
err := retryutil.Retry(1*time.Second, 5*time.Second,
|
||||
func() (bool, error) {
|
||||
err2 := c.KubeClient.Pods(podName.Namespace).Delete(
|
||||
context.TODO(),
|
||||
|
|
@ -421,7 +423,7 @@ func (c *Cluster) recreatePod(podName spec.NamespacedName) (*v1.Pod, error) {
|
|||
if err := c.waitForPodDeletion(ch); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pod, err := c.waitForPodLabel(ch, stopChan, nil)
|
||||
pod, err := c.waitForPodLabel(ch, stopCh, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
|
@ -446,7 +448,7 @@ func (c *Cluster) recreatePods(pods []v1.Pod, switchoverCandidates []spec.Namesp
|
|||
continue
|
||||
}
|
||||
|
||||
podName := util.NameFromMeta(pod.ObjectMeta)
|
||||
podName := util.NameFromMeta(pods[i].ObjectMeta)
|
||||
newPod, err := c.recreatePod(podName)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not recreate replica pod %q: %v", util.NameFromMeta(pod.ObjectMeta), err)
|
||||
|
|
@ -520,13 +522,13 @@ func (c *Cluster) getSwitchoverCandidate(master *v1.Pod) (spec.NamespacedName, e
|
|||
// if sync_standby replicas were found assume synchronous_mode is enabled and ignore other candidates list
|
||||
if len(syncCandidates) > 0 {
|
||||
sort.Slice(syncCandidates, func(i, j int) bool {
|
||||
return util.IntFromIntStr(syncCandidates[i].Lag) < util.IntFromIntStr(syncCandidates[j].Lag)
|
||||
return syncCandidates[i].Lag < syncCandidates[j].Lag
|
||||
})
|
||||
return spec.NamespacedName{Namespace: master.Namespace, Name: syncCandidates[0].Name}, nil
|
||||
}
|
||||
if len(candidates) > 0 {
|
||||
sort.Slice(candidates, func(i, j int) bool {
|
||||
return util.IntFromIntStr(candidates[i].Lag) < util.IntFromIntStr(candidates[j].Lag)
|
||||
return candidates[i].Lag < candidates[j].Lag
|
||||
})
|
||||
return spec.NamespacedName{Namespace: master.Namespace, Name: candidates[0].Name}, nil
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -316,7 +316,7 @@ func (c *Cluster) annotationsSet(annotations map[string]string) map[string]strin
return nil
}

func (c *Cluster) waitForPodLabel(podEvents chan PodEvent, stopChan chan struct{}, role *PostgresRole) (*v1.Pod, error) {
func (c *Cluster) waitForPodLabel(podEvents chan PodEvent, stopCh chan struct{}, role *PostgresRole) (*v1.Pod, error) {
timeout := time.After(c.OpConfig.PodLabelWaitTimeout)
for {
select {

@@ -332,7 +332,7 @@ func (c *Cluster) waitForPodLabel(podEvents chan PodEvent, stopChan chan struct{
}
case <-timeout:
return nil, fmt.Errorf("pod label wait timeout")
case <-stopChan:
case <-stopCh:
return nil, fmt.Errorf("pod label wait cancelled")
}
}
@@ -451,7 +451,7 @@ func (c *Controller) Run(stopCh <-chan struct{}, wg *sync.WaitGroup) {
panic("could not acquire initial list of clusters")
}

wg.Add(5)
wg.Add(5 + util.Bool2Int(c.opConfig.EnablePostgresTeamCRD))
go c.runPodInformer(stopCh, wg)
go c.runPostgresqlInformer(stopCh, wg)
go c.clusterResync(stopCh, wg)
@@ -165,7 +165,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur

// logical backup config
result.LogicalBackupSchedule = util.Coalesce(fromCRD.LogicalBackup.Schedule, "30 00 * * *")
result.LogicalBackupDockerImage = util.Coalesce(fromCRD.LogicalBackup.DockerImage, "registry.opensource.zalan.do/acid/logical-backup:v1.8.0")
result.LogicalBackupDockerImage = util.Coalesce(fromCRD.LogicalBackup.DockerImage, "registry.opensource.zalan.do/acid/logical-backup:v1.8.1")
result.LogicalBackupProvider = util.Coalesce(fromCRD.LogicalBackup.BackupProvider, "s3")
result.LogicalBackupS3Bucket = fromCRD.LogicalBackup.S3Bucket
result.LogicalBackupS3Region = fromCRD.LogicalBackup.S3Region
@@ -225,7 +225,7 @@ func (c *Controller) processEvent(event ClusterEvent) {
switch event.EventType {
case EventAdd:
if clusterFound {
lg.Infof("recieved add event for already existing Postgres cluster")
lg.Infof("received add event for already existing Postgres cluster")
return
}
@@ -30,7 +30,6 @@ const (
RoleOriginManifest
RoleOriginInfrastructure
RoleOriginTeamsAPI
RoleOriginSpilo
RoleOriginSystem
RoleOriginBootstrap
RoleConnectionPooler
@@ -122,7 +122,7 @@ type Scalyr struct {
// LogicalBackup defines configuration for logical backup
type LogicalBackup struct {
LogicalBackupSchedule string `name:"logical_backup_schedule" default:"30 00 * * *"`
LogicalBackupDockerImage string `name:"logical_backup_docker_image" default:"registry.opensource.zalan.do/acid/logical-backup:v1.8.0"`
LogicalBackupDockerImage string `name:"logical_backup_docker_image" default:"registry.opensource.zalan.do/acid/logical-backup:v1.8.1"`
LogicalBackupProvider string `name:"logical_backup_provider" default:"s3"`
LogicalBackupS3Bucket string `name:"logical_backup_s3_bucket" default:""`
LogicalBackupS3Region string `name:"logical_backup_s3_region" default:""`
@@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"io/ioutil"
"math"
"net"
"net/http"
"strconv"

@@ -16,7 +17,6 @@ import (
"github.com/sirupsen/logrus"
acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)

const (

@@ -189,7 +189,23 @@ type ClusterMember struct {
Role string `json:"role"`
State string `json:"state"`
Timeline int `json:"timeline"`
Lag intstr.IntOrString `json:"lag,omitempty"`
Lag ReplicationLag `json:"lag,omitempty"`
}

type ReplicationLag uint64

// UnmarshalJSON converts member lag (can be int or string) into uint64
func (rl *ReplicationLag) UnmarshalJSON(data []byte) error {
var lagUInt64 uint64
if data[0] == '"' {
*rl = math.MaxUint64
return nil
}
if err := json.Unmarshal(data, &lagUInt64); err != nil {
return err
}
*rl = ReplicationLag(lagUInt64)
return nil
}

// MemberDataPatroni child element
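The new `ReplicationLag` type above replaces `intstr.IntOrString` so that Patroni's lag field, which may arrive as a number or as a string such as "unknown", always compares as an unsigned integer. A standalone sketch of how the unmarshalling behaves; the type is re-declared here only to keep the example self-contained:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math"
)

// ReplicationLag mirrors the type added above; copied here so the example compiles on its own.
type ReplicationLag uint64

func (rl *ReplicationLag) UnmarshalJSON(data []byte) error {
	if data[0] == '"' { // Patroni may report lag as a string such as "unknown"
		*rl = math.MaxUint64
		return nil
	}
	var v uint64
	if err := json.Unmarshal(data, &v); err != nil {
		return err
	}
	*rl = ReplicationLag(v)
	return nil
}

func main() {
	var members []struct {
		Name string         `json:"name"`
		Lag  ReplicationLag `json:"lag"`
	}
	payload := `[{"name":"a","lag":0},{"name":"b","lag":"unknown"},{"name":"c","lag":3000000000}]`
	if err := json.Unmarshal([]byte(payload), &members); err != nil {
		panic(err)
	}
	for _, m := range members {
		fmt.Println(m.Name, uint64(m.Lag)) // "unknown" becomes MaxUint64, so it sorts last
	}
}
```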
@ -5,6 +5,7 @@ import (
|
|||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"math"
|
||||
"net/http"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
|
@ -15,7 +16,6 @@ import (
|
|||
|
||||
acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
|
||||
v1 "k8s.io/api/core/v1"
|
||||
"k8s.io/apimachinery/pkg/util/intstr"
|
||||
)
|
||||
|
||||
var logger = logrus.New().WithField("test", "patroni")
|
||||
|
|
@ -101,16 +101,27 @@ func TestGetClusterMembers(t *testing.T) {
|
|||
Role: "sync_standby",
|
||||
State: "running",
|
||||
Timeline: 1,
|
||||
Lag: intstr.IntOrString{IntVal: 0},
|
||||
Lag: 0,
|
||||
}, {
|
||||
Name: "acid-test-cluster-2",
|
||||
Role: "replica",
|
||||
State: "running",
|
||||
Timeline: 1,
|
||||
Lag: intstr.IntOrString{Type: 1, StrVal: "unknown"},
|
||||
Lag: math.MaxUint64,
|
||||
}, {
|
||||
Name: "acid-test-cluster-3",
|
||||
Role: "replica",
|
||||
State: "running",
|
||||
Timeline: 1,
|
||||
Lag: 3000000000,
|
||||
}}
|
||||
|
||||
json := `{"members": [{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 1}, {"name": "acid-test-cluster-1", "role": "sync_standby", "state": "running", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 1, "lag": 0}, {"name": "acid-test-cluster-2", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": "unknown"}]}`
|
||||
json := `{"members": [
|
||||
{"name": "acid-test-cluster-0", "role": "leader", "state": "running", "api_url": "http://192.168.100.1:8008/patroni", "host": "192.168.100.1", "port": 5432, "timeline": 1},
|
||||
{"name": "acid-test-cluster-1", "role": "sync_standby", "state": "running", "api_url": "http://192.168.100.2:8008/patroni", "host": "192.168.100.2", "port": 5432, "timeline": 1, "lag": 0},
|
||||
{"name": "acid-test-cluster-2", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": "unknown"},
|
||||
{"name": "acid-test-cluster-3", "role": "replica", "state": "running", "api_url": "http://192.168.100.3:8008/patroni", "host": "192.168.100.3", "port": 5432, "timeline": 1, "lag": 3000000000}
|
||||
]}`
|
||||
r := ioutil.NopCloser(bytes.NewReader([]byte(json)))
|
||||
|
||||
response := http.Response{
|
||||
|
|
|
|||
|
|
@@ -20,6 +20,7 @@ const (
alterRoleSetSQL = `ALTER ROLE "%s" SET %s TO %s`
dropUserSQL = `SET LOCAL synchronous_commit = 'local'; DROP ROLE "%s";`
grantToUserSQL = `GRANT %s TO "%s"`
revokeFromUserSQL = `REVOKE "%s" FROM "%s"`
doBlockStmt = `SET LOCAL synchronous_commit = 'local'; DO $$ BEGIN %s; END;$$;`
passwordTemplate = "ENCRYPTED PASSWORD '%s'"
inRoleTemplate = `IN ROLE %s`

@@ -33,6 +34,7 @@ const (
type DefaultUserSyncStrategy struct {
PasswordEncryption string
RoleDeletionSuffix string
AdditionalOwnerRoles []string
}

// ProduceSyncRequests figures out the types of changes that need to happen with the given users.

@@ -53,30 +55,27 @@ func (strategy DefaultUserSyncStrategy) ProduceSyncRequests(dbUsers spec.PgUserM
}
} else {
r := spec.PgSyncUserRequest{}
r.User = dbUser
newMD5Password := util.NewEncryptor(strategy.PasswordEncryption).PGUserPassword(newUser)

// do not compare for roles coming from docker image
if newUser.Origin != spec.RoleOriginSpilo {
if dbUser.Password != newMD5Password {
r.User.Password = newMD5Password
r.Kind = spec.PGsyncUserAlter
}
if addNewRoles, equal := util.SubstractStringSlices(newUser.MemberOf, dbUser.MemberOf); !equal {
r.User.MemberOf = addNewRoles
r.User.IsDbOwner = newUser.IsDbOwner
r.Kind = spec.PGsyncUserAlter
}
if addNewFlags, equal := util.SubstractStringSlices(newUser.Flags, dbUser.Flags); !equal {
r.User.Flags = addNewFlags
r.Kind = spec.PGsyncUserAlter
}
}
if addNewRoles, equal := util.SubstractStringSlices(newUser.MemberOf, dbUser.MemberOf); !equal {
r.User.MemberOf = addNewRoles
r.Kind = spec.PGsyncUserAlter
}
if r.Kind == spec.PGsyncUserAlter {
r.User.Name = newUser.Name
reqs = append(reqs, r)
}
if newUser.Origin != spec.RoleOriginSpilo &&
len(newUser.Parameters) > 0 &&
if len(newUser.Parameters) > 0 &&
!reflect.DeepEqual(dbUser.Parameters, newUser.Parameters) {
reqs = append(reqs, spec.PgSyncUserRequest{Kind: spec.PGSyncAlterSet, User: newUser})
}

@@ -120,6 +119,15 @@ func (strategy DefaultUserSyncStrategy) ExecuteSyncRequests(requests []spec.PgSy
if err := strategy.alterPgUser(request.User, db); err != nil {
reqretries = append(reqretries, request)
errors = append(errors, fmt.Sprintf("could not alter user %q: %v", request.User.Name, err))
// XXX: we do not allow additional owner roles to be members of database owners
// if ALTER fails it could be because of the wrong memberhip (check #1862 for details)
// so in any case try to revoke the database owner from the additional owner roles
// the initial ALTER statement will be retried once and should work then
if request.User.IsDbOwner && len(strategy.AdditionalOwnerRoles) > 0 {
if err := resolveOwnerMembership(request.User, strategy.AdditionalOwnerRoles, db); err != nil {
errors = append(errors, fmt.Sprintf("could not resolve owner membership for %q: %v", request.User.Name, err))
}
}
}
case spec.PGSyncAlterSet:
if err := strategy.alterPgUserSet(request.User, db); err != nil {

@@ -152,6 +160,21 @@ func (strategy DefaultUserSyncStrategy) ExecuteSyncRequests(requests []spec.PgSy
return nil
}

func resolveOwnerMembership(dbOwner spec.PgUser, additionalOwners []string, db *sql.DB) error {
errors := make([]string, 0)
for _, additionalOwner := range additionalOwners {
if err := revokeRole(dbOwner.Name, additionalOwner, db); err != nil {
errors = append(errors, fmt.Sprintf("could not revoke %q from %q: %v", dbOwner.Name, additionalOwner, err))
}
}

if len(errors) > 0 {
return fmt.Errorf("could not resolve membership between %q and additional owner roles: %v", dbOwner.Name, strings.Join(errors, `', '`))
}

return nil
}

func (strategy DefaultUserSyncStrategy) alterPgUserSet(user spec.PgUser, db *sql.DB) error {
queries := produceAlterRoleSetStmts(user)
query := fmt.Sprintf(doBlockStmt, strings.Join(queries, ";"))

@@ -272,6 +295,16 @@ func quoteMemberList(user spec.PgUser) string {
return strings.Join(memberof, ",")
}

func revokeRole(groupRole, role string, db *sql.DB) error {
revokeStmt := fmt.Sprintf(revokeFromUserSQL, groupRole, role)

if _, err := db.Exec(fmt.Sprintf(doBlockStmt, revokeStmt)); err != nil {
return err
}

return nil
}

// quoteVal quotes values to be used at ALTER ROLE SET param = value if necessary
func quoteParameterValue(name, val string) string {
start := val[0]
@@ -8,7 +8,6 @@ import (
"encoding/base64"
"encoding/hex"
"fmt"
"math"
"math/big"
"math/rand"
"reflect"

@@ -324,18 +323,18 @@ func testNil(values ...*int32) bool {
return false
}

// Convert int to IntOrString type
// ToIntStr converts int to IntOrString type
func ToIntStr(val int) *intstr.IntOrString {
b := intstr.FromInt(val)
return &b
}

// Get int from IntOrString and return max int if string
func IntFromIntStr(intOrStr intstr.IntOrString) int {
if intOrStr.Type == 1 {
return math.MaxInt
// Bool2Int converts bool to int
func Bool2Int(flag bool) int {
if flag {
return 1
}
return intOrStr.IntValue()
return 0
}

// MaxInt32 : Return maximum of two integers provided via pointers. If one value
@@ -1,6 +1,6 @@
{
"name": "postgres-operator-ui",
"version": "1.8.0",
"version": "1.8.1",
"description": "PostgreSQL Operator UI",
"main": "src/app.js",
"config": {
@ -51,7 +51,23 @@ postgresqls
|
|||
th(style='width: 140px') CPU
|
||||
th(style='width: 130px') Memory
|
||||
th(style='width: 100px') Size
|
||||
th(style='width: 120px') Cost/Month
|
||||
th(style='width: 100px') IOPS
|
||||
th(style='width: 100px') Throughput
|
||||
th(style='width: 120px')
|
||||
.tooltip(style='width: 120px')
|
||||
| Cost/Month
|
||||
.tooltiptext
|
||||
strong Cost = MAX(CPU, Memory) + rest
|
||||
br
|
||||
| 1 CPU core : 42.09$
|
||||
br
|
||||
| 1GB memory: 10.5225$
|
||||
br
|
||||
| 1GB volume: 0.0952$
|
||||
br
|
||||
| IOPS (-3000 baseline): 0.006$
|
||||
br
|
||||
| Throughput (-125 baseline): 0.0476$
|
||||
th(stlye='width: 120px')
|
||||
|
||||
tbody
|
||||
|
|
@ -69,6 +85,8 @@ postgresqls
|
|||
td { cpu } / { cpu_limit }
|
||||
td { memory } / { memory_limit }
|
||||
td { volume_size }
|
||||
td { iops }
|
||||
td { throughput }
|
||||
td { calcCosts(nodes, cpu, memory, volume_size, iops, throughput) }$
|
||||
|
||||
td
|
||||
|
|
@ -132,7 +150,23 @@ postgresqls
|
|||
th(style='width: 140px') CPU
|
||||
th(style='width: 130px') Memory
|
||||
th(style='width: 100px') Size
|
||||
th(style='width: 120px') Cost/Month
|
||||
th(style='width: 100px') IOPS
|
||||
th(style='width: 100px') Throughput
|
||||
th(style='width: 120px')
|
||||
.tooltip(style='width: 120px')
|
||||
| Cost/Month
|
||||
.tooltiptext
|
||||
strong Cost = MAX(CPU, Memory) + rest
|
||||
br
|
||||
| 1 CPU core : 42.09$
|
||||
br
|
||||
| 1GB memory: 10.5225$
|
||||
br
|
||||
| 1GB volume: 0.0952$
|
||||
br
|
||||
| IOPS (-3000 baseline): 0.006$
|
||||
br
|
||||
| Throughput (-125 baseline): 0.0476$
|
||||
th(stlye='width: 120px')
|
||||
|
||||
tbody
|
||||
|
|
@ -152,6 +186,8 @@ postgresqls
|
|||
td { cpu } / { cpu_limit }
|
||||
td { memory } / { memory_limit }
|
||||
td { volume_size }
|
||||
td { iops }
|
||||
td { throughput }
|
||||
td { calcCosts(nodes, cpu, memory, volume_size, iops, throughput) }$
|
||||
|
||||
td
|
||||
|
|
@ -229,28 +265,44 @@ postgresqls
|
|||
|
||||
const calcCosts = this.calcCosts = (nodes, cpu, memory, disk, iops, throughput) => {
|
||||
podcount = Math.max(nodes, opts.config.min_pods)
|
||||
corecost = toCores(cpu) * opts.config.cost_core
|
||||
memorycost = toMemory(memory) * opts.config.cost_memory
|
||||
corecost = toCores(cpu) * opts.config.cost_core * 30.5 * 24
|
||||
memorycost = toMemory(memory) * opts.config.cost_memory * 30.5 * 24
|
||||
diskcost = toDisk(disk) * opts.config.cost_ebs
|
||||
iopscost = 0
|
||||
if (iops !== undefined && iops > 3000) {
|
||||
iopscost = (iops - 3000) * opts.config.cost_iops
|
||||
if (iops !== undefined && iops > opts.config.free_iops) {
|
||||
if (iops > opts.config.limit_iops) {
|
||||
iops = opts.config.limit_iops
|
||||
}
|
||||
iopscost = (iops - opts.config.free_iops) * opts.config.cost_iops
|
||||
}
|
||||
throughputcost = 0
|
||||
if (throughput !== undefined && throughput > 125) {
|
||||
throughputcost = (throughput - 125) * opts.config.cost_throughput
|
||||
if (throughput !== undefined && throughput > opts.config.free_throughput) {
|
||||
if (throughput > opts.config.limit_throughput) {
|
||||
throughput = opts.config.limit_throughput
|
||||
}
|
||||
throughputcost = (throughput - opts.config.free_throughput) * opts.config.cost_throughput
|
||||
}
|
||||
|
||||
costs = podcount * (corecost + memorycost + diskcost + iopscost + throughputcost)
|
||||
costs = podcount * (Math.max(corecost, memorycost) + diskcost + iopscost + throughputcost)
|
||||
return costs.toFixed(2)
|
||||
}
|
||||
|
||||
const toDisk = this.toDisk = value => {
|
||||
if(value.endsWith("Gi")) {
|
||||
if(value.endsWith("Mi")) {
|
||||
value = value.substring(0, value.length-2)
|
||||
value = Number(value) / 1000.
|
||||
return value
|
||||
}
|
||||
else if(value.endsWith("Gi")) {
|
||||
value = value.substring(0, value.length-2)
|
||||
value = Number(value)
|
||||
return value
|
||||
}
|
||||
else if(value.endsWith("Ti")) {
|
||||
value = value.substring(0, value.length-2)
|
||||
value = Number(value) * 1000
|
||||
return value
|
||||
}
|
||||
|
||||
return value
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -18,7 +18,7 @@ spec:
serviceAccountName: postgres-operator-ui
containers:
- name: "service"
image: registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.0
image: registry.opensource.zalan.do/acid/postgres-operator-ui:v1.8.1
ports:
- containerPort: 8081
protocol: "TCP"

@@ -67,6 +67,10 @@ spec:
"cost_throughput": 0.0476,
"cost_core": 0.0575,
"cost_memory": 0.014375,
"free_iops": 3000,
"free_throughput": 125,
"limit_iops": 16000,
"limit_throughput": 1000,
"postgresql_versions": [
"14",
"13",
@ -82,12 +82,16 @@ OPERATOR_CLUSTER_NAME_LABEL = getenv('OPERATOR_CLUSTER_NAME_LABEL', 'cluster-nam
|
|||
OPERATOR_UI_CONFIG = getenv('OPERATOR_UI_CONFIG', '{}')
|
||||
OPERATOR_UI_MAINTENANCE_CHECK = getenv('OPERATOR_UI_MAINTENANCE_CHECK', '{}')
|
||||
READ_ONLY_MODE = getenv('READ_ONLY_MODE', False) in [True, 'true']
|
||||
RESOURCES_VISIBLE = getenv('RESOURCES_VISIBLE', True)
|
||||
SPILO_S3_BACKUP_PREFIX = getenv('SPILO_S3_BACKUP_PREFIX', 'spilo/')
|
||||
SUPERUSER_TEAM = getenv('SUPERUSER_TEAM', 'acid')
|
||||
TARGET_NAMESPACE = getenv('TARGET_NAMESPACE')
|
||||
GOOGLE_ANALYTICS = getenv('GOOGLE_ANALYTICS', False)
|
||||
MIN_PODS= getenv('MIN_PODS', 2)
|
||||
RESOURCES_VISIBLE = getenv('RESOURCES_VISIBLE', True)
|
||||
CUSTOM_MESSAGE_RED = getenv('CUSTOM_MESSAGE_RED', '')
|
||||
|
||||
APPLICATION_DEPLOYMENT_DOCS = getenv('APPLICATION_DEPLOYMENT_DOCS', '')
|
||||
CONNECTION_DOCS = getenv('CONNECTION_DOCS', '')
|
||||
|
||||
# storage pricing, i.e. https://aws.amazon.com/ebs/pricing/ (e.g. Europe - Franfurt)
|
||||
COST_EBS = float(getenv('COST_EBS', 0.0952)) # GB per month
|
||||
|
|
@ -95,8 +99,19 @@ COST_IOPS = float(getenv('COST_IOPS', 0.006)) # IOPS per month above 3000 basel
|
|||
COST_THROUGHPUT = float(getenv('COST_THROUGHPUT', 0.0476)) # MB/s per month above 125 MB/s baseline
|
||||
|
||||
# compute costs, i.e. https://www.ec2instances.info/?region=eu-central-1&selected=m5.2xlarge
|
||||
COST_CORE = 30.5 * 24 * float(getenv('COST_CORE', 0.0575)) # Core per hour m5.2xlarge / 8.
|
||||
COST_MEMORY = 30.5 * 24 * float(getenv('COST_MEMORY', 0.014375)) # Memory GB m5.2xlarge / 32.
|
||||
COST_CORE = float(getenv('COST_CORE', 0.0575)) # Core per hour m5.2xlarge / 8.
|
||||
COST_MEMORY = float(getenv('COST_MEMORY', 0.014375)) # Memory GB m5.2xlarge / 32.
|
||||
|
||||
# maximum and limitation of IOPS and throughput
|
||||
FREE_IOPS = float(getenv('FREE_IOPS', 3000))
|
||||
LIMIT_IOPS = float(getenv('LIMIT_IOPS', 16000))
|
||||
FREE_THROUGHPUT = float(getenv('FREE_THROUGHPUT', 125))
|
||||
LIMIT_THROUGHPUT = float(getenv('LIMIT_THROUGHPUT', 1000))
|
||||
# get the default value of core and memory
|
||||
DEFAULT_MEMORY = getenv('DEFAULT_MEMORY', '300Mi')
|
||||
DEFAULT_MEMORY_LIMIT = getenv('DEFAULT_MEMORY_LIMIT', '300Mi')
|
||||
DEFAULT_CPU = getenv('DEFAULT_CPU', '10m')
|
||||
DEFAULT_CPU_LIMIT = getenv('DEFAULT_CPU_LIMIT', '300m')
|
||||
|
||||
WALE_S3_ENDPOINT = getenv(
|
||||
'WALE_S3_ENDPOINT',
|
||||
|
|
@ -304,29 +319,34 @@ DEFAULT_UI_CONFIG = {
|
|||
'nat_gateways_visible': True,
|
||||
'users_visible': True,
|
||||
'databases_visible': True,
|
||||
'resources_visible': True,
|
||||
'postgresql_versions': ['11','12','13'],
|
||||
'resources_visible': RESOURCES_VISIBLE,
|
||||
'postgresql_versions': ['11','12','13','14'],
|
||||
'dns_format_string': '{0}.{1}.{2}',
|
||||
'pgui_link': '',
|
||||
'static_network_whitelist': {},
|
||||
'read_only_mode': READ_ONLY_MODE,
|
||||
'superuser_team': SUPERUSER_TEAM,
|
||||
'target_namespace': TARGET_NAMESPACE,
|
||||
'connection_docs': CONNECTION_DOCS,
|
||||
'application_deployment_docs': APPLICATION_DEPLOYMENT_DOCS,
|
||||
'cost_ebs': COST_EBS,
|
||||
'cost_iops': COST_IOPS,
|
||||
'cost_throughput': COST_THROUGHPUT,
|
||||
'cost_core': COST_CORE,
|
||||
'cost_memory': COST_MEMORY,
|
||||
'min_pods': MIN_PODS
|
||||
'min_pods': MIN_PODS,
|
||||
'free_iops': FREE_IOPS,
|
||||
'free_throughput': FREE_THROUGHPUT,
|
||||
'limit_iops': LIMIT_IOPS,
|
||||
'limit_throughput': LIMIT_THROUGHPUT
|
||||
}
|
||||
|
||||
|
||||
@app.route('/config')
|
||||
@authorize
|
||||
def get_config():
|
||||
config = loads(OPERATOR_UI_CONFIG) or DEFAULT_UI_CONFIG
|
||||
config['read_only_mode'] = READ_ONLY_MODE
|
||||
config['resources_visible'] = RESOURCES_VISIBLE
|
||||
config['superuser_team'] = SUPERUSER_TEAM
|
||||
config['target_namespace'] = TARGET_NAMESPACE
|
||||
config['min_pods'] = MIN_PODS
|
||||
config = DEFAULT_UI_CONFIG.copy()
|
||||
config.update(loads(OPERATOR_UI_CONFIG))
|
||||
|
||||
config['namespaces'] = (
|
||||
[TARGET_NAMESPACE]
|
||||
|
|
@ -961,11 +981,13 @@ def get_operator_get_logs(worker: int):
|
|||
@app.route('/operator/clusters/<namespace>/<cluster>/logs')
|
||||
@authorize
|
||||
def get_operator_get_logs_per_cluster(namespace: str, cluster: str):
|
||||
team, cluster_name = cluster.split('-', 1)
|
||||
# team id might contain hyphens, try to find correct team name
|
||||
user_teams = get_teams_for_user(session.get('user_name', ''))
|
||||
for user_team in user_teams:
|
||||
if cluster.find(user_team) == 0:
|
||||
if cluster.find(user_team + '-') == 0:
|
||||
team = cluster[:len(user_team)]
|
||||
cluster_name = cluster[len(user_team)+1:]
|
||||
cluster_name = cluster[len(user_team + '-'):]
|
||||
break
|
||||
return proxy_operator(f'/clusters/{team}/{namespace}/{cluster_name}/logs/')
|
||||
|
||||
|
|
|
|||
|
|
@ -64,3 +64,56 @@ label {
|
|||
td {
|
||||
vertical-align: middle !important;
|
||||
}
|
||||
|
||||
.tooltip {
|
||||
position: relative;
|
||||
display: inline-block;
|
||||
opacity: 1;
|
||||
font-size: 14px;
|
||||
font-weight: bold;
|
||||
}
|
||||
.tooltip:after {
|
||||
content: '?';
|
||||
display: inline-block;
|
||||
font-family: sans-serif;
|
||||
font-weight: bold;
|
||||
text-align: center;
|
||||
width: 16px;
|
||||
height: 16px;
|
||||
font-size: 12px;
|
||||
line-height: 16px;
|
||||
border-radius: 12px;
|
||||
padding: 0px;
|
||||
color: white;
|
||||
background: black;
|
||||
border: 1px solid black;
|
||||
}
|
||||
.tooltip .tooltiptext {
|
||||
visibility: hidden;
|
||||
width: 250px;
|
||||
background-color: white;
|
||||
color: #000;
|
||||
text-align: justify;
|
||||
border-radius: 6px;
|
||||
padding: 10px 10px;
|
||||
position: absolute;
|
||||
z-index: 1;
|
||||
bottom: 150%;
|
||||
left: 50%;
|
||||
margin-left: -120px;
|
||||
border: 1px solid black;
|
||||
font-weight: normal;
|
||||
}
|
||||
.tooltip .tooltiptext::after {
|
||||
content: "";
|
||||
position: absolute;
|
||||
top: 100%;
|
||||
left: 50%;
|
||||
margin-left: -5px;
|
||||
border-width: 5px;
|
||||
border-style: solid;
|
||||
border-color: black transparent transparent transparent;
|
||||
}
|
||||
.tooltip:hover .tooltiptext {
|
||||
visibility: visible;
|
||||
}
|
||||
|
|
|
|||