Fix val docs (#901)

* missing quotes in pooler configmap in values.yaml

* missing quotes in pooler configmap in values-crd.yaml

* docs clarifications

* helm3 --skip-crds

* Update docs/user.md

Co-Authored-By: Felix Kunde <felix-kunde@gmx.de>

* details moved to docs

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
ReSearchITEng 2020-04-09 10:16:45 +03:00 committed by GitHub
parent 4dee8918bd
commit 7232326159
5 changed files with 17 additions and 7 deletions


@@ -277,11 +277,11 @@ configConnectionPooler:
 # docker image
 connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer"
 # max db connections the pooler should hold
-connection_pooler_max_db_connections: 60
+connection_pooler_max_db_connections: "60"
 # default pooling mode
 connection_pooler_mode: "transaction"
 # number of pooler instances
-connection_pooler_number_of_instances: 2
+connection_pooler_number_of_instances: "2"
 # default resources
 connection_pooler_default_cpu_request: 500m
 connection_pooler_default_memory_request: 100Mi
@@ -294,6 +294,7 @@ rbac:
 crd:
 # Specifies whether custom resource definitions should be created
+# When using helm3, this is ignored; instead use "--skip-crds" to skip.
 create: true
 serviceAccount:
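
The comment added above refers to Helm 3's built-in CRD handling. A minimal
sketch of skipping CRD installation on the chart-user side (release name and
chart path are illustrative):

```sh
# Helm 3 installs CRDs from the chart's crds/ directory by default;
# pass --skip-crds when the CRDs are managed outside the release.
helm install postgres-operator ./charts/postgres-operator --skip-crds
```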


@@ -254,11 +254,11 @@ configConnectionPooler:
 # docker image
 connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer"
 # max db connections the pooler should hold
-connection_pooler_max_db_connections: 60
+connection_pooler_max_db_connections: "60"
 # default pooling mode
 connection_pooler_mode: "transaction"
 # number of pooler instances
-connection_pooler_number_of_instances: 2
+connection_pooler_number_of_instances: "2"
 # default resources
 connection_pooler_default_cpu_request: 500m
 connection_pooler_default_memory_request: 100Mi
@@ -271,6 +271,7 @@ rbac:
 crd:
 # Specifies whether custom resource definitions should be created
+# When using helm3, this is ignored; instead use "--skip-crds" to skip.
 create: true
 serviceAccount:


@@ -419,5 +419,9 @@ Those parameters are grouped under the `tls` top-level key.
 Filename of the private key. Defaults to "tls.key".
 * **caFile**
-Optional filename to the CA certificate. Useful when the client connects
-with `sslmode=verify-ca` or `sslmode=verify-full`. Default is empty.
+Optional filename to the CA certificate (e.g. "ca.crt"). Useful when the
+client connects with `sslmode=verify-ca` or `sslmode=verify-full`.
+Default is empty.
+Optionally one can provide full path for any of them. By default it is
+relative to the "/tls/", which is mount path of the tls secret.
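
Since `caFile` points at a file under the same "/tls/" mount, the CA needs to
be part of the same secret as the certificate and key. A hedged sketch of
building such a secret (the secret name and local file names are illustrative;
`generic` is used because `kubectl create secret tls` only accepts a cert and
key):

```sh
# The secret's keys ("tls.crt", "tls.key", "ca.crt") become the file names
# available under the default "/tls/" mount path.
kubectl create secret generic pg-tls \
  --from-file=tls.crt=server.crt \
  --from-file=tls.key=server.key \
  --from-file=ca.crt=ca.crt
```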


@@ -584,7 +584,8 @@ don't know the value, use `103` which is the GID from the default spilo image
 OpenShift allocates the users and groups dynamically (based on scc), and their
 range is different in every namespace. Due to this dynamic behaviour, it's not
 trivial to know at deploy time the uid/gid of the user in the cluster.
-This way, in OpenShift, you may want to skip the spilo_fsgroup setting.
+Therefore, instead of using a global `spilo_fsgroup` setting, use the `spiloFSGroup` field
+per Postgres cluster.
 Upload the cert as a kubernetes secret:
 ```sh


@@ -125,5 +125,8 @@ spec:
 certificateFile: "tls.crt"
 privateKeyFile: "tls.key"
 caFile: "" # optionally configure Postgres with a CA certificate
+# file names can be also defined with absolute path, and will no longer be relative
+# to the "/tls/" path where the secret is being mounted by default.
+# When TLS is enabled, also set spiloFSGroup parameter above to the relevant value.
+# if unknown, set it to 103 which is the usual value in the default spilo images.
+# In Openshift, there is no need to set spiloFSGroup/spilo_fsgroup.
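
Tying these comments together, a sketch of a per-cluster manifest that enables
TLS and sets `spiloFSGroup` (cluster name, team, sizes and secret name are
illustrative; 103 is the GID from the default spilo image mentioned above, and
the field would be omitted on OpenShift):

```sh
# Illustrative only: minimal postgresql manifest with TLS files read from the
# "pg-tls" secret and spiloFSGroup set per cluster.
kubectl apply -f - <<EOF
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  numberOfInstances: 2
  volume:
    size: 1Gi
  postgresql:
    version: "12"
  spiloFSGroup: 103
  tls:
    secretName: "pg-tls"
    certificateFile: "tls.crt"
    privateKeyFile: "tls.key"
    caFile: "ca.crt"
EOF
```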