Merge branch 'master' into patch-1
commit da23207c5d
@@ -62,8 +62,6 @@ podAnnotations:
extraEnvs:
  []
# Exemple of settings to make snapshot view working in the ui when using AWS
# - name: WALE_S3_ENDPOINT
# value: https+path://s3.us-east-1.amazonaws.com:443
# - name: SPILO_S3_BACKUP_PREFIX
# value: spilo/
# - name: AWS_ACCESS_KEY_ID

@@ -83,8 +81,6 @@ extraEnvs:
# key: AWS_DEFAULT_REGION
# - name: SPILO_S3_BACKUP_BUCKET
# value: <s3 bucket used by the operator>
# - name: "USE_AWS_INSTANCE_PROFILE"
# value: "true"

# configure UI service
service:

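For reference, with the commented settings above filled in, the AWS snapshot configuration for the UI chart would look roughly like this; the bucket name is a placeholder and credentials are assumed to come from the instance profile:

```yaml
extraEnvs:
  - name: WALE_S3_ENDPOINT
    value: https+path://s3.us-east-1.amazonaws.com:443
  - name: SPILO_S3_BACKUP_PREFIX
    value: spilo/
  - name: SPILO_S3_BACKUP_BUCKET
    value: <s3 bucket used by the operator>   # placeholder
  - name: "USE_AWS_INSTANCE_PROFILE"
    value: "true"
```
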
@@ -384,7 +384,7 @@ exceptions:
The interval of days can be set with `password_rotation_interval` (default
`90` = 90 days, minimum 1). On each rotation the user name and password values
are replaced in the K8s secret. They belong to a newly created user named after
-the original role plus rotation date in YYMMDD format. All priviliges are
+the original role plus rotation date in YYMMDD format. All privileges are
inherited meaning that migration scripts should still grant and revoke rights
against the original role. The timestamp of the next rotation (in RFC 3339
format, UTC timezone) is written to the secret as well. Note, if the rotation

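A minimal sketch of how this rotation could be switched on in the operator configuration; `password_rotation_interval` is the parameter discussed above, while the `enable_password_rotation` flag and the flat ConfigMap layout are assumptions, not part of this diff:

```yaml
# ConfigMap-style operator configuration (assumed layout)
data:
  enable_password_rotation: "true"   # assumed flag; rotation is off by default
  password_rotation_interval: "30"   # days between rotations (default 90, minimum 1)
```
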
@@ -564,7 +564,7 @@ manifest affinity.
```

If `node_readiness_label_merge` is set to `"OR"` (default) the readiness label
-affinty will be appended with its own expressions block:
+affinity will be appended with its own expressions block:

```yaml
affinity:

@@ -1140,7 +1140,7 @@ metadata:
    iam.gke.io/gcp-service-account: <GCP_SERVICE_ACCOUNT_NAME>@<GCP_PROJECT_ID>.iam.gserviceaccount.com
```

-2. Specify the new custom service account in your [operator paramaters](./reference/operator_parameters.md)
+2. Specify the new custom service account in your [operator parameters](./reference/operator_parameters.md)

If using manual deployment or kustomize, this is done by setting
`pod_service_account_name` in your configuration file specified in the

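Taken together, a sketch of the custom service account plus the matching operator setting, with a placeholder account name; only the annotation and `pod_service_account_name` come from the text above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres-pod-custom   # placeholder name
  annotations:
    iam.gke.io/gcp-service-account: <GCP_SERVICE_ACCOUNT_NAME>@<GCP_PROJECT_ID>.iam.gserviceaccount.com
---
# operator configuration excerpt
pod_service_account_name: postgres-pod-custom
```
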
@@ -247,7 +247,7 @@ These parameters are grouped directly under the `spec` key in the manifest.
  [kubernetes volumeSource](https://godoc.org/k8s.io/api/core/v1#VolumeSource).
  It allows you to mount existing PersistentVolumeClaims, ConfigMaps and Secrets inside the StatefulSet.
  Also an `emptyDir` volume can be shared between initContainer and statefulSet.
-  Additionaly, you can provide a `SubPath` for volume mount (a file in a configMap source volume, for example).
+  Additionally, you can provide a `SubPath` for volume mount (a file in a configMap source volume, for example).
  Set `isSubPathExpr` to true if you want to include [API environment variables](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment).
  You can also specify in which container the additional Volumes will be mounted with the `targetContainers` array option.
  If `targetContainers` is empty, additional volumes will be mounted only in the `postgres` container.

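A rough illustration of the options described above (`additionalVolumes` with `volumeSource`, `subPath` and `targetContainers`); names are placeholders and the exact field spelling should be checked against the manifest reference:

```yaml
spec:
  additionalVolumes:
    # emptyDir shared between the postgres container and a sidecar
    - name: scratch
      mountPath: /scratch
      targetContainers:
        - postgres
        - my-sidecar      # placeholder sidecar name
      volumeSource:
        emptyDir: {}
    # a single file from a ConfigMap, mounted via subPath
    - name: init-script
      mountPath: /scripts/init.sql
      subPath: init.sql
      volumeSource:
        configMap:
          name: my-scripts
```
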
@@ -257,7 +257,7 @@ These parameters are grouped directly under the `spec` key in the manifest.
## Prepared Databases

The operator can create databases with default owner, reader and writer roles
-without the need to specifiy them under `users` or `databases` sections. Those
+without the need to specify them under `users` or `databases` sections. Those
parameters are grouped under the `preparedDatabases` top-level key. For more
information, see [user docs](../user.md#prepared-databases-with-roles-and-default-privileges).

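A small sketch of what such a `preparedDatabases` section can look like; database, schema and flag names below are illustrative, see the linked user docs for the full schema:

```yaml
spec:
  preparedDatabases:
    foo:
      defaultUsers: true     # also create LOGIN users in addition to the NOLOGIN default roles
      schemas:
        data: {}
        history:
          defaultRoles: false
```
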
@@ -209,7 +209,7 @@ under the `users` key.
  For all `LOGIN` roles that are not database owners the operator can rotate
  credentials in the corresponding K8s secrets by replacing the username and
  password. This means, new users will be added on each rotation inheriting
-  all priviliges from the original roles. The rotation date (in YYMMDD format)
+  all privileges from the original roles. The rotation date (in YYMMDD format)
  is appended to the names of the new user. The timestamp of the next rotation
  is written to the secret. The default is `false`.

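Decoded, a rotated credentials secret would then carry something like the following; the key names are assumptions based on the description above, not taken from this diff:

```yaml
# illustrative decoded contents of a rotated credentials secret
username: foo_user241216               # original role name plus rotation date (YYMMDD)
password: <newly generated password>
nextRotation: "2025-03-16T00:00:00Z"   # assumed key for the next-rotation timestamp
```
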
@@ -552,7 +552,7 @@ configuration they are grouped under the `kubernetes` key.
  pods with `InitialDelaySeconds: 6`, `PeriodSeconds: 10`, `TimeoutSeconds: 5`,
  `SuccessThreshold: 1` and `FailureThreshold: 3`. When enabling readiness
  probes it is recommended to switch the `pod_management_policy` to `parallel`
-  to avoid unneccesary waiting times in case of multiple instances failing.
+  to avoid unnecessary waiting times in case of multiple instances failing.
  The default is `false`.

* **storage_resize_mode**

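In the CRD-based configuration the two settings mentioned above would sit together under the `kubernetes` key, roughly as follows; the readiness-probe flag name is an assumption:

```yaml
configuration:
  kubernetes:
    enable_readiness_probe: true    # assumed name of the flag described above
    pod_management_policy: parallel # avoids serialized waits when several instances fail
```
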
@@ -701,7 +701,7 @@ In the CRD-based configuration they are grouped under the `load_balancer` key.
  replaced by the cluster name, `{namespace}` is replaced with the namespace
  and `{hostedzone}` is replaced with the hosted zone (the value of the
  `db_hosted_zone` parameter). The `{team}` placeholder can still be used,
-  although it is not recommened because the team of a cluster can change.
+  although it is not recommended because the team of a cluster can change.
  If the cluster name starts with the `teamId` it will also be part of the
  DNS, aynway. No other placeholders are allowed!

@@ -720,7 +720,7 @@ In the CRD-based configuration they are grouped under the `load_balancer` key.
  is replaced by the cluster name, `{namespace}` is replaced with the
  namespace and `{hostedzone}` is replaced with the hosted zone (the value of
  the `db_hosted_zone` parameter). The `{team}` placeholder can still be used,
-  although it is not recommened because the team of a cluster can change.
+  although it is not recommended because the team of a cluster can change.
  If the cluster name starts with the `teamId` it will also be part of the
  DNS, aynway. No other placeholders are allowed!

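Both excerpts describe the same placeholder rules; a sketch, assuming these are the `master_dns_name_format` and `replica_dns_name_format` parameters of the `load_balancer` section:

```yaml
configuration:
  load_balancer:
    db_hosted_zone: db.example.com                               # placeholder zone
    master_dns_name_format: "{cluster}.{namespace}.{hostedzone}"
    replica_dns_name_format: "{cluster}-repl.{namespace}.{hostedzone}"
```
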
@@ -900,7 +900,7 @@ the PostgreSQL version between source and target cluster has to be the same.

To start a cluster as standby, add the following `standby` section in the YAML
file. You can stream changes from archived WAL files (AWS S3 or Google Cloud
-Storage) or from a remote primary. Only one option can be specfied in the
+Storage) or from a remote primary. Only one option can be specified in the
manifest:

```yaml

@@ -911,7 +911,7 @@ spec:

For GCS, you have to define STANDBY_GOOGLE_APPLICATION_CREDENTIALS as a
[custom pod environment variable](administrator.md#custom-pod-environment-variables).
-It is not set from the config to allow for overridding.
+It is not set from the config to allow for overriding.

```yaml
spec:

@@ -1282,7 +1282,7 @@ minutes if the certificates have changed and reloads postgres accordingly.
### TLS certificates for connection pooler

By default, the pgBouncer image generates its own TLS certificate like Spilo.
-When the `tls` section is specfied in the manifest it will be used for the
+When the `tls` section is specified in the manifest it will be used for the
connection pooler pod(s) as well. The security context options are hard coded
to `runAsUser: 100` and `runAsGroup: 101`. The `fsGroup` will be the same
like for Spilo.

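On the manifest side this means that a cluster which already defines a `tls` section has its certificate reused by the pooler pods once the pooler is enabled. A minimal sketch with placeholder secret and file names:

```yaml
spec:
  enableConnectionPooler: true
  tls:
    secretName: pg-tls   # placeholder secret holding tls.crt / tls.key
    caFile: ca.crt       # optional CA bundle, if used
```
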
@@ -81,8 +81,6 @@ spec:
  ]
}
# Exemple of settings to make snapshot view working in the ui when using AWS
# - name: WALE_S3_ENDPOINT
# value: https+path://s3.us-east-1.amazonaws.com:443
# - name: SPILO_S3_BACKUP_PREFIX
# value: spilo/
# - name: AWS_ACCESS_KEY_ID

@@ -102,5 +100,3 @@ spec:
# key: AWS_DEFAULT_REGION
# - name: SPILO_S3_BACKUP_BUCKET
# value: <s3 bucket used by the operator>
# - name: "USE_AWS_INSTANCE_PROFILE"
# value: "true"

@@ -95,14 +95,6 @@ DEFAULT_MEMORY_LIMIT = getenv('DEFAULT_MEMORY_LIMIT', '300Mi')
DEFAULT_CPU = getenv('DEFAULT_CPU', '10m')
DEFAULT_CPU_LIMIT = getenv('DEFAULT_CPU_LIMIT', '300m')

-WALE_S3_ENDPOINT = getenv(
-    'WALE_S3_ENDPOINT',
-    'https+path://s3.eu-central-1.amazonaws.com:443',
-)

-USE_AWS_INSTANCE_PROFILE = (
-    getenv('USE_AWS_INSTANCE_PROFILE', 'false').lower() != 'false'
-)

AWS_ENDPOINT = getenv('AWS_ENDPOINT')

@@ -784,8 +776,6 @@ def get_versions(pg_cluster: str):
            bucket=SPILO_S3_BACKUP_BUCKET,
            pg_cluster=pg_cluster,
            prefix=SPILO_S3_BACKUP_PREFIX,
-            s3_endpoint=WALE_S3_ENDPOINT,
-            use_aws_instance_profile=USE_AWS_INSTANCE_PROFILE,
        ),
    )

@@ -797,9 +787,8 @@ def get_basebackups(pg_cluster: str, uid: str):
            bucket=SPILO_S3_BACKUP_BUCKET,
            pg_cluster=pg_cluster,
            prefix=SPILO_S3_BACKUP_PREFIX,
-            s3_endpoint=WALE_S3_ENDPOINT,
            uid=uid,
-            use_aws_instance_profile=USE_AWS_INSTANCE_PROFILE,
+            postgresql_versions=OPERATOR_UI_CONFIG.get('postgresql_versions', DEFAULT_UI_CONFIG['postgresql_versions']),
        ),
    )

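The new `postgresql_versions` argument comes from the UI configuration; assuming the UI keeps reading its settings from an `OPERATOR_UI_CONFIG` JSON environment variable (not shown in this diff), supplying the list could look like this:

```yaml
# env entry on the operator UI deployment (assumed mechanism)
- name: OPERATOR_UI_CONFIG
  value: '{"postgresql_versions": ["17", "16", "15", "14", "13"]}'
```
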
@@ -991,8 +980,6 @@ def main(port, debug, clusters: list):
    logger.info(f'Superuser team: {SUPERUSER_TEAM}')
    logger.info(f'Target namespace: {TARGET_NAMESPACE}')
    logger.info(f'Teamservice URL: {TEAM_SERVICE_URL}')
-    logger.info(f'Use AWS instance_profile: {USE_AWS_INSTANCE_PROFILE}')
-    logger.info(f'WAL-E S3 endpoint: {WALE_S3_ENDPOINT}')
    logger.info(f'AWS S3 endpoint: {AWS_ENDPOINT}')

    if TARGET_NAMESPACE is None:

@@ -6,9 +6,8 @@ from os import environ, getenv
from requests import Session
from urllib.parse import urljoin
from uuid import UUID
-from wal_e.cmd import configure_backup_cxt

-from .utils import Attrs, defaulting, these
+from .utils import defaulting, these
from operator_ui.adapters.logger import logger

session = Session()

@@ -284,10 +283,8 @@ def read_stored_clusters(bucket, prefix, delimiter='/'):
def read_versions(
    pg_cluster,
    bucket,
-    s3_endpoint,
    prefix,
    delimiter='/',
-    use_aws_instance_profile=False,
):
    return [
        'base' if uid == 'wal' else uid

@@ -305,35 +302,72 @@ def read_versions(
        if uid == 'wal' or defaulting(lambda: UUID(uid))
    ]

BACKUP_VERSION_PREFIXES = ['', '10/', '11/', '12/', '13/', '14/', '15/', '16/', '17/']
# A WAL file name is <timeline><log id><segment no.>, each 8 hex digits; with the
# default 16 MB segment size the log id is the high 32 bits of the LSN and the
# segment number is the low 32 bits divided by the segment size.
def lsn_to_wal_segment_stop(finish_lsn, start_segment, wal_segment_size=16 * 1024 * 1024):
    timeline = int(start_segment[:8], 16)
    log_id = finish_lsn >> 32
    seg_id = (finish_lsn & 0xFFFFFFFF) // wal_segment_size
    return f"{timeline:08X}{log_id:08X}{seg_id:08X}"

def lsn_to_offset_hex(lsn, wal_segment_size=16 * 1024 * 1024):
    return f"{lsn % wal_segment_size:08X}"

def read_basebackups(
    pg_cluster,
    uid,
    bucket,
    s3_endpoint,
    prefix,
    delimiter='/',
    use_aws_instance_profile=False,
    postgresql_versions,
):
    environ['WALE_S3_ENDPOINT'] = s3_endpoint
    suffix = '' if uid == 'base' else '/' + uid
    backups = []

    for vp in BACKUP_VERSION_PREFIXES:
    for vp in postgresql_versions:
        backup_prefix = f'{prefix}{pg_cluster}{suffix}/wal/{vp}/basebackups_005/'
        logger.info(f"{bucket}/{backup_prefix}")

        backups = backups + [
            {
                key: value
                for key, value in basebackup.__dict__.items()
                if isinstance(value, str) or isinstance(value, int)
            }
            for basebackup in Attrs.call(
                f=configure_backup_cxt,
                aws_instance_profile=use_aws_instance_profile,
                s3_prefix=f's3://{bucket}/{prefix}{pg_cluster}{suffix}/wal/{vp}',
            )._backup_list(detail=True)
        ]
        # List the *_backup_stop_sentinel.json objects written by WAL-G/WAL-E
        # through the S3 API instead of calling into wal_e.
        paginator = client('s3').get_paginator('list_objects_v2')
        pages = paginator.paginate(Bucket=bucket, Prefix=backup_prefix)

        for page in pages:
            for obj in page.get("Contents", []):
                key = obj["Key"]
                if not key.endswith("backup_stop_sentinel.json"):
                    continue

                response = client('s3').get_object(Bucket=bucket, Key=key)
                backup_info = loads(response["Body"].read().decode("utf-8"))
                last_modified = response["LastModified"].astimezone(timezone.utc).isoformat()

                backup_name = key.split("/")[-1].replace("_backup_stop_sentinel.json", "")
                start_seg, start_offset = backup_name.split("_")[1], backup_name.split("_")[-1] if "_" in backup_name else None

                if "LSN" in backup_info and "FinishLSN" in backup_info:
                    # WAL-G
                    lsn = backup_info["LSN"]
                    finish_lsn = backup_info["FinishLSN"]
                    backups.append({
                        "expanded_size_bytes": backup_info.get("UncompressedSize"),
                        "last_modified": last_modified,
                        "name": backup_name,
                        "wal_segment_backup_start": start_seg,
                        "wal_segment_backup_stop": lsn_to_wal_segment_stop(finish_lsn, start_seg),
                        "wal_segment_offset_backup_start": lsn_to_offset_hex(lsn),
                        "wal_segment_offset_backup_stop": lsn_to_offset_hex(finish_lsn),
                    })
                elif "wal_segment_backup_stop" in backup_info:
                    # WAL-E
                    stop_seg = backup_info["wal_segment_backup_stop"]
                    stop_offset = backup_info["wal_segment_offset_backup_stop"]

                    backups.append({
                        "expanded_size_bytes": backup_info.get("expanded_size_bytes"),
                        "last_modified": last_modified,
                        "name": backup_name,
                        "wal_segment_backup_start": start_seg,
                        "wal_segment_backup_stop": stop_seg,
                        "wal_segment_offset_backup_start": start_offset,
                        "wal_segment_offset_backup_stop": stop_offset,
                    })

    return backups

@@ -11,5 +11,4 @@ kubernetes==11.0.0
python-json-logger==2.0.7
requests==2.32.2
stups-tokens>=1.1.19
-wal_e==1.1.1
werkzeug==3.0.6