* Change error computation on JSON Unmarshal
The [Unmarshal function][1] in the standard library's encoding/json package
returns different errors for different Go versions. On Go 1.12, the version
currently used on the CI system, it returns `json: cannot unmarshal number into
Go struct field PostgresSpec.teamId of type string`. On Go 1.13.5 it
returns `json: cannot unmarshal number into Go struct field
PostgresSpec.spec.teamId of type string`. The newer version includes more
details about the whole structure being unmarshalled.
This commit introduces the same error one level deeper in the JSON
structure, creating consistency across different Go versions.
[1]: https://godoc.org/encoding/json#Unmarshal
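A minimal sketch of the difference, with illustrative struct names (the real
CRD types are more involved):
```
package main

import (
	"encoding/json"
	"fmt"
)

type PostgresSpec struct {
	TeamID string `json:"teamId"`
}

type Postgresql struct {
	Spec PostgresSpec `json:"spec"`
}

func main() {
	// teamId is a number in the input but a string in the struct,
	// so Unmarshal returns a *json.UnmarshalTypeError.
	err := json.Unmarshal([]byte(`{"spec": {"teamId": 42}}`), &Postgresql{})
	fmt.Println(err)
	// Go 1.12 reports only the leaf field:
	//   json: cannot unmarshal number into Go struct field PostgresSpec.teamId of type string
	// Go 1.13+ includes the nested path:
	//   json: cannot unmarshal number into Go struct field PostgresSpec.spec.teamId of type string
}
```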
* Create subtests on table test scenarios
The Run method of testing.T allows defining subtests, creating hierarchical
tests. It provides better visibility of tests in case of failure. More
details at https://golang.org/pkg/testing/.
This commit converts each test scenario in
pkg/apis/acid.zalan.do/v1/util_test.go to a subtest, providing better
visibility and a better debugging environment when working with tests. The
following code snippet shows an error during test execution with
subtests:
```
--- FAIL: TestUnmarshalMaintenanceWindow (0.00s)
--- FAIL: TestUnmarshalMaintenanceWindow/expect_error_as_'From'_is_later_than_'To' (0.00s)
```
It also includes an `about` field in each test scenario, describing the
test's purpose and/or its expected output. Where a description was
previously provided as a comment, it was moved to the `about` field.
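A minimal sketch of the resulting pattern, with an illustrative scenario
struct and a stand-in for the code under test (the real scenarios in
util_test.go differ):
```
package v1

import (
	"fmt"
	"strings"
	"testing"
)

// parseWindow is a stand-in for the real unmarshalling logic under test.
func parseWindow(s string) error {
	parts := strings.Split(s, "-")
	if len(parts) != 2 || parts[0] >= parts[1] {
		return fmt.Errorf("'From' must be earlier than 'To'")
	}
	return nil
}

func TestUnmarshalMaintenanceWindow(t *testing.T) {
	tests := []struct {
		about string // purpose and/or expected outcome of the scenario
		in    string
		ok    bool
	}{
		{about: "expect error as 'From' is later than 'To'", in: "12:00-10:00", ok: false},
		{about: "expect success for a valid window", in: "10:00-12:00", ok: true},
	}
	for _, tt := range tests {
		// t.Run creates a named subtest, so a failure is reported as
		// TestUnmarshalMaintenanceWindow/<about>, as in the snippet above.
		t.Run(tt.about, func(t *testing.T) {
			err := parseWindow(tt.in)
			if (err == nil) != tt.ok {
				t.Errorf("parseWindow(%q) = %v, want ok=%v", tt.in, err, tt.ok)
			}
		})
	}
}
```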
* add validation for PG resources and volume size
* check resource requests also on UPDATE and SYNC + update docs
* if cluster was running don't error on sync
* add CRD manifests with validation
* update documentation
* Patroni slots is not an array but a nested hash map
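A sketch of the corrected shape, assuming the slots section deserializes into
a Go type roughly like this (type and field names are assumptions):
```
package v1

// PatroniSettings sketches the Patroni section of the manifest: slots maps a
// slot name to that slot's settings, e.g.
//   {"permanent_logical_1": {"type": "logical", "database": "foo", "plugin": "pgoutput"}}
// i.e. a nested hash map rather than an array of slot definitions.
type PatroniSettings struct {
	Slots map[string]map[string]string `json:"slots,omitempty"`
}
```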
* make deps call tools
* cover validation in docs and export it in crds.go
* add toggle to disable creation of CRD validation and document it
* use templated service account also for CRD-configured helm deployment
* Added possibility to add custom annotations to LoadBalancer service.
* Added parameters for custom endpoint, access and secret key for logical backup.
* Modified dump.sh so it knows how to handle new features.
* Configurable S3 SSE
For optimization purposes the operator kept a cache map to remember
whether service accounts and role bindings had been deployed to a
namespace. This could lead to a problem when a namespace was deleted,
since the cache was not synchronized. For the sake of correctness, remove
the cache and check every time whether the required service account and
RBAC are present. In the normal case this introduces an overhead of two
API calls per event (one to get the service account, one to get the role
binding), which should not be a problem unless proven otherwise.
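A minimal sketch of the per-event check, assuming current client-go
interfaces; the function and object names are illustrative, not the
operator's actual code:
```
package controller

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespaceRBAC performs the two per-event API calls described above
// instead of consulting a local cache that goes stale when the namespace is
// deleted: one Get for the service account, one for the role binding.
func ensureNamespaceRBAC(ctx context.Context, client kubernetes.Interface, ns, saName, rbName string) error {
	if _, err := client.CoreV1().ServiceAccounts(ns).Get(ctx, saName, metav1.GetOptions{}); err != nil {
		if apierrors.IsNotFound(err) {
			return fmt.Errorf("service account %s/%s is missing and must be (re)created", ns, saName)
		}
		return err
	}
	if _, err := client.RbacV1().RoleBindings(ns).Get(ctx, rbName, metav1.GetOptions{}); err != nil {
		if apierrors.IsNotFound(err) {
			return fmt.Errorf("role binding %s/%s is missing and must be (re)created", ns, rbName)
		}
		return err
	}
	return nil
}
```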
* An attempt to build with modules and remove Glide
* new tools.go file to get code-generator dependency + updated codegen + remove Glide files and update docs
* align config map, operator config, helm chart values and templates
* follow helm chart conventions also in CRD templates
* split up values files and add comments
* avoid yaml confusion in postgres manifests
* bump spilo version and use example for logical_backup_s3_bucket
* add ConfigTarget switch to values
This will set up a continuous WAL-streaming cluster by adding the corresponding section to the postgres manifest. Instead of having a full-fledged standby cluster as in Patroni, here we use only the WAL path of the source cluster and stream from there.
Since the standby cluster streams from the master, it does not require creating or using databases of its own; hence it bypasses the creation of users and databases.
A separate sample manifest is added for setting up a standby cluster.
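A sketch of the Go type such a manifest section could deserialize into; the
type and field names are assumptions:
```
package v1

// StandbyDescription sketches the standby section of the postgres manifest:
// only a pointer to the source cluster's WAL location, since the standby
// streams from there and creates no users or databases of its own.
type StandbyDescription struct {
	S3WalPath string `json:"s3_wal_path"`
}
```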
* StatefulSet fsGroup config option to allow non-root spilo
* Allow the Postgres CRD to override the SpiloFSGroup of the operator.
* Document that the FSGroup of a Pod cannot be changed after creation.
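A minimal sketch of where the setting lands, using the k8s.io/api/core/v1
types (function and parameter names are illustrative):
```
package controller

import corev1 "k8s.io/api/core/v1"

// podSecurityContext builds the security context for the stateful set's pod
// template: setting fsGroup lets a non-root spilo process write to the data
// volume. A CRD-level value, when present, overrides the operator-wide one.
func podSecurityContext(operatorFSGroup, clusterFSGroup *int64) *corev1.PodSecurityContext {
	fsGroup := operatorFSGroup
	if clusterFSGroup != nil {
		fsGroup = clusterFSGroup
	}
	if fsGroup == nil {
		return nil // nothing configured; the pod keeps the image default
	}
	return &corev1.PodSecurityContext{FSGroup: fsGroup}
}
```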
* database.go: substitute hardcoded .svc.cluster.local DNS suffix with a config parameter
Use the pod's configured DNS search path for clusters where .svc.cluster.local is not correct.
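A sketch of the substitution, with an illustrative helper and parameter name
(the real database.go differs):
```
package controller

import "fmt"

// serviceDNSName builds a service FQDN from a configurable suffix such as
// "svc.cluster.local" instead of hardcoding it.
func serviceDNSName(service, namespace, dnsSuffix string) string {
	return fmt.Sprintf("%s.%s.%s", service, namespace, dnsSuffix)
}
```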
* Override clone S3 bucket path
Add the possibility to use a custom S3 bucket path for cloning a cluster
from an arbitrary bucket (e.g. from another Kubernetes cluster). For that,
a new config option `s3_wal_path` is introduced, which should point
to a location that spilo understands.
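A sketch of the clone section with the new option, and how the override might
be applied; all names beyond `s3_wal_path` itself are assumptions:
```
package v1

// CloneDescription sketches the clone section of the postgres manifest.
type CloneDescription struct {
	ClusterName string `json:"cluster,omitempty"`
	// S3WalPath, when set, points the clone at an arbitrary bucket location
	// that spilo understands, instead of an operator-derived default.
	S3WalPath string `json:"s3_wal_path,omitempty"`
}

// walPathForClone prefers the manifest-level override over the default.
func walPathForClone(clone CloneDescription, defaultPath string) string {
	if clone.S3WalPath != "" {
		return clone.S3WalPath
	}
	return defaultPath
}
```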