Create cross namespace secrets (#1490)
* Create cross namespace secrets
* add test cases
* fixes
* Fixes
  - include namespace in secret name only when namespace is provided
  - use username.namespace as key to pgUsers only when namespace is provided
  - avoid conflict in the role creation in db by checking namespace along with the username
* Update unit tests
* Fix test case
* Fixes
  - update regular expression for usernames
  - add test to allow check for valid usernames
  - create pg roles with namespace (if any) appended in rolename
* add more test cases for valid usernames
* update docs
* fixes as per review comments
* update e2e
* fixes
* Add toggle to allow namespaced secrets
* update docs
* comment update
* Update e2e/tests/test_e2e.py
* few minor fixes
* fix unit tests
* fix e2e
* fix e2e attempt 2
* fix e2e

Co-authored-by: Rafia Sabih <rafia.sabih@zalando.de>
Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
Commit 75a9e2be38, parent 9668ac21a3
@@ -515,6 +515,8 @@ spec:
             type: integer
           useLoadBalancer: # deprecated
             type: boolean
+          enableNamespacedSecret:
+            type: boolean
           users:
             type: object
             additionalProperties:
@@ -148,7 +148,10 @@ configKubernetes:
   # Postgres pods are terminated forcefully after this timeout
   pod_terminate_grace_period: 5m

-  # template for database user secrets generated by the operator
+  # template for database user secrets generated by the operator,
+  # here username contains the namespace in the format namespace.username
+  # if the user is in a different namespace than the cluster and cross
+  # namespace secrets are enabled via the EnableNamespacedSecret flag.
   secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
   # set user and group for the spilo container (required to run Spilo as non-root process)
   # spilo_runasuser: 101
@@ -172,11 +172,11 @@ under the `users` key.

 ## Major version upgrades

-Parameters configuring automatic major version upgrades. In a
+Parameters configuring automatic major version upgrades. In a
 CRD-configuration, they are grouped under the `major_version_upgrade` key.

 * **major_version_upgrade_mode**
-  Postgres Operator supports [in-place major version upgrade](../administrator.md#in-place-major-version-upgrade)
+  Postgres Operator supports [in-place major version upgrade](../administrator.md#in-place-major-version-upgrade)
   with three different modes:
   `"off"` = no upgrade by the operator,
   `"manual"` = manifest triggers action,
@@ -275,11 +275,14 @@ configuration they are grouped under the `kubernetes` key.

 * **secret_name_template**
   a template for the name of the database user secrets generated by the
-  operator. `{username}` is replaced with name of the secret, `{cluster}` with
-  the name of the cluster, `{tprkind}` with the kind of CRD (formerly known as
-  TPR) and `{tprgroup}` with the group of the CRD. No other placeholders are
-  allowed. The default is
-  `{username}.{cluster}.credentials.{tprkind}.{tprgroup}`.
+  operator. `{namespace}` is replaced with the name of the namespace (only if
+  cross namespace secrets are enabled via the `EnableNamespacedSecret` flag;
+  otherwise the secret lives in the cluster's namespace and the placeholder is
+  dropped from the secret name), `{username}` is replaced with the name of the
+  secret, `{cluster}` with the name of the cluster, `{tprkind}` with the kind
+  of CRD (formerly known as TPR) and `{tprgroup}` with the group of the CRD.
+  No other placeholders are allowed. The default is
+  `{namespace}.{username}.{cluster}.credentials.{tprkind}.{tprgroup}`.

 * **cluster_domain**
   defines the default DNS domain for the kubernetes cluster the operator is
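
To make the documented placeholder behavior concrete, here is a minimal, hypothetical sketch of the expansion: plain string substitution that drops the `{namespace}` segment for users living in the cluster's own namespace. This illustrates the contract described above; it is not the operator's actual templating code.

```go
package main

import (
	"fmt"
	"strings"
)

// expandSecretName is an illustrative stand-in for the operator's template
// expansion. It drops the "{namespace}." segment when the user lives in the
// cluster's own namespace, mirroring the documented behavior above.
func expandSecretName(template, namespace, clusterNamespace, username, cluster string) string {
	if namespace == "" || namespace == clusterNamespace {
		template = strings.Replace(template, "{namespace}.", "", 1)
	}
	return strings.NewReplacer(
		"{namespace}", namespace,
		"{username}", username,
		"{cluster}", cluster,
		"{tprkind}", "postgresql",
		"{tprgroup}", "acid.zalan.do",
	).Replace(template)
}

func main() {
	tmpl := "{namespace}.{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
	// cross namespace user: the namespace prefix is kept
	fmt.Println(expandSecretName(tmpl, "appspace", "default", "db_user", "acid-minimal-cluster"))
	// same-namespace user: the namespace segment is omitted
	fmt.Println(expandSecretName(tmpl, "default", "default", "zalando", "acid-minimal-cluster"))
}
```

Under these assumptions, `appspace.db_user` in cluster `acid-minimal-cluster` yields `appspace.db_user.acid-minimal-cluster.credentials.postgresql.acid.zalan.do`, matching the form given in docs/user.md below.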
docs/user.md | 21
@@ -139,6 +139,25 @@ secret, without ever sharing it outside of the cluster.
 At the moment it is not possible to define membership of the manifest role in
 other roles.

+To define the secrets for the users in a different namespace than that of the
+cluster, one can use the `EnableNamespacedSecret` flag and declare the
+namespace for the secrets in the manifest in the following manner:
+
+```yaml
+spec:
+  users:
+    # users with secret in a different namespace
+    appspace.db_user:
+    - createdb
+```
+
+Here, anything before the first dot is taken as the namespace and the text
+after the first dot is the username. Also, the Postgres roles of these
+usernames would be in the form of `namespace.username`.
+
+For such usernames, the secret is created in the given namespace and its name
+is of the following form:
+`{namespace}.{username}.{team}-{clustername}.credentials.postgresql.acid.zalan.do`
+
 ### Infrastructure roles

 An infrastructure role is a role that should be present on every PostgreSQL
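
The namespace/username split described above can be illustrated with a short Go snippet. The helper below is hypothetical (the operator's actual parsing appears in `initRobotUsers` further down in this diff); it splits on the first dot only, so any later dots stay part of the username.

```go
package main

import (
	"fmt"
	"strings"
)

// splitUser is an illustrative helper, not operator code: anything before
// the first dot is treated as the namespace, the rest as the username.
func splitUser(manifestKey, clusterNamespace string) (namespace, username string) {
	if parts := strings.SplitN(manifestKey, ".", 2); len(parts) == 2 {
		return parts[0], parts[1]
	}
	// no dot: the secret stays in the cluster's own namespace
	return clusterNamespace, manifestKey
}

func main() {
	ns, user := splitUser("appspace.db_user", "default")
	fmt.Printf("namespace=%s username=%s\n", ns, user) // namespace=appspace username=db_user

	ns, user = splitUser("zalando", "default")
	fmt.Printf("namespace=%s username=%s\n", ns, user) // namespace=default username=zalando
}
```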
@@ -330,7 +349,7 @@ spec:

 This creates roles for members of the `c-team` team not only in all clusters
 owned by `a-team`, but as well in cluster owned by `b-team`, as `a-team` is
-an `additionalTeam` to `b-team`
+an `additionalTeam` to `b-team`

 Note, you can also define `additionalSuperuserTeams` in the `PostgresTeam`
 manifest. By default, this option is disabled and must be configured with
@@ -197,6 +197,16 @@ class K8s:
             pod_phase = pods[0].status.phase
             time.sleep(self.RETRY_TIMEOUT_SEC)

+    def wait_for_namespace_creation(self, namespace='default'):
+        ns_found = False
+        while not ns_found:
+            ns = self.api.core_v1.list_namespace().items
+            for n in ns:
+                if n.metadata.name == namespace:
+                    ns_found = True
+                    break
+            time.sleep(self.RETRY_TIMEOUT_SEC)
+
     def get_logical_backup_job(self, namespace='default'):
         return self.api.batch_v1_beta1.list_namespaced_cron_job(namespace, label_selector="application=spilo")
@@ -253,7 +253,7 @@ class EndToEndTestCase(unittest.TestCase):
             WHERE (rolname = 'tester' AND rolcanlogin)
                OR (rolname = 'kind_delete_me' AND NOT rolcanlogin);
         """
-        self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", user_query)), 2,
+        self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", user_query)), 2,
             "Database role of replaced member in PostgresTeam not renamed", 10, 5)

         # re-add additional member and check if the role is renamed back
@@ -276,7 +276,7 @@ class EndToEndTestCase(unittest.TestCase):
             WHERE (rolname = 'kind' AND rolcanlogin)
                OR (rolname = 'tester_delete_me' AND NOT rolcanlogin);
         """
-        self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", user_query)), 2,
+        self.eventuallyEqual(lambda: len(self.query_database(leader.metadata.name, "postgres", user_query)), 2,
             "Database role of recreated member in PostgresTeam not renamed back to original name", 10, 5)

         # revert config change
@@ -322,7 +322,6 @@ class EndToEndTestCase(unittest.TestCase):
         self.eventuallyEqual(lambda: self.k8s.count_running_pods("connection-pooler=acid-minimal-cluster-pooler"),
                              0, "Pooler pods not scaled down")

-
     @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
     def test_enable_disable_connection_pooler(self):
         '''
@@ -568,6 +567,7 @@ class EndToEndTestCase(unittest.TestCase):
         role.pop("Password", None)
         self.assertDictEqual(role, {
             "Name": "robot_zmon_acid_monitoring_new",
+            "Namespace": "",
             "Flags": None,
             "MemberOf": ["robot_zmon"],
             "Parameters": None,
@@ -587,6 +587,41 @@ class EndToEndTestCase(unittest.TestCase):
             print('Operator log: {}'.format(k8s.get_operator_log()))
             raise

+    @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
+    def test_zz_cross_namespace_secrets(self):
+        '''
+        Test secrets in different namespace
+        '''
+        app_namespace = "appspace"
+
+        v1_appnamespace = client.V1Namespace(metadata=client.V1ObjectMeta(name=app_namespace))
+        self.k8s.api.core_v1.create_namespace(v1_appnamespace)
+        self.k8s.wait_for_namespace_creation(app_namespace)
+
+        self.k8s.api.custom_objects_api.patch_namespaced_custom_object(
+            'acid.zalan.do', 'v1', 'default',
+            'postgresqls', 'acid-minimal-cluster',
+            {
+                'spec': {
+                    'enableNamespacedSecret': True,
+                    'users': {
+                        'appspace.db_user': [],
+                    }
+                }
+            })
+        self.eventuallyEqual(lambda: self.k8s.count_secrets_with_label("cluster-name=acid-minimal-cluster,application=spilo", app_namespace),
+                             1, "Secret not created for user in namespace")
+
+        # reset the flag
+        self.k8s.api.custom_objects_api.patch_namespaced_custom_object(
+            'acid.zalan.do', 'v1', 'default',
+            'postgresqls', 'acid-minimal-cluster',
+            {
+                'spec': {
+                    'enableNamespacedSecret': False,
+                }
+            })
+
     @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
     def test_lazy_spilo_upgrade(self):
         '''
@@ -12,6 +12,7 @@ spec:
   dockerImage: registry.opensource.zalan.do/acid/spilo-13:2.0-p7
   teamId: "acid"
   numberOfInstances: 2
+  enableNamespacedSecret: False
   users:  # Application/Robot users
     zalando:
     - superuser
@@ -730,6 +730,9 @@ var PostgresCRDResourceValidation = apiextv1.CustomResourceValidation{
 				Type:        "boolean",
 				Description: "Deprecated",
 			},
+			"enableNamespacedSecret": {
+				Type: "boolean",
+			},
 			"users": {
 				Type: "object",
 				AdditionalProperties: &apiextv1.JSONSchemaPropsOrBool{
@@ -53,27 +53,28 @@ type PostgresSpec struct {
 	// load balancers' source ranges are the same for master and replica services
 	AllowedSourceRanges []string `json:"allowedSourceRanges"`

-	NumberOfInstances     int32                       `json:"numberOfInstances"`
-	Users                 map[string]UserFlags        `json:"users,omitempty"`
-	MaintenanceWindows    []MaintenanceWindow         `json:"maintenanceWindows,omitempty"`
-	Clone                 *CloneDescription           `json:"clone,omitempty"`
-	ClusterName           string                      `json:"-"`
-	Databases             map[string]string           `json:"databases,omitempty"`
-	PreparedDatabases     map[string]PreparedDatabase `json:"preparedDatabases,omitempty"`
-	SchedulerName         *string                     `json:"schedulerName,omitempty"`
-	NodeAffinity          *v1.NodeAffinity            `json:"nodeAffinity,omitempty"`
-	Tolerations           []v1.Toleration             `json:"tolerations,omitempty"`
-	Sidecars              []Sidecar                   `json:"sidecars,omitempty"`
-	InitContainers        []v1.Container              `json:"initContainers,omitempty"`
-	PodPriorityClassName  string                      `json:"podPriorityClassName,omitempty"`
-	ShmVolume             *bool                       `json:"enableShmVolume,omitempty"`
-	EnableLogicalBackup   bool                        `json:"enableLogicalBackup,omitempty"`
-	LogicalBackupSchedule string                      `json:"logicalBackupSchedule,omitempty"`
-	StandbyCluster        *StandbyDescription         `json:"standby,omitempty"`
-	PodAnnotations        map[string]string           `json:"podAnnotations,omitempty"`
-	ServiceAnnotations    map[string]string           `json:"serviceAnnotations,omitempty"`
-	TLS                   *TLSDescription             `json:"tls,omitempty"`
-	AdditionalVolumes     []AdditionalVolume          `json:"additionalVolumes,omitempty"`
+	NumberOfInstances      int32                       `json:"numberOfInstances"`
+	EnableNamespacedSecret *bool                       `json:"enableNamespacedSecret,omitempty"`
+	Users                  map[string]UserFlags        `json:"users,omitempty"`
+	MaintenanceWindows     []MaintenanceWindow         `json:"maintenanceWindows,omitempty"`
+	Clone                  *CloneDescription           `json:"clone,omitempty"`
+	ClusterName            string                      `json:"-"`
+	Databases              map[string]string           `json:"databases,omitempty"`
+	PreparedDatabases      map[string]PreparedDatabase `json:"preparedDatabases,omitempty"`
+	SchedulerName          *string                     `json:"schedulerName,omitempty"`
+	NodeAffinity           *v1.NodeAffinity            `json:"nodeAffinity,omitempty"`
+	Tolerations            []v1.Toleration             `json:"tolerations,omitempty"`
+	Sidecars               []Sidecar                   `json:"sidecars,omitempty"`
+	InitContainers         []v1.Container              `json:"initContainers,omitempty"`
+	PodPriorityClassName   string                      `json:"podPriorityClassName,omitempty"`
+	ShmVolume              *bool                       `json:"enableShmVolume,omitempty"`
+	EnableLogicalBackup    bool                        `json:"enableLogicalBackup,omitempty"`
+	LogicalBackupSchedule  string                      `json:"logicalBackupSchedule,omitempty"`
+	StandbyCluster         *StandbyDescription         `json:"standby,omitempty"`
+	PodAnnotations         map[string]string           `json:"podAnnotations,omitempty"`
+	ServiceAnnotations     map[string]string           `json:"serviceAnnotations,omitempty"`
+	TLS                    *TLSDescription             `json:"tls,omitempty"`
+	AdditionalVolumes      []AdditionalVolume          `json:"additionalVolumes,omitempty"`

 	// deprecated json tags
 	InitContainersOld []v1.Container `json:"init_containers,omitempty"`
@@ -614,6 +614,11 @@ func (in *PostgresSpec) DeepCopyInto(out *PostgresSpec) {
 		*out = make([]string, len(*in))
 		copy(*out, *in)
 	}
+	if in.EnableNamespacedSecret != nil {
+		in, out := &in.EnableNamespacedSecret, &out.EnableNamespacedSecret
+		*out = new(bool)
+		**out = **in
+	}
 	if in.Users != nil {
 		in, out := &in.Users, &out.Users
 		*out = make(map[string]UserFlags, len(*in))
@@ -940,14 +940,16 @@ func (c *Cluster) initSystemUsers() {
 	// secrets, therefore, setting flags like SUPERUSER or REPLICATION
 	// is not necessary here
 	c.systemUsers[constants.SuperuserKeyName] = spec.PgUser{
-		Origin:   spec.RoleOriginSystem,
-		Name:     c.OpConfig.SuperUsername,
-		Password: util.RandomPassword(constants.PasswordLength),
+		Origin:    spec.RoleOriginSystem,
+		Name:      c.OpConfig.SuperUsername,
+		Namespace: c.Namespace,
+		Password:  util.RandomPassword(constants.PasswordLength),
 	}
 	c.systemUsers[constants.ReplicationUserKeyName] = spec.PgUser{
-		Origin:   spec.RoleOriginSystem,
-		Name:     c.OpConfig.ReplicationUsername,
-		Password: util.RandomPassword(constants.PasswordLength),
+		Origin:    spec.RoleOriginSystem,
+		Name:      c.OpConfig.ReplicationUsername,
+		Namespace: c.Namespace,
+		Password:  util.RandomPassword(constants.PasswordLength),
 	}

 	// Connection pooler user is an exception, if requested it's going to be
@@ -975,10 +977,11 @@ func (c *Cluster) initSystemUsers() {

 		// connection pooler application should be able to login with this role
 		connectionPoolerUser := spec.PgUser{
-			Origin:   spec.RoleConnectionPooler,
-			Name:     username,
-			Flags:    []string{constants.RoleFlagLogin},
-			Password: util.RandomPassword(constants.PasswordLength),
+			Origin:    spec.RoleConnectionPooler,
+			Name:      username,
+			Namespace: c.Namespace,
+			Flags:     []string{constants.RoleFlagLogin},
+			Password:  util.RandomPassword(constants.PasswordLength),
 		}

 		if _, exists := c.pgUsers[username]; !exists {
@@ -1081,6 +1084,7 @@ func (c *Cluster) initDefaultRoles(defaultRoles map[string]string, admin, prefix
 		newRole := spec.PgUser{
 			Origin:    spec.RoleOriginBootstrap,
 			Name:      roleName,
+			Namespace: c.Namespace,
 			Password:  util.RandomPassword(constants.PasswordLength),
 			Flags:     flags,
 			MemberOf:  memberOf,
@@ -1105,6 +1109,17 @@ func (c *Cluster) initRobotUsers() error {
 		if c.shouldAvoidProtectedOrSystemRole(username, "manifest robot role") {
 			continue
 		}
+		namespace := c.Namespace
+
+		// if namespaced secrets are allowed
+		if c.Postgresql.Spec.EnableNamespacedSecret != nil &&
+			*c.Postgresql.Spec.EnableNamespacedSecret {
+			if strings.Contains(username, ".") {
+				splits := strings.Split(username, ".")
+				namespace = splits[0]
+			}
+		}
+
 		flags, err := normalizeUserFlags(userFlags)
 		if err != nil {
 			return fmt.Errorf("invalid flags for user %q: %v", username, err)
@@ -1116,6 +1131,7 @@ func (c *Cluster) initRobotUsers() error {
 		newRole := spec.PgUser{
 			Origin:    spec.RoleOriginManifest,
 			Name:      username,
+			Namespace: namespace,
 			Password:  util.RandomPassword(constants.PasswordLength),
 			Flags:     flags,
 			AdminRole: adminRole,
@@ -1233,6 +1249,7 @@ func (c *Cluster) initInfrastructureRoles() error {
 			return fmt.Errorf("invalid flags for user '%v': %v", username, err)
 		}
 		newRole.Flags = flags
+		newRole.Namespace = c.Namespace

 		if currentRole, present := c.pgUsers[username]; present {
 			c.pgUsers[username] = c.resolveNameConflict(&currentRole, &newRole)
@@ -7,12 +7,14 @@ import (

 	"github.com/sirupsen/logrus"
 	acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
+	fakeacidv1 "github.com/zalando/postgres-operator/pkg/generated/clientset/versioned/fake"
 	"github.com/zalando/postgres-operator/pkg/spec"
 	"github.com/zalando/postgres-operator/pkg/util/config"
 	"github.com/zalando/postgres-operator/pkg/util/constants"
 	"github.com/zalando/postgres-operator/pkg/util/k8sutil"
 	"github.com/zalando/postgres-operator/pkg/util/teams"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/client-go/kubernetes/fake"
 	"k8s.io/client-go/tools/record"
 )
@@ -79,8 +81,8 @@ func TestInitRobotUsers(t *testing.T) {
 	}{
 		{
 			manifestUsers: map[string]acidv1.UserFlags{"foo": {"superuser", "createdb"}},
-			infraRoles:    map[string]spec.PgUser{"foo": {Origin: spec.RoleOriginInfrastructure, Name: "foo", Password: "bar"}},
-			result:        map[string]spec.PgUser{"foo": {Origin: spec.RoleOriginInfrastructure, Name: "foo", Password: "bar"}},
+			infraRoles:    map[string]spec.PgUser{"foo": {Origin: spec.RoleOriginInfrastructure, Name: "foo", Namespace: cl.Namespace, Password: "bar"}},
+			result:        map[string]spec.PgUser{"foo": {Origin: spec.RoleOriginInfrastructure, Name: "foo", Namespace: cl.Namespace, Password: "bar"}},
 			err: nil,
 		},
 		{
@@ -845,3 +847,90 @@ func TestPreparedDatabases(t *testing.T) {
 		}
 	}
 }
+
+func TestCrossNamespacedSecrets(t *testing.T) {
+	testName := "test secrets in different namespace"
+	clientSet := fake.NewSimpleClientset()
+	acidClientSet := fakeacidv1.NewSimpleClientset()
+	namespace := "default"
+
+	client := k8sutil.KubernetesClient{
+		StatefulSetsGetter: clientSet.AppsV1(),
+		ServicesGetter:     clientSet.CoreV1(),
+		DeploymentsGetter:  clientSet.AppsV1(),
+		PostgresqlsGetter:  acidClientSet.AcidV1(),
+		SecretsGetter:      clientSet.CoreV1(),
+	}
+	pg := acidv1.Postgresql{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "acid-fake-cluster",
+			Namespace: namespace,
+		},
+		Spec: acidv1.PostgresSpec{
+			Volume: acidv1.Volume{
+				Size: "1Gi",
+			},
+			EnableNamespacedSecret: boolToPointer(true),
+			Users: map[string]acidv1.UserFlags{
+				"appspace.db_user": {},
+				"db_user":          {},
+			},
+		},
+	}
+
+	var cluster = New(
+		Config{
+			OpConfig: config.Config{
+				ConnectionPooler: config.ConnectionPooler{
+					ConnectionPoolerDefaultCPURequest:    "100m",
+					ConnectionPoolerDefaultCPULimit:      "100m",
+					ConnectionPoolerDefaultMemoryRequest: "100Mi",
+					ConnectionPoolerDefaultMemoryLimit:   "100Mi",
+					NumberOfInstances:                    int32ToPointer(1),
+				},
+				PodManagementPolicy: "ordered_ready",
+				Resources: config.Resources{
+					ClusterLabels:        map[string]string{"application": "spilo"},
+					ClusterNameLabel:     "cluster-name",
+					DefaultCPURequest:    "300m",
+					DefaultCPULimit:      "300m",
+					DefaultMemoryRequest: "300Mi",
+					DefaultMemoryLimit:   "300Mi",
+					PodRoleLabel:         "spilo-role",
+				},
+			},
+		}, client, pg, logger, eventRecorder)
+
+	userNamespaceMap := map[string]string{
+		cluster.Namespace: "db_user",
+		"appspace":        "appspace.db_user",
+	}
+
+	err := cluster.initRobotUsers()
+	if err != nil {
+		t.Errorf("Could not create secret for namespaced users with error: %s", err)
+	}
+
+	for _, u := range cluster.pgUsers {
+		if u.Name != userNamespaceMap[u.Namespace] {
+			t.Errorf("%s: Could not create namespaced user in its correct namespaces for user %s in namespace %s", testName, u.Name, u.Namespace)
+		}
+	}
+}
+
+func TestValidUsernames(t *testing.T) {
+	testName := "test username validity"
+
+	invalidUsernames := []string{"_", ".", ".user", "appspace.", "user_", "_user", "-user", "user-", ",", "-", ",user", "user,", "namespace,user"}
+	validUsernames := []string{"user", "appspace.user", "appspace.dot.user", "user_name", "app_space.user_name"}
+	for _, username := range invalidUsernames {
+		if isValidUsername(username) {
+			t.Errorf("%s Invalid username is allowed: %s", testName, username)
+		}
+	}
+	for _, username := range validUsernames {
+		if !isValidUsername(username) {
+			t.Errorf("%s Valid username is not allowed: %s", testName, username)
+		}
+	}
+}
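
The accepted and rejected names in `TestValidUsernames` imply a pattern along the following lines. The regex below is a hypothetical reconstruction written only to satisfy the listed cases; the commit's actual `isValidUsername` pattern is not visible in this diff.

```go
package main

import (
	"fmt"
	"regexp"
)

// A hypothetical username pattern consistent with the test cases above:
// dot-separated segments, each a run of lowercase alphanumerics optionally
// joined by single underscores; no leading or trailing dots, underscores,
// hyphens, or commas.
var userPattern = regexp.MustCompile(`^[a-z0-9]+(_[a-z0-9]+)*(\.[a-z0-9]+(_[a-z0-9]+)*)*$`)

func main() {
	for _, name := range []string{"appspace.db_user", "app_space.user_name", "_user", "appspace.", "user-"} {
		fmt.Printf("%-20s valid=%v\n", name, userPattern.MatchString(name))
	}
}
```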
@@ -1547,10 +1547,11 @@ func (c *Cluster) generateUserSecrets() map[string]*v1.Secret {
 	namespace := c.Namespace
 	for username, pgUser := range c.pgUsers {
 		// skip users with no password, i.e. human users (they'll be authenticated using pam)
-		secret := c.generateSingleUserSecret(namespace, pgUser)
+		secret := c.generateSingleUserSecret(pgUser.Namespace, pgUser)
 		if secret != nil {
 			secrets[username] = secret
 		}
+		namespace = pgUser.Namespace
 	}
 	/* special case for the system user */
 	for _, systemUser := range c.systemUsers {
@@ -1590,7 +1591,7 @@ func (c *Cluster) generateSingleUserSecret(namespace string, pgUser spec.PgUser)
 	secret := v1.Secret{
 		ObjectMeta: metav1.ObjectMeta{
 			Name:        c.credentialSecretName(username),
-			Namespace:   namespace,
+			Namespace:   pgUser.Namespace,
 			Labels:      lbls,
 			Annotations: c.annotationsSet(nil),
 		},
@@ -32,7 +32,7 @@ func (c *Cluster) listResources() error {
 	}

 	for _, obj := range c.Secrets {
-		c.logger.Infof("found secret: %q (uid: %q)", util.NameFromMeta(obj.ObjectMeta), obj.UID)
+		c.logger.Infof("found secret: %q (uid: %q) namespace: %s", util.NameFromMeta(obj.ObjectMeta), obj.UID, obj.ObjectMeta.Namespace)
 	}

 	for role, endpoint := range c.Endpoints {
@@ -483,7 +483,7 @@ func (c *Cluster) syncSecrets() error {
 	for secretUsername, secretSpec := range secrets {
 		if secret, err = c.KubeClient.Secrets(secretSpec.Namespace).Create(context.TODO(), secretSpec, metav1.CreateOptions{}); err == nil {
 			c.Secrets[secret.UID] = secret
-			c.logger.Debugf("created new secret %s, uid: %s", util.NameFromMeta(secret.ObjectMeta), secret.UID)
+			c.logger.Debugf("created new secret %s, namespace: %s, uid: %s", util.NameFromMeta(secret.ObjectMeta), secretSpec.Namespace, secret.UID)
 			continue
 		}
 		if k8sutil.ResourceAlreadyExists(err) {
@@ -521,7 +521,7 @@ func (c *Cluster) syncSecrets() error {
 				userMap[secretUsername] = pwdUser
 			}
 		} else {
-			return fmt.Errorf("could not create secret for user %s: %v", secretUsername, err)
+			return fmt.Errorf("could not create secret for user %s in namespace %s: %v", secretUsername, secretSpec.Namespace, err)
 		}
 	}
@@ -556,11 +556,17 @@ func (c *Cluster) syncRoles() (err error) {

 	// create list of database roles to query
 	for _, u := range c.pgUsers {
-		userNames = append(userNames, u.Name)
+		pgRole := u.Name
+		if u.Namespace != c.Namespace && u.Namespace != "" {
+			// to avoid conflicts when multiple users share a name but
+			// live in different namespaces
+			pgRole = fmt.Sprintf("%s.%s", u.Name, u.Namespace)
+		}
+		userNames = append(userNames, pgRole)
+
 		// add team member role name with rename suffix in case we need to rename it back
 		if u.Origin == spec.RoleOriginTeamsAPI && c.OpConfig.EnableTeamMemberDeprecation {
-			deletedUsers[u.Name+c.OpConfig.RoleDeletionSuffix] = u.Name
-			userNames = append(userNames, u.Name+c.OpConfig.RoleDeletionSuffix)
+			deletedUsers[pgRole+c.OpConfig.RoleDeletionSuffix] = pgRole
+			userNames = append(userNames, pgRole+c.OpConfig.RoleDeletionSuffix)
 		}
 	}
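
A small sketch of the disambiguation rule from the hunk above, using hypothetical stand-ins rather than the operator's `spec.PgUser` and `Cluster` types: users whose secret lives outside the cluster's namespace get that namespace appended to their database role name, so two same-named users in different namespaces cannot collide.

```go
package main

import "fmt"

// pgUser is a hypothetical stand-in for the operator's spec.PgUser.
type pgUser struct {
	Name      string
	Namespace string
}

// dbRoleName mirrors the rule in the hunk above: append the namespace for
// users outside the cluster's own namespace.
func dbRoleName(u pgUser, clusterNamespace string) string {
	if u.Namespace != clusterNamespace && u.Namespace != "" {
		return fmt.Sprintf("%s.%s", u.Name, u.Namespace)
	}
	return u.Name
}

func main() {
	fmt.Println(dbRoleName(pgUser{Name: "db_user", Namespace: "default"}, "default"))  // db_user
	fmt.Println(dbRoleName(pgUser{Name: "db_user", Namespace: "appspace"}, "default")) // db_user.appspace
}
```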
@@ -49,6 +49,7 @@ const (
 type PgUser struct {
 	Origin    RoleOrigin `yaml:"-"`
 	Name      string     `yaml:"-"`
+	Namespace string     `yaml:"-"`
 	Password  string     `yaml:"-"`
 	Flags     []string   `yaml:"user_flags"`
 	MemberOf  []string   `yaml:"inrole"`
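
For context, a sketch of how a cross namespace manifest user ends up represented after this change, using only fields and constants visible in the hunks above (assumed import path `github.com/zalando/postgres-operator/pkg/spec`):

```go
package main

import (
	"fmt"

	"github.com/zalando/postgres-operator/pkg/spec"
)

func main() {
	// The manifest key "appspace.db_user" yields a PgUser whose secret is
	// created in the "appspace" namespace instead of the cluster's own.
	u := spec.PgUser{
		Origin:    spec.RoleOriginManifest,
		Name:      "appspace.db_user",
		Namespace: "appspace",
	}
	fmt.Printf("user %s -> secret namespace %s\n", u.Name, u.Namespace)
}
```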