Allow configuring pod priority globally and per cluster. (#353)

* Allow configuring pod priority globally and per cluster.

Allow specifying a pod priority class for all pods managed by the operator,
as well as for those belonging to individual clusters.

Controlled by the pod_priority_class_name operator configuration
parameter and the podPriorityClassName manifest option.

See https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
for an explanation of how to define priority classes, available since Kubernetes 1.8.
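
To illustrate the resulting precedence, here is a minimal sketch, not the operator's actual code; the class name and variable names are hypothetical. The per-cluster `podPriorityClassName` wins, the operator-wide `pod_priority_class_name` is the fallback, and when both are empty the pods keep the default priority class.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical inputs: podPriorityClassName from a cluster manifest and
	// pod_priority_class_name from the operator configuration.
	clusterPriorityClass := ""
	operatorPriorityClass := "postgres-pod-priority"

	// Per-cluster value first, operator-wide value as the fallback.
	effective := clusterPriorityClass
	if effective == "" {
		effective = operatorPriorityClass
	}

	podSpec := v1.PodSpec{}
	// Leaving PriorityClassName empty keeps the pods on the default priority class.
	if effective != "" {
		podSpec.PriorityClassName = effective
	}
	fmt.Printf("pods get priority class %q\n", podSpec.PriorityClassName)
}
```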

Some import order changes are due to go fmt.
The deprecated OrphanDependents field is removed in favor of a deletion PropagationPolicy.

Code review by @zerg-junior
Oleksii Kliukin 2018-08-03 14:03:37 +02:00 committed by GitHub
parent ac7b132314
commit 59f0c5551e
21 changed files with 96 additions and 68 deletions

View File

@@ -89,7 +89,14 @@ Those are parameters grouped directly under the `spec` key in the manifest.
 examples](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)
 for details on tolerations and possible values of those keys. When set, this
 value overrides the `pod_toleration` setting from the operator. Optional.
+* **podPriorityClassName**
+a name of the [priority
+class](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass)
+that should be assigned to the cluster pods. When not specified, the value
+is taken from the `pod_priority_class_name` operator parameter; if that is
+not set either, the default priority class is used. The priority class
+itself must be defined in advance.
 ## Postgres parameters
 Those parameters are grouped under the `postgresql` top-level key.

View File

@@ -116,10 +116,15 @@ configuration they are grouped under the `kubernetes` key.
 option. If not defined, a simple definition that contains only the name will be used. The default is empty.
 * **pod_service_account_role_binding_definition**
-This definition must bind pod service account to a role with permission sufficient for the pods to start and for Patroni to access k8s endpoints; service account on its own lacks any such rights starting with k8s v1.8. If not excplicitly defined by the user, a simple definition that binds the account to the operator's own 'zalando-postgres-operator' cluster role will be used. The default is empty.
+This definition must bind the pod service account to a role with permission
+sufficient for the pods to start and for Patroni to access k8s endpoints; the
+service account on its own lacks any such rights starting with k8s v1.8. If
+not explicitly defined by the user, a simple definition that binds the
+account to the operator's own 'zalando-postgres-operator' cluster role will
+be used. The default is empty.
 * **pod_terminate_grace_period**
-Patroni pods are [terminated
+Postgres pods are [terminated
 forcefully](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods)
 after this timeout. The default is `5m`.
@@ -151,7 +156,7 @@ configuration they are grouped under the `kubernetes` key.
 name of the secret containing infrastructure roles names and passwords.
 * **pod_role_label**
-name of the label assigned to the postgres pods (and services/endpoints) by
+name of the label assigned to the Postgres pods (and services/endpoints) by
 the operator. The default is `spilo-role`.
 * **cluster_labels**
@@ -168,7 +173,7 @@ configuration they are grouped under the `kubernetes` key.
 considered `ready`. The operator uses values of those labels to detect the
 start of the Kubernetes cluster upgrade procedure and move master pods off
 the nodes to be decommissioned. When the set is not empty, the operator also
-assigns the `Affinity` clause to the postgres pods to be scheduled only on
+assigns the `Affinity` clause to the Postgres pods to be scheduled only on
 `ready` nodes. The default is empty.
 * **toleration**
@@ -184,6 +189,13 @@ configuration they are grouped under the `kubernetes` key.
 All variables from that ConfigMap are injected to the pod's environment, on
 conflicts they are overridden by the environment variables generated by the
 operator. The default is empty.
+* **pod_priority_class_name**
+a name of the [priority
+class](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass)
+that should be assigned to the Postgres pods. The priority class itself must be defined in advance.
+Default is empty (use the default priority class).
 ## Kubernetes resource requests
@@ -231,8 +243,8 @@ CRD-based configuration.
 possible issues faster. The default is `10m`.
 * **pod_deletion_wait_timeout**
-timeout when waiting for the pods to be deleted when removing the cluster or
-recreating pods. The default is `10m`.
+timeout when waiting for the Postgres pods to be deleted when removing the
+cluster or recreating pods. The default is `10m`.
 * **ready_wait_interval**
 the interval between consecutive attempts waiting for the postgres CRD to be
@@ -285,18 +297,19 @@ either. In the CRD-based configuration those options are grouped under the
 * **wal_s3_bucket**
 S3 bucket to use for shipping WAL segments with WAL-E. A bucket has to be
-present and accessible by Patroni managed pods. At the moment, supported
-services by Spilo are S3 and GCS. The default is empty.
+present and accessible by Postgres pods. At the moment, supported services by
+Spilo are S3 and GCS. The default is empty.
 * **log_s3_bucket**
 S3 bucket to use for shipping postgres daily logs. Works only with S3 on AWS.
-The bucket has to be present and accessible by Patroni managed pods. At the
-moment Spilo does not yet support this. The default is empty.
+The bucket has to be present and accessible by Postgres pods. At the moment
+Spilo does not yet support this. The default is empty.
 * **kube_iam_role**
-AWS IAM role to supply in the `iam.amazonaws.com/role` annotation of Patroni
+AWS IAM role to supply in the `iam.amazonaws.com/role` annotation of Postgres
 pods. Only used when combined with
-[kube2iam](https://github.com/jtblin/kube2iam) project on AWS. The default is empty.
+[kube2iam](https://github.com/jtblin/kube2iam) project on AWS. The default is
+empty.
 * **aws_region**
 AWS region used to store EBS volumes. The default is `eu-central-1`.

View File

@@ -12,11 +12,11 @@ import (
 "time"
 "github.com/Sirupsen/logrus"
+"k8s.io/api/apps/v1beta1"
+"k8s.io/api/core/v1"
+policybeta1 "k8s.io/api/policy/v1beta1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/types"
-"k8s.io/api/core/v1"
-"k8s.io/api/apps/v1beta1"
-policybeta1 "k8s.io/api/policy/v1beta1"
 "k8s.io/client-go/rest"
 "k8s.io/client-go/tools/cache"
@@ -91,7 +91,7 @@ type compareStatefulsetResult struct {
 // New creates a new cluster. This function should be called from a controller.
 func New(cfg Config, kubeClient k8sutil.KubernetesClient, pgSpec spec.Postgresql, logger *logrus.Entry) *Cluster {
-orphanDependents := true
+deletePropagationPolicy := metav1.DeletePropagationOrphan
 podEventsQueue := cache.NewFIFO(func(obj interface{}) (string, error) {
 e, ok := obj.(spec.PodEvent)
@@ -113,7 +113,7 @@ func New(cfg Config, kubeClient k8sutil.KubernetesClient, pgSpec spec.Postgresql
 Services: make(map[PostgresRole]*v1.Service),
 Endpoints: make(map[PostgresRole]*v1.Endpoints)},
 userSyncStrategy: users.DefaultUserSyncStrategy{},
-deleteOptions: &metav1.DeleteOptions{OrphanDependents: &orphanDependents},
+deleteOptions: &metav1.DeleteOptions{PropagationPolicy: &deletePropagationPolicy},
 podEventsQueue: podEventsQueue,
 KubeClient: kubeClient,
 }
@@ -601,7 +601,7 @@ func (c *Cluster) Delete() {
 }
 for _, obj := range c.Secrets {
-if delete, user := c.shouldDeleteSecret(obj); !delete {
+if doDelete, user := c.shouldDeleteSecret(obj); !doDelete {
 c.logger.Warningf("not removing secret %q for the system user %q", obj.GetName(), user)
 continue
 }
@@ -951,11 +951,11 @@ func (c *Cluster) deletePatroniClusterEndpoints() error {
 return util.NameFromMeta(ep.ObjectMeta), err
 }
-delete := func(name string) error {
+deleteEndpointFn := func(name string) error {
 return c.KubeClient.Endpoints(c.Namespace).Delete(name, c.deleteOptions)
 }
-return c.deleteClusterObject(get, delete, "endpoint")
+return c.deleteClusterObject(get, deleteEndpointFn, "endpoint")
 }
 func (c *Cluster) deletePatroniClusterConfigMaps() error {
@@ -964,9 +964,9 @@ func (c *Cluster) deletePatroniClusterConfigMaps() error {
 return util.NameFromMeta(cm.ObjectMeta), err
 }
-delete := func(name string) error {
+deleteConfigMapFn := func(name string) error {
 return c.KubeClient.ConfigMaps(c.Namespace).Delete(name, c.deleteOptions)
 }
-return c.deleteClusterObject(get, delete, "configmap")
+return c.deleteClusterObject(get, deleteConfigMapFn, "configmap")
 }
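
For context on the OrphanDependents change above, here is a self-contained sketch of the two equivalent DeleteOptions forms; the behaviour, orphaning dependent objects when their owner is deleted, stays the same.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Deprecated form that this commit removes from the operator code.
	orphan := true
	oldStyle := metav1.DeleteOptions{OrphanDependents: &orphan}

	// Replacement: the same behaviour expressed through a propagation policy.
	policy := metav1.DeletePropagationOrphan
	newStyle := metav1.DeleteOptions{PropagationPolicy: &policy}

	fmt.Println(*oldStyle.OrphanDependents, *newStyle.PropagationPolicy)
}
```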

View File

@@ -5,9 +5,9 @@ import (
 "fmt"
 "strings"
+"k8s.io/api/core/v1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/client-go/kubernetes/scheme"
-"k8s.io/api/core/v1"
 "k8s.io/client-go/tools/remotecommand"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
@@ -59,9 +59,9 @@ func (c *Cluster) ExecCommand(podName *spec.NamespacedName, command ...string) (
 }
 err = exec.Stream(remotecommand.StreamOptions{
 Stdout: &execOut,
 Stderr: &execErr,
 Tty: false,
 })
 if err != nil {

View File

@@ -6,6 +6,7 @@ import (
 "sort"
 "github.com/Sirupsen/logrus"
 "k8s.io/apimachinery/pkg/api/resource"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/types"
@@ -15,6 +16,7 @@ import (
 policybeta1 "k8s.io/api/policy/v1beta1"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
+"github.com/zalando-incubator/postgres-operator/pkg/util"
 "github.com/zalando-incubator/postgres-operator/pkg/util/constants"
 "k8s.io/apimachinery/pkg/labels"
 )
@@ -399,6 +401,7 @@ func generatePodTemplate(
 terminateGracePeriod int64,
 podServiceAccountName string,
 kubeIAMRole string,
+priorityClassName string,
 ) (*v1.PodTemplateSpec, error) {
 terminateGracePeriodSeconds := terminateGracePeriod
@@ -416,6 +419,10 @@ func generatePodTemplate(
 podSpec.Affinity = nodeAffinity
 }
+if priorityClassName != "" {
+podSpec.PriorityClassName = priorityClassName
+}
 template := v1.PodTemplateSpec{
 ObjectMeta: metav1.ObjectMeta{
 Labels: labels,
@@ -662,7 +669,7 @@ func (c *Cluster) generateStatefulSet(spec *spec.PostgresSpec) (*v1beta1.Statefu
 c.containerName(), c.logger)
 // pickup the docker image for the spilo container
-effectiveDockerImage := getEffectiveDockerImage(c.OpConfig.DockerImage, spec.DockerImage)
+effectiveDockerImage := util.Coalesce(spec.DockerImage, c.OpConfig.DockerImage)
 volumeMounts := generateVolumeMounts()
@@ -696,6 +703,7 @@ func (c *Cluster) generateStatefulSet(spec *spec.PostgresSpec) (*v1beta1.Statefu
 }
 tolerationSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration)
+effectivePodPriorityClassName := util.Coalesce(spec.PodPriorityClassName, c.OpConfig.PodPriorityClassName)
 // generate pod template for the statefulset, based on the spilo container and sidecards
 if podTemplate, err = generatePodTemplate(
@@ -707,8 +715,13 @@ func (c *Cluster) generateStatefulSet(spec *spec.PostgresSpec) (*v1beta1.Statefu
 nodeAffinity(c.OpConfig.NodeReadinessLabel),
 int64(c.OpConfig.PodTerminateGracePeriod.Seconds()),
 c.OpConfig.PodServiceAccountName,
-c.OpConfig.KubeIAMRole); err != nil {
-return nil, fmt.Errorf("could not generate pod template: %v", err)
+c.OpConfig.KubeIAMRole,
+effectivePodPriorityClassName); err != nil{
+return nil, fmt.Errorf("could not generate pod template: %v", err)
+}
+if err != nil {
+return nil, fmt.Errorf("could not generate pod template: %v", err)
 }
 if volumeClaimTemplate, err = generatePersistentVolumeClaimTemplate(spec.Volume.Size,
@@ -737,13 +750,6 @@ func (c *Cluster) generateStatefulSet(spec *spec.PostgresSpec) (*v1beta1.Statefu
 return statefulSet, nil
 }
-func getEffectiveDockerImage(globalDockerImage, clusterDockerImage string) string {
-if clusterDockerImage == "" {
-return globalDockerImage
-}
-return clusterDockerImage
-}
 func generateScalyrSidecarSpec(clusterName, APIKey, serverURL, dockerImage string,
 containerResources *spec.Resources, logger *logrus.Entry) *spec.Sidecar {
 if APIKey == "" || dockerImage == "" {
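
`util.Coalesce`, which replaces the removed `getEffectiveDockerImage` above, is assumed here to simply return its first non-empty argument; a minimal sketch under that assumption (the image names are made up):

```go
package main

import "fmt"

// coalesce mimics the assumed behaviour of util.Coalesce: return the first
// argument if it is non-empty, otherwise fall back to the second one.
func coalesce(preferred, fallback string) string {
	if preferred != "" {
		return preferred
	}
	return fallback
}

func main() {
	// Mirrors the removed getEffectiveDockerImage: a per-cluster image
	// overrides the operator-wide default.
	fmt.Println(coalesce("", "example.org/spilo:operator-default"))
	fmt.Println(coalesce("example.org/spilo:cluster-specific", "example.org/spilo:operator-default"))
}
```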

View File

@@ -4,8 +4,8 @@ import (
 "fmt"
 "math/rand"
-metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/api/core/v1"
+metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
 "github.com/zalando-incubator/postgres-operator/pkg/util"

View File

@@ -5,11 +5,11 @@ import (
 "strconv"
 "strings"
+"k8s.io/api/apps/v1beta1"
+"k8s.io/api/core/v1"
+policybeta1 "k8s.io/api/policy/v1beta1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/types"
-"k8s.io/api/core/v1"
-"k8s.io/api/apps/v1beta1"
-policybeta1 "k8s.io/api/policy/v1beta1"
 "github.com/zalando-incubator/postgres-operator/pkg/util"
 "github.com/zalando-incubator/postgres-operator/pkg/util/constants"
@@ -272,10 +272,10 @@ func (c *Cluster) replaceStatefulSet(newStatefulSet *v1beta1.StatefulSet) error
 c.logger.Debugf("replacing statefulset")
 // Delete the current statefulset without deleting the pods
-orphanDepencies := true
+deletePropagationPolicy := metav1.DeletePropagationOrphan
 oldStatefulset := c.Statefulset
-options := metav1.DeleteOptions{OrphanDependents: &orphanDepencies}
+options := metav1.DeleteOptions{PropagationPolicy: &deletePropagationPolicy}
 if err := c.KubeClient.StatefulSets(oldStatefulset.Namespace).Delete(oldStatefulset.Name, &options); err != nil {
 return fmt.Errorf("could not delete statefulset %q: %v", statefulSetName, err)
 }

View File

@@ -6,7 +6,6 @@ import (
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 policybeta1 "k8s.io/api/policy/v1beta1"
-"k8s.io/api/policy/v1beta1"
 "k8s.io/api/core/v1"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
@@ -188,7 +187,7 @@ func (c *Cluster) syncEndpoint(role PostgresRole) error {
 func (c *Cluster) syncPodDisruptionBudget(isUpdate bool) error {
 var (
-pdb *v1beta1.PodDisruptionBudget
+pdb *policybeta1.PodDisruptionBudget
 err error
 )
 if pdb, err = c.KubeClient.PodDisruptionBudgets(c.Namespace).Get(c.podDisruptionBudgetName(), metav1.GetOptions{}); err == nil {

View File

@@ -11,11 +11,11 @@ import (
 "strings"
 "time"
+"k8s.io/api/apps/v1beta1"
+"k8s.io/api/core/v1"
+policybeta1 "k8s.io/api/policy/v1beta1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/labels"
-"k8s.io/api/core/v1"
-"k8s.io/api/apps/v1beta1"
-policybeta1 "k8s.io/api/policy/v1beta1"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
 "github.com/zalando-incubator/postgres-operator/pkg/util"

View File

@@ -5,9 +5,9 @@ import (
 "strconv"
 "strings"
+"k8s.io/api/core/v1"
 "k8s.io/apimachinery/pkg/api/resource"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-"k8s.io/api/core/v1"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
 "github.com/zalando-incubator/postgres-operator/pkg/util"

View File

@@ -6,11 +6,11 @@ import (
 "sync"
 "github.com/Sirupsen/logrus"
+"k8s.io/api/core/v1"
+rbacv1beta1 "k8s.io/api/rbac/v1beta1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/types"
 "k8s.io/client-go/kubernetes/scheme"
-"k8s.io/api/core/v1"
-rbacv1beta1 "k8s.io/api/rbac/v1beta1"
 "k8s.io/client-go/tools/cache"
 "github.com/zalando-incubator/postgres-operator/pkg/apiserver"

View File

@@ -1,11 +1,11 @@
 package controller
 import (
+"k8s.io/api/core/v1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/labels"
 "k8s.io/apimachinery/pkg/runtime"
 "k8s.io/apimachinery/pkg/watch"
-"k8s.io/api/core/v1"
 "github.com/zalando-incubator/postgres-operator/pkg/cluster"
 "github.com/zalando-incubator/postgres-operator/pkg/util"

View File

@@ -4,8 +4,8 @@ import (
 "testing"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
-metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/api/core/v1"
+metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )
 const (

View File

@@ -62,6 +62,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *config.OperatorConfigur
 result.ClusterLabels = fromCRD.Kubernetes.ClusterLabels
 result.ClusterNameLabel = fromCRD.Kubernetes.ClusterNameLabel
 result.NodeReadinessLabel = fromCRD.Kubernetes.NodeReadinessLabel
+result.PodPriorityClassName = fromCRD.Kubernetes.PodPriorityClassName
 result.DefaultCPURequest = fromCRD.PostgresPodResources.DefaultCPURequest
 result.DefaultMemoryRequest = fromCRD.PostgresPodResources.DefaultMemoryRequest

View File

@@ -1,10 +1,10 @@
 package controller
 import (
+"k8s.io/api/core/v1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/runtime"
 "k8s.io/apimachinery/pkg/watch"
-"k8s.io/api/core/v1"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
 "github.com/zalando-incubator/postgres-operator/pkg/util"

View File

@@ -3,10 +3,10 @@ package controller
 import (
 "fmt"
+"k8s.io/api/core/v1"
 apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/util/wait"
-"k8s.io/api/core/v1"
 "github.com/zalando-incubator/postgres-operator/pkg/cluster"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"

View File

@@ -6,9 +6,9 @@ import (
 "testing"
 b64 "encoding/base64"
+"k8s.io/api/core/v1"
 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 v1core "k8s.io/client-go/kubernetes/typed/core/v1"
-"k8s.io/api/core/v1"
 "github.com/zalando-incubator/postgres-operator/pkg/spec"
 "github.com/zalando-incubator/postgres-operator/pkg/util/k8sutil"

View File

@@ -8,8 +8,8 @@ import (
 "strings"
 "time"
-metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/api/core/v1"
+metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/runtime"
 )
@@ -127,14 +127,15 @@ type PostgresSpec struct {
 // load balancers' source ranges are the same for master and replica services
 AllowedSourceRanges []string `json:"allowedSourceRanges"`
 NumberOfInstances int32 `json:"numberOfInstances"`
 Users map[string]UserFlags `json:"users"`
 MaintenanceWindows []MaintenanceWindow `json:"maintenanceWindows,omitempty"`
 Clone CloneDescription `json:"clone"`
 ClusterName string `json:"-"`
 Databases map[string]string `json:"databases,omitempty"`
 Tolerations []v1.Toleration `json:"tolerations,omitempty"`
 Sidecars []Sidecar `json:"sidecars,omitempty"`
+PodPriorityClassName string `json:"pod_priority_class_name,omitempty"`
 }
 // PostgresqlList defines a list of PostgreSQL clusters.
@@ -182,7 +183,6 @@ func (p *Postgresql) DeepCopyObject() runtime.Object {
 return nil
 }
 func parseTime(s string) (time.Time, error) {
 parts := strings.Split(s, ":")
 if len(parts) != 2 {

View File

@@ -11,10 +11,10 @@ import (
 "time"
 "github.com/Sirupsen/logrus"
-"k8s.io/apimachinery/pkg/types"
-"k8s.io/api/core/v1"
 "k8s.io/api/apps/v1beta1"
+"k8s.io/api/core/v1"
 policyv1beta1 "k8s.io/api/policy/v1beta1"
+"k8s.io/apimachinery/pkg/types"
 "k8s.io/client-go/rest"
 )

View File

@@ -25,6 +25,7 @@ type Resources struct {
 PodLabelWaitTimeout time.Duration `name:"pod_label_wait_timeout" default:"10m"`
 PodDeletionWaitTimeout time.Duration `name:"pod_deletion_wait_timeout" default:"10m"`
 PodTerminateGracePeriod time.Duration `name:"pod_terminate_grace_period" default:"5m"`
+PodPriorityClassName string `name:"pod_priority_class_name"`
 ClusterLabels map[string]string `name:"cluster_labels" default:"application:spilo"`
 ClusterNameLabel string `name:"cluster_name_label" default:"cluster-name"`
 PodRoleLabel string `name:"pod_role_label" default:"spilo-role"`

View File

@@ -49,6 +49,7 @@ type KubernetesMetaConfiguration struct {
 PodToleration map[string]string `json:"toleration,omitempty"`
 // TODO: use namespacedname
 PodEnvironmentConfigMap string `json:"pod_environment_configmap,omitempty"`
+PodPriorityClassName string `json:"pod_priority_class_name,omitempty"`
 }
 type PostgresPodResourcesDefaults struct {
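
A hedged sketch of how the new CRD configuration key maps onto that field; only the added field is reproduced, and the JSON fragment is hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down stand-in for KubernetesMetaConfiguration with only the new field.
type kubernetesMetaConfiguration struct {
	PodPriorityClassName string `json:"pod_priority_class_name,omitempty"`
}

func main() {
	// Hypothetical fragment of an OperatorConfiguration resource.
	raw := []byte(`{"pod_priority_class_name": "postgres-pod-priority"}`)

	var cfg kubernetesMetaConfiguration
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.PodPriorityClassName) // postgres-pod-priority
}
```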