	Quickstart
This guide aims to give you a quick look and feel for using the Postgres Operator on a local Kubernetes environment.
Prerequisites
The Postgres Operator runs on Kubernetes (K8s), which you have to set up first. For local tests we recommend using one of the following solutions:
- minikube, which creates a single-node K8s cluster inside a VM (requires KVM or VirtualBox),
- kind, which allows creating multi-node K8s clusters running on Docker (requires Docker)
To interact with the K8s infrastructure, install its CLI, kubectl.
This quickstart assumes that you have started minikube or created a local kind
cluster. Note that you can also use the built-in K8s support in Docker Desktop
for Mac to follow the steps of this tutorial. In that case, replace
minikube start and minikube delete with the corresponding actions for Docker's
built-in K8s support.
Configuration Options
Any configuration of the Postgres Operator must happen before a Postgres
cluster is deployed. Configuration works in one of two ways: via a ConfigMap or
via a custom OperatorConfiguration object. More details on configuration can be
found here.
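For example, one way to keep ConfigMap-based settings under your own control is a separate override file. This is only a sketch: the workers key comes from the example manifests/configmap.yaml, and the value chosen here is illustrative.

```shell
# Sketch: keep operator settings in a ConfigMap override file of your own.
# "workers" is one of the keys in the example manifests/configmap.yaml;
# the value is illustrative.
cat > configmap-override.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator
data:
  workers: "4"
EOF
# apply it before deploying the operator:
# kubectl apply -f configmap-override.yaml
```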
Deployment options
The Postgres Operator can be deployed in different ways:
- Manual deployment
- Helm chart
- Operator Lifecycle Manager (OLM)
Manual deployment setup
The Postgres Operator can be installed simply by applying YAML manifests. Note
that we provide the /manifests directory as an example only; you should
consider adjusting the manifests to your K8s environment (e.g. namespaces).
# First, clone the repository and change to the directory
git clone https://github.com/zalando/postgres-operator.git
cd postgres-operator
# apply the manifests in the following order
kubectl create -f manifests/configmap.yaml  # configuration
kubectl create -f manifests/operator-service-account-rbac.yaml  # identity and permissions
kubectl create -f manifests/postgres-operator.yaml  # deployment
With kubectl 1.14 or newer, the mentioned manifests can also be bundled in a Kustomization so that a single command is enough.
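A minimal sketch of that one-command variant, assuming you are in the repository root: list the three manifests in order in a kustomization.yaml and apply them with a single call.

```shell
# Sketch: bundle the three manifests in a Kustomization (kubectl >= 1.14).
# Paths assume the current directory is the repository root.
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - manifests/configmap.yaml
  - manifests/operator-service-account-rbac.yaml
  - manifests/postgres-operator.yaml
EOF
# one command applies all three in order:
# kubectl apply -k .
```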
For convenience, we have automated starting the operator with minikube and
submitting the acid-minimal-cluster.
From inside the cloned repository execute the run_operator_locally shell
script.
./run_operator_locally.sh
Helm chart
Alternatively, the operator can be installed using the provided Helm chart, which saves you the manual steps. For this you need the helm CLI installed on your machine. After initializing Helm (and its server component Tiller) in your local cluster, you can install the operator chart. You can define a release name that is prepended to the names of the operator's resources.
Use --name zalando to match the default service account name, as older
operator versions do not support custom names for service accounts. For
CRD-based configuration, use the values-crd yaml file.
# 1) initialize helm
helm init
# 2) install postgres-operator chart
helm install --name zalando ./charts/postgres-operator
Operator Lifecycle Manager (OLM)
The Operator Lifecycle Manager (OLM) has been designed to facilitate the management of K8s operators. You have to install it in your K8s environment first. Once OLM is set up, you can download and deploy the Postgres Operator with the following command:
kubectl create -f https://operatorhub.io/install/postgres-operator.yaml
This installs the operator in the operators namespace. More information can be
found on operatorhub.io.
Create a Postgres cluster
Starting the operator may take a few seconds. Check if the operator pod is running before applying a Postgres cluster manifest.
# if you've created the operator using yaml manifests
kubectl get pod -l name=postgres-operator
# if you've created the operator using helm chart
kubectl get pod -l app.kubernetes.io/name=postgres-operator
# create a Postgres cluster
kubectl create -f manifests/minimal-postgres-manifest.yaml
After the cluster manifest is submitted, the operator creates Service and
Endpoint resources and a StatefulSet which spins up new Pod(s) given the number
of instances specified in the manifest. All resources are named after the
cluster. The database pods can be identified by their number suffix, starting
from -0. They run the Spilo container image by Zalando. As for the services and
endpoints, there will be one for the master pod and another one for all the
replicas (-repl suffix). Check that all components are coming up. Use the label
application=spilo to filter, and list the label spilo-role to see which pod is
currently the master.
# check the deployed cluster
kubectl get postgresql
# check created database pods
kubectl get pods -l application=spilo -L spilo-role
# check created service resources
kubectl get svc -l application=spilo -L spilo-role
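If you script these checks, a bounded wait can replace repeated polling. This is a sketch only: the labels follow the Spilo convention above, cluster-name is the operator's default cluster label, and the timeout value is arbitrary.

```shell
CLUSTER=acid-minimal-cluster
# Block for up to two minutes until the master pod reports Ready (sketch;
# labels follow the Spilo convention described above).
kubectl wait --for=condition=Ready pod \
  -l application=spilo,spilo-role=master,cluster-name="$CLUSTER" \
  --timeout=120s \
  || echo "master of $CLUSTER not ready yet; inspect it with kubectl describe pod"
```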
Connect to the Postgres cluster via psql
You can create a port-forward on a database pod to connect to Postgres. See the user guide for instructions. With minikube it is also easy to retrieve the connection string from the K8s service that points to the master pod:
export HOST_PORT=$(minikube service acid-minimal-cluster --url | sed 's,.*/,,')
export PGHOST=$(echo $HOST_PORT | cut -d: -f 1)
export PGPORT=$(echo $HOST_PORT | cut -d: -f 2)
Retrieve the password from the K8s Secret that is created in your cluster.
export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials -o 'jsonpath={.data.password}' | base64 -d)
psql -U postgres
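The sed/cut pipeline above only splits the service URL into host and port. The same split can be done with shell parameter expansion alone, shown here against a made-up URL of the shape minikube prints:

```shell
URL="http://192.168.99.100:32544"   # made-up example of minikube's --url output
HOST_PORT="${URL##*/}"              # strip scheme and slashes -> "192.168.99.100:32544"
PGHOST="${HOST_PORT%%:*}"           # everything before the colon
PGPORT="${HOST_PORT##*:}"           # everything after the colon
echo "$PGHOST $PGPORT"
```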
Delete a Postgres cluster
To delete a Postgres cluster simply delete the postgresql custom resource.
kubectl delete postgresql acid-minimal-cluster
This should remove the associated StatefulSet, database Pods, Services and
Endpoints. The PersistentVolumes are released and the PodDisruptionBudget is
deleted. Secrets, however, are not deleted. If you delete a cluster while it is
still starting up, the postgresql resource may be removed while orphaned
components are left behind. This can cause trouble when creating a new Postgres
cluster. For a fresh setup, you can delete your local minikube or kind cluster
and start again.
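If you would rather clean up orphaned components by hand, leftovers can be swept by label. This is a sketch only: the cluster-name label is the operator's default, and you should verify what the selector matches before deleting anything.

```shell
CLUSTER=acid-minimal-cluster
# Sweep leftovers of a partially deleted cluster by label (sketch; check the
# selection first with: kubectl get all,secret -l cluster-name=$CLUSTER).
kubectl delete statefulset,service,endpoints,poddisruptionbudget,secret \
  -l cluster-name="$CLUSTER" \
  || echo "nothing to clean up for $CLUSTER (or no cluster access)"
```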