Hot Standby PostgreSQL cluster

In addition to the main PostgreSQL cluster, you can use a geographically separate standby PostgreSQL cluster for disaster tolerance. It maximizes availability and enables recovery after an emergency. The standby PostgreSQL cluster includes only replica nodes that replicate from a remote primary. Read more: https://patroni.readthedocs.io/en/latest/replica_bootstrap.html#standby-cluster.

A cluster of this type has the following elements:

  • A standby leader that acts the same way as a regular cluster leader, except that it replicates from a remote primary.
  • Cascading replicas that replicate from the standby leader.

Step 1: Configure PostgreSQL

  1. On each database server, add information about all nodes of both clusters to the pg_hba.conf file. This allows connections from the other servers:

#Patroni
host all all 0.0.0.0/0 md5
#Cluster hosts 1
host replication replicator 192.168.1.1/32 md5
host replication replicator 192.168.1.2/32 md5
host replication replicator 192.168.1.3/32 md5
#Standby cluster hosts 2
host replication replicator 192.168.2.1/32 md5
host replication replicator 192.168.2.2/32 md5
host replication replicator 192.168.2.3/32 md5
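Since the same replication entries must appear on every server, they can be generated rather than typed by hand. A minimal sketch, using the example IP addresses from this page:

```shell
# Generate identical replication entries for every node of both clusters.
# The IP addresses are the examples used on this page; substitute your own.
entries=$(for ip in 192.168.1.1 192.168.1.2 192.168.1.3 \
                    192.168.2.1 192.168.2.2 192.168.2.3; do
  printf 'host replication replicator %s/32 md5\n' "$ip"
done)
echo "$entries"
```

Appending the generated lines to pg_hba.conf on each node keeps the file consistent across the whole topology.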

  2. Restart PostgreSQL on all the nodes:

systemctl restart postgresql

Step 2: Configure Patroni

  1. On the standby cluster’s servers, in the /etc/patroni/config.yml file, change the values of scope and name, as they shouldn’t be the same as in the main cluster. Edit the addresses of all the servers:

scope: postgres-cluster2 # The same value for all the nodes of the standby cluster 
name: postgresql-server4 # Different values on each node of the standby cluster 

  2. Add the standby_cluster section to dcs in the bootstrap section of the current configuration file /etc/patroni/config.yml, and specify the remote primary. In this example, it is defined using HAProxy:

bootstrap:
  dcs:
    standby_cluster:
      host: haproxy-server.your_domain
      port: 5000
      create_replica_methods:
        - basebackup

  3. On both the main cluster and the standby cluster, add the addresses of all servers used in them to the pg_hba section of the current configuration file /etc/patroni/config.yml:

pg_hba:
- host all all 0.0.0.0/0 md5
- host replication replicator localhost trust
#Cluster hosts 1
- host replication replicator 192.168.1.1/32 md5
- host replication replicator 192.168.1.2/32 md5
- host replication replicator 192.168.1.3/32 md5
#Cluster hosts 2
- host replication replicator 192.168.2.1/32 md5
- host replication replicator 192.168.2.2/32 md5
- host replication replicator 192.168.2.3/32 md5

  4. Start the Patroni service:

sudo systemctl enable --now patroni.service

If the Patroni service has been started on the standby cluster before, stop it on each of the standby cluster’s nodes, delete the PostgreSQL data directory, clear the information about the cluster from etcd, and then start Patroni again:

sudo systemctl stop patroni
sudo rm -rf /var/lib/postgresql/10/main
sudo etcdctl rm --recursive /service/postgres-cluster2
sudo systemctl restart patroni

  5. Check the standby cluster’s state:

patronictl -c /etc/patroni/config.yml list

The standby cluster’s ID has to match the main cluster’s ID, and the leader’s role has to be Standby Leader.
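This check can also be scripted, for example in a monitoring job. A sketch, assuming the configuration path used above:

```shell
# Warn if the standby cluster has lost its Standby Leader role.
# Requires a running Patroni cluster; the scope name is the example
# from this page (postgres-cluster2).
patronictl -c /etc/patroni/config.yml list | grep -q 'Standby Leader' \
  || echo "WARNING: no standby leader found in postgres-cluster2" >&2
```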

  6. To make the standby cluster become the main one if the original cluster is unavailable, delete standby_cluster from the current Patroni configuration by running the following command:

patronictl edit-config --force -s standby_cluster.host='' -s standby_cluster.port='' -s standby_cluster.create_replica_methods=''

It is important to avoid two primaries running at the same time. When the original cluster is recovered, the former standby cluster will still be considered the main one.

  7. To turn the new main cluster back into a standby cluster after the original cluster is recovered, add standby_cluster back to the current Patroni configuration by running the following command:

patronictl edit-config --force -s standby_cluster.host=haproxy-server.your_domain -s standby_cluster.port=5000 -s standby_cluster.create_replica_methods='- basebackup'
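To confirm that the section has been applied, you can inspect the live Patroni configuration. A possible check, assuming the configuration path used above:

```shell
# Show the current DCS configuration and confirm the standby_cluster
# section is present again. Requires a running Patroni cluster.
patronictl -c /etc/patroni/config.yml show-config | grep -A 3 'standby_cluster'
```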

Step 3: Configure HAProxy (postgres)

Complete the HAProxy configuration by specifying, in the postgres section of the file, the addresses of all the servers used in both clusters:

listen postgres_master
    bind haproxy-server.your_domain:5000
    option tcplog
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 4 on-marked-down shutdown-sessions
    server postgres-server1 postgres-server1.your_domain:5432 check port 8008
    server postgres-server2 postgres-server2.your_domain:5432 check port 8008
    server postgres-server3 postgres-server3.your_domain:5432 check port 8008
    server postgres-server4 postgres-server4.your_domain:5432 check port 8008
    server postgres-server5 postgres-server5.your_domain:5432 check port 8008
    server postgres-server6 postgres-server6.your_domain:5432 check port 8008
 
listen postgres_replicas
    bind haproxy-server.your_domain:5001
    option tcplog
    option httpchk OPTIONS /replica
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgres-server1 postgres-server1.your_domain:5432 check port 8008
    server postgres-server2 postgres-server2.your_domain:5432 check port 8008
    server postgres-server3 postgres-server3.your_domain:5432 check port 8008
    server postgres-server4 postgres-server4.your_domain:5432 check port 8008
    server postgres-server5 postgres-server5.your_domain:5432 check port 8008
    server postgres-server6 postgres-server6.your_domain:5432 check port 8008
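To verify that the two front ends route connections correctly, you can ask each endpoint whether it is in recovery. A sketch, assuming the HAProxy address from this page and a postgres superuser: the write endpoint on port 5000 should return f (the leader), and the read endpoint on port 5001 should return t (a replica).

```shell
# Write endpoint: should reach the current leader (expected answer: f).
psql "host=haproxy-server.your_domain port=5000 user=postgres" \
     -tAc 'SELECT pg_is_in_recovery();'

# Read endpoint: should reach a replica (expected answer: t).
psql "host=haproxy-server.your_domain port=5001 user=postgres" \
     -tAc 'SELECT pg_is_in_recovery();'
```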
