
PostgreSQL cluster

Servers are combined into a Primary/Replica(s) cluster. One server acts as the main one and sends its data to the others. Read more about setting up and working with a PostgreSQL cluster in the official documentation.

[Diagram: high availability cluster]

Basic information

Attention!

You need at least 2 servers to create the cluster.

In the example below, we are working with 3 nodes with the following hostnames and IP addresses:

  • postgres-server1, 192.168.1.1
  • postgres-server2, 192.168.1.2
  • postgres-server3, 192.168.1.3

Step 1: Install the etcd cluster

  1. Install etcd on all nodes:

sudo apt-get install etcd -y

  2. Back up the default configuration file:

sudo mv /etc/default/etcd{,.original}

  3. Create a new configuration file and open it for editing:

sudo nano /etc/default/etcd

  4. Add the following configuration to the file on node 1:

ETCD_NAME="postgres-server1"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.1:2379,http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.1:2379"
ETCD_LISTEN_PEER_URLS="http://192.168.1.1:12380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.1:12380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_INITIAL_CLUSTER="postgres-server1=http://192.168.1.1:12380,postgres-server2=http://192.168.1.2:12380,postgres-server3=http://192.168.1.3:12380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
ETCD_ENABLE_V2="true"

  5. Add the following configuration to the file on node 2:

ETCD_NAME="postgres-server2"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.2:2379,http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.2:2379"
ETCD_LISTEN_PEER_URLS="http://192.168.1.2:12380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.2:12380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_INITIAL_CLUSTER="postgres-server1=http://192.168.1.1:12380,postgres-server2=http://192.168.1.2:12380,postgres-server3=http://192.168.1.3:12380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
ETCD_ENABLE_V2="true"

  6. Add the following configuration to the file on node 3:

ETCD_NAME="postgres-server3"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.3:2379,http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.3:2379"
ETCD_LISTEN_PEER_URLS="http://192.168.1.3:12380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.3:12380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_INITIAL_CLUSTER="postgres-server1=http://192.168.1.1:12380,postgres-server2=http://192.168.1.2:12380,postgres-server3=http://192.168.1.3:12380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
ETCD_ENABLE_V2="true"
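
The three configuration files above differ only in ETCD_NAME and the node's IP address. As a convenience sketch (our own helper, not part of the official steps), the file can be generated from those two values:

```shell
# Sketch: generate /etc/default/etcd for one node of the example cluster.
# gen_etcd_config NODE_NAME NODE_IP OUTPUT_FILE
gen_etcd_config() {
  node_name="$1"; node_ip="$2"; out="$3"
  cat > "$out" <<EOF
ETCD_NAME="${node_name}"
ETCD_LISTEN_CLIENT_URLS="http://${node_ip}:2379,http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://${node_ip}:2379"
ETCD_LISTEN_PEER_URLS="http://${node_ip}:12380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${node_ip}:12380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_INITIAL_CLUSTER="postgres-server1=http://192.168.1.1:12380,postgres-server2=http://192.168.1.2:12380,postgres-server3=http://192.168.1.3:12380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
ETCD_ENABLE_V2="true"
EOF
}

# Example: generate node 1's file into /tmp for review, then move it into place:
gen_etcd_config postgres-server1 192.168.1.1 /tmp/etcd-node1.env
# sudo mv /tmp/etcd-node1.env /etc/default/etcd
```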

Description of the parameters used:

  • ETCD_NAME. The name of the node in the cluster. It has to be unique within the cluster.
  • ETCD_LISTEN_CLIENT_URLS. The connection point for the cluster’s clients.
  • ETCD_ADVERTISE_CLIENT_URLS. The list of URL addresses that the other nodes of the cluster can use to find this node.
  • ETCD_LISTEN_PEER_URLS. The connection point for other cluster nodes.
  • ETCD_INITIAL_ADVERTISE_PEER_URLS. The initial list of URL addresses that the other nodes of the cluster can use to find this node.
  • ETCD_INITIAL_CLUSTER_TOKEN. The cluster’s token. It has to be the same for all the cluster’s nodes.
  • ETCD_INITIAL_CLUSTER. The list of nodes in the cluster at the time of launch.
  • ETCD_INITIAL_CLUSTER_STATE. The cluster’s state. This parameter has two possible values: “new” and “existing”.
  • ETCD_DATA_DIR. The location of the directory with the cluster’s data.
  • ETCD_ELECTION_TIMEOUT. The timeout, in milliseconds, that a follower node waits without hearing a heartbeat message from the cluster leader before attempting to become the leader itself.
  • ETCD_HEARTBEAT_INTERVAL. This is the frequency in milliseconds with which the leader notifies followers that it is still the leader.

  7. Restart etcd on all nodes:

sudo systemctl restart etcd

  8. Check the state of the cluster:

sudo etcdctl cluster-health

Step 2: Install PostgreSQL

Attention!

For correct operation of the system, PostgreSQL 10 is required.

  1. To install PostgreSQL, add the official PostgreSQL repository:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

  2. Update the package cache:

sudo apt update

  3. Install PostgreSQL on all nodes:

sudo apt install postgresql-10 -y

Step 3: Configure PostgreSQL

Actions to be performed on the primary node

  1. Create the elma365 role with SecretPassword as password:

sudo -u postgres psql -c \
"CREATE ROLE elma365 WITH login password 'SecretPassword';"

  2. Create the elma365 database with elma365 as the owner:

sudo -u postgres psql -c \
"CREATE DATABASE elma365 WITH OWNER elma365;"

  3. Add the necessary extensions to the elma365 database:

sudo -u postgres psql -d elma365 -c \
"CREATE EXTENSION \"uuid-ossp\"; CREATE EXTENSION pg_trgm;"

  4. Create the replicator role with ReplicatorPassword as the password. It will be used for working with replicas and must match the Patroni settings under postgresql → authentication → replication, as well as the list of allowed hosts in the pg_hba.conf file:

sudo -u postgres psql -c \
"CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'ReplicatorPassword';"

  5. Set a password for the postgres user:

sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'PostgresPassword';"

  6. The pg_hba.conf file has to be configured on all of the database servers to allow access between them. Add entries for all the nodes to the pg_hba.conf file:

#Patroni
host all all 0.0.0.0/0 md5
host replication replicator 192.168.1.1/32 md5
host replication replicator 192.168.1.2/32 md5
host replication replicator 192.168.1.3/32 md5
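
The replication entries repeat once per node. As a sketch (our own convenience, not part of the official steps), the block can be generated from the node list so that no node is missed when the cluster changes:

```shell
# Sketch: emit the Patroni-related pg_hba.conf entries for the example nodes.
# The IP addresses are the ones assumed throughout this article.
NODES="192.168.1.1 192.168.1.2 192.168.1.3"

{
  echo "#Patroni"
  echo "host all all 0.0.0.0/0 md5"
  for ip in $NODES; do
    echo "host replication replicator ${ip}/32 md5"
  done
} > /tmp/pg_hba.patroni

# Review the result, then append it to pg_hba.conf, for example:
# cat /tmp/pg_hba.patroni | sudo tee -a /etc/postgresql/10/main/pg_hba.conf
```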

  7. Restart PostgreSQL on all nodes:

sudo systemctl restart postgresql

Step 4: Install Patroni

  1. Install Patroni and pip on all nodes:

sudo apt-get install python3-pip python3-dev libpq-dev -y
sudo apt-get install patroni -y

  2. Install the Python dependencies for Patroni on all nodes:

pip3 install psycopg2-binary
pip3 install wheel
pip3 install python-etcd

Actions to be first performed on the primary node

  1. Create a configuration file:

sudo nano /etc/patroni/config.yml

  2. Place the initial configuration example into the /etc/patroni/config.yml file you created. On each of the cluster’s nodes, replace the IP addresses with your own. See the comments in this file:

Initial configuration example

  3. Make the postgres user the owner of the configuration directory:

sudo chown postgres:postgres -R /etc/patroni
sudo chmod 700 /etc/patroni

  4. Start the Patroni service:

sudo systemctl enable --now patroni.service
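
The full initial configuration example is linked above and is not reproduced here. For orientation only, a minimal Patroni configuration for node 1 of this example typically contains sections like the following sketch; the exact values, timeouts, and paths are illustrative assumptions and should be taken from the linked example:

```yaml
# Sketch of a minimal Patroni config for node 1 (illustrative, not the linked example)
scope: postgres-cluster          # cluster name, shared by all nodes
name: postgres-server1           # unique per node

restapi:
  listen: 192.168.1.1:8008       # Patroni REST API, used by HAProxy health checks
  connect_address: 192.168.1.1:8008

etcd:
  hosts: 192.168.1.1:2379,192.168.1.2:2379,192.168.1.3:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.1.1:5432
  data_dir: /var/lib/postgresql/10/main
  bin_dir: /usr/lib/postgresql/10/bin
  authentication:
    replication:
      username: replicator       # must match the role created in Step 3
      password: ReplicatorPassword
    superuser:
      username: postgres
      password: PostgresPassword
```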

Add a new Patroni node

To add a new node to the Patroni cluster, perform the same actions as for the primary node.

  1. Edit the /etc/patroni/config.yml file using the values of the current server.
  2. If the service has been started before, delete the data directory so that the replica can work:

rm -rf /var/lib/postgresql/10/main

  3. Start the Patroni service:

sudo systemctl enable --now patroni.service

  4. Check the state of the cluster:

patronictl -c /etc/patroni/config.yml list

Step 5: HAProxy configuration (postgres)

Example of a configuration intended for load balancing using HAProxy:

listen postgres_master
    bind haproxy-server.your_domain:5000
    option tcplog
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 4 on-marked-down shutdown-sessions
    server postgres-server1 postgres-server1.your_domain:5432 check port 8008
    server postgres-server2 postgres-server2.your_domain:5432 check port 8008
    server postgres-server3 postgres-server3.your_domain:5432 check port 8008
 
listen postgres_replicas
    bind haproxy-server.your_domain:5001
    option tcplog
    option httpchk OPTIONS /replica
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgres-server1 postgres-server1.your_domain:5432 check port 8008
    server postgres-server2 postgres-server2.your_domain:5432 check port 8008
    server postgres-server3 postgres-server3.your_domain:5432 check port 8008
 
listen postgres_replicas_sync
    bind haproxy-server.your_domain:5002
    option tcplog
    option httpchk OPTIONS /sync
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgres-server1 postgres-server1.your_domain:5432 check port 8008
    server postgres-server2 postgres-server2.your_domain:5432 check port 8008
    server postgres-server3 postgres-server3.your_domain:5432 check port 8008
 
listen postgres_replicas_async
    bind haproxy-server.your_domain:5003
    option tcplog
    option httpchk OPTIONS /async
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgres-server1 postgres-server1.your_domain:5432 check port 8008
    server postgres-server2 postgres-server2.your_domain:5432 check port 8008
    server postgres-server3 postgres-server3.your_domain:5432 check port 8008
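
The four listen sections differ only in the bind port, the Patroni REST endpoint used for the health check (/master, /replica, /sync, /async), and the rise threshold, so port 5000 always routes to the current primary while 5001–5003 balance across replicas. As a sketch (our own helper, not part of the article), the sections can be generated to avoid copy-paste drift:

```shell
# Sketch: generate the four HAProxy "listen" sections used above.
# haproxy-server.your_domain and the backend names follow the article's example.
# gen_listen NAME BIND_PORT PATRONI_ENDPOINT RISE
gen_listen() {
  name="$1"; port="$2"; endpoint="$3"; rise="$4"
  printf 'listen %s\n' "$name"
  printf '    bind haproxy-server.your_domain:%s\n' "$port"
  printf '    option tcplog\n'
  printf '    option httpchk OPTIONS /%s\n' "$endpoint"
  # only the replica-facing sections balance across servers
  [ "$name" = postgres_master ] || printf '    balance roundrobin\n'
  printf '    http-check expect status 200\n'
  printf '    default-server inter 3s fastinter 1s fall 3 rise %s on-marked-down shutdown-sessions\n' "$rise"
  for i in 1 2 3; do
    printf '    server postgres-server%s postgres-server%s.your_domain:5432 check port 8008\n' "$i" "$i"
  done
}

gen_listen postgres_master         5000 master  4 >  /tmp/haproxy-postgres.cfg
gen_listen postgres_replicas       5001 replica 2 >> /tmp/haproxy-postgres.cfg
gen_listen postgres_replicas_sync  5002 sync    2 >> /tmp/haproxy-postgres.cfg
gen_listen postgres_replicas_async 5003 async   2 >> /tmp/haproxy-postgres.cfg
```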

Step 6: Connect to ELMA365

  1. To set up a connection to the PostgreSQL cluster, specify the following connection string in the ELMA365 configuration:

psqlUrl: postgresql://elma365:SecretPassword@haproxy-server.your_domain:5000/elma365?sslmode=disable

  2. To set up a read-only connection to the PostgreSQL cluster, specify the following connection string:

roPsqlUrl: postgresql://elma365:SecretPassword@haproxy-server.your_domain:5001/elma365?sslmode=disable
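
Per the HAProxy configuration in Step 5, port 5000 follows the current primary (read-write) and port 5001 balances read-only traffic across replicas. A small sketch (our own, using the example credentials from Step 3) keeps the parts of the connection strings in one place:

```shell
# Sketch: assemble the ELMA365 connection strings from the article's example values.
PG_HOST="haproxy-server.your_domain"
PG_USER="elma365"
PG_PASS="SecretPassword"
PG_DB="elma365"

# Port 5000: read-write, routed to the primary by HAProxy.
PSQL_URL="postgresql://${PG_USER}:${PG_PASS}@${PG_HOST}:5000/${PG_DB}?sslmode=disable"
# Port 5001: read-only, balanced across replicas.
RO_PSQL_URL="postgresql://${PG_USER}:${PG_PASS}@${PG_HOST}:5001/${PG_DB}?sslmode=disable"

echo "psqlUrl: ${PSQL_URL}"
echo "roPsqlUrl: ${RO_PSQL_URL}"
```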

 
