Friday, May 17, 2013

PostgreSQL 9.1 Streaming Replication on Ubuntu 12.04, Part 1

Setting up PostgreSQL Replication

I have a need to set up a PostgreSQL replication database.  Postgres 9.0+ can maintain a read-only replicated database that mirrors the live database; it can lag slightly behind the live database (although in my simple test it was spot on).  The replication DB can be read from but not written to.  In my particular case I just want to use it as a failover DB that can be brought up as a live DB if ever needed.

Here is my setup.  I have two Ubuntu 12.04 LTS 64-bit servers with PostgreSQL 9.1 installed.  One of these servers will be the primary (master) PostgreSQL server and the other will be the replication (slave) server.

Set up a test database:
To do a full test I am going to log into my PRIMARY server and create a new database.  It exists purely for testing and will be useful in the last phase.

Log into the DB


        > sudo su postgres
        >   psql -d postgres -U postgres


Create a test user


        >    CREATE USER navneet WITH PASSWORD 'rathi';

Create the Database


        >    CREATE DATABASE nand;

Make sure it was created by running \l  

Grant navneet privileges to this database


        >    GRANT ALL PRIVILEGES ON DATABASE nand to navneet;


Now let's test it.  Quit out of psql and try:


        >    \q
        >    psql -d nand -U navneet

And I get an error:
the authentication method for local connections is peer, not md5 or trust.

I am trying to log in as navneet, but at the OS level I am logged in as the postgres user, so peer authentication rejects the connection.  The method is a setting in "/etc/postgresql/9.1/main/pg_hba.conf" that I failed to edit.

Edit the file


        >    sudo vi   /etc/postgresql/9.1/main/pg_hba.conf

Change


# "local" is for Unix domain socket connections only
local   all             all                                     peer

To


# "local" is for Unix domain socket connections only
local   all             all                                     trust


This allows all local users to log in without a password.  If you try to log in from outside, it will still ask for a password; if you create an SSH tunnel, no password is needed.
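A tunnel along these lines forwards a local port to the server's PostgreSQL port and points psql at it (someuser is a placeholder, and which authentication method then applies depends on the host entries for 127.0.0.1 in pg_hba.conf):


        >    ssh -L 5433:localhost:5432 someuser@172.16.2.207
        >    psql -h localhost -p 5433 -d nand -U navneet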

And restart PostgreSQL for the new settings to take effect.


        >    sudo /etc/init.d/postgresql restart   (or: sudo service postgresql restart)

And then log back in


        >    psql  -U navneet nand
This time it worked!

Now connect to the database.


        >    \c nand


Create a simple table that I can send updates to.  The purpose of this table is to test the set-up of the replication server: the live server should be able to keep receiving updates while the replication server is being set up, and the replica will then "catch up".



        >    CREATE TABLE data(
        >      id serial primary key not null,
        >      time timestamp not null default CURRENT_TIMESTAMP,
        >      number integer not null
        >  );


This creates a table with a primary key that auto-increments, and a time field that defaults to the current timestamp.

Run the following command to confirm the table has been created and is working as intended.


        >    \dt
        >    select * from data;
        >    INSERT INTO data (number)
        >    VALUES (34);
        >    select * from data;

OK, now that this is set up, I need to create a simple program that inserts data into this table in a loop.

Writing a simple Python program

First I had to install psycopg2, the library Python uses to talk to PostgreSQL.


        >    sudo apt-get install python-psycopg2
        >    vi insertDB.py

Here is my python code


#!/usr/bin/python
#
#
#  Simple Script to insert data
#

import psycopg2
import sys
import time

con = None

try:
    # First get the current maximum
    con = psycopg2.connect(database='nand', user='navneet', password='rathi')
    cur = con.cursor()
    cur.execute('select MAX(number) from data')
    x = cur.fetchone()[0]
    if x is None:
       x = 0


    # Insert one new row per second, incrementing the number each time
    while True:
        x = x + 1
        cur.execute('INSERT INTO data(number) VALUES (%s)', (x,))
        con.commit()
        print 'inserting ' + str(x)
        time.sleep(1)

except psycopg2.DatabaseError, e:
    print 'Error %s' % e
    sys.exit(1)


finally:

    if con:
        con.close()




It connects locally to my PostgreSQL database and queries the data table to find the largest integer in the number field.  The program then loops forever, incrementing that value by one and inserting a new row (pausing 1 second between inserts).

The idea is this is constantly adding data to the database, so that I can confirm that the replication I create works correctly.

WAL (write ahead log) Files

Postgres uses write-ahead logging.  As the PostgreSQL documentation puts it:

"WAL's central concept is that changes to data files (where tables and indexes reside) must be written only after those changes have been logged, that is, after log records describing the changes have been flushed to permanent storage."

First let's go find the WAL files.  The data directory is set in the /etc/postgresql/9.1/main/postgresql.conf file under the data_directory field; in 9.1 it is typically /var/lib/postgresql/9.1/main.  The WAL files themselves live in the pg_xlog folder within that data directory.



        >    cd  /var/lib/postgresql/9.1/main/pg_xlog
        >    ls -alh

Here you should see several files that are 16MiB in size with numbers for names.


These are the WAL files.   In the /etc/postgresql/9.1/main/postgresql.conf file there is a wal_keep_segments field which is typically set to 0.  A value of 0 means no extra WAL segments are kept around beyond what the database itself needs.  This setting must be increased to ensure the replication database still has access to older segments while it catches up.
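You can also confirm the data directory and the current WAL write position from psql on the primary; these queries should work on 9.1:


        >    show data_directory;
        >    select pg_current_xlog_location();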





Primary (Master) DB settings

First let’s set up the master DB to be able to allow replications DBs to get the updates they need

postgresql.conf

First we need to edit /etc/postgresql/9.1/main/postgresql.conf


        >    sudo vi  /etc/postgresql/9.1/main/postgresql.conf


The following needs to be edited (the approximate "@line" numbers show where each setting typically appears in the file):


@line 61
listen_addresses = '*'          # what IP address(es) to listen on;

@line 155
wal_level = hot_standby                  # minimal, archive, or hot_standby

@line 198
max_wal_senders = 5                # max number of walsender processes

@line 201
wal_keep_segments = 128         # in logfile segments, 16MB each;


Here is the “why” for each as best as I have researched.

listen_addresses = '*'

Listen for incoming connections on any interface.

wal_level = hot_standby

This must be set to archive or hot_standby.  The default, minimal, does not put enough information in the WAL files to reconstruct the database on a standby.

max_wal_senders = 5              

Sets the number of concurrent streaming replication connections.   I think you could get by with 1 if you only have one failover server; I just set it to 5 because all the other guides I found were doing it.

wal_keep_segments = 128
This will keep at a minimum 128 WAL files.  At the default size of 16MiB that totals 2 GiB.  This is a lot but I want to make sure I do not lose anything on the transfer.

pg_hba.conf

Now edit /etc/postgresql/9.1/main/pg_hba.conf


        >    sudo vi  /etc/postgresql/9.1/main/pg_hba.conf


Add this line to the bottom



host     replication    postgres        172.16.2.208/32        trust


This just says to trust the server at 172.16.2.208, which is my replication server.
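If you later switch to a dedicated replication user with a password instead of trusting the postgres user (see the Replication Role section below), the line would presumably look more like this, with replicator as a placeholder role name:


host     replication    replicator      172.16.2.208/32        md5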


Replication Role (check first, and do this if required)

I made a mistake when I first did this: I did not realize there is a replication role in the database that must be granted.

When I simply tried to use the postgres user I got the following error.

2012-04-21 08:26:01 MDT FATAL:  could not connect to the primary server: FATAL:  must be replication role to start walsender

I am using the postgres user as the replication user.   To see what roles and attributes the postgres user has, run the following command from the postgres database.


        >    \du


Here you can see that my postgres user does not have the “Replication” Role.

So I added the role to my postgres user.  Most sites I found suggest creating a new user that only handles replication, but I decided to just reuse the postgres user.
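If you would rather follow that advice, creating a dedicated user should look something like this (the role name and password here are just placeholders); you would then reference that user in pg_hba.conf and in the slave's primary_conninfo instead of postgres:


        >    CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';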

From the postgres database run this command


        >    ALTER ROLE postgres WITH REPLICATION;


Now run the \du command again


        >    \du postgres

And you should see this


Now it has the Replication role!

Restart the Primary database

From the command line



        >    sudo /etc/init.d/postgresql restart   (or: sudo service postgresql restart)


After it has been restarted I kick off my Python script.


        >    python insertDB.py

I left it running in its own terminal.

Then I logged into the database and did a few quick checks to make sure that data is constantly being loaded into it.


        >    psql -d nand -U navneet


Then from psql


        >    select count(*) from data;

I ran this a few times to confirm it's growing in size.



Copy the Database over to the Replication (Slave) server

My first go at this, I screwed it up!  I thought I could do a simple pg_dumpall from the primary server to the replication server.  I logged into the replication server and ran these commands.


         > sudo su postgres
         > time pg_dumpall -h 172.16.2.207 -U postgres | psql -d postgres -U postgres

(I added the time command to see how long it runs)

And then, after setting up the replication server and restarting it (which I will get into later), Postgres would not start up and I got this error.

FATAL:  hot standby is not possible because wal_level was not set to "hot_standby" on the master server

But I did set it!  So what is going on?  The problem, as it turns out, is that pg_dumpall only makes a logical copy of the data; the slave's data directory is still its own freshly initialized cluster, not a physical copy of the master's.  Streaming replication needs a file-level base backup of the master's data directory.

So here is the correct way of doing it.

First shut off postgres on the replication server


        >    sudo /etc/init.d/postgresql stop


Next log into the primary server and run the following commands


        >    sudo su postgres
        >    psql -d postgres -U postgres


You are now logged into the database.
Now tell the database you are going to start a backup.  You can keep using your database as you normally would; there will be no interruption to incoming data.


        >  SELECT pg_start_backup('mybackup_label', true);




From the command line copy your data directory folder from the primary server to the replication server.  In my case the data directory is “/var/lib/postgresql/9.1/main/” (this is set in the postgresql.conf file).  I kept the data directory the same on both primary and replication server.

Here is my command; adjust it according to where your data directory is located.


        >  time scp -r  /var/lib/postgresql/9.1/main/*   root@172.16.2.208:/var/lib/postgresql/9.1/main/

After the database has been copied over, run the following command from psql; this tells the master server the backup is done.


        >  SELECT pg_stop_backup();

I got this notice:  "NOTICE:  WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup"

I did not set up WAL archiving, but I did set up wal_keep_segments; I think this is only a problem if you do not have enough WAL segments kept around.   I think I am OK.  Let's see…
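As a quick sanity check that the setting actually took effect on the master after the restart, you can ask from psql:


        >    show wal_keep_segments;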



Replication (Slave) DB server settings

Now that we have backed up the database to the slave/replication database, we need to change it to an actual replication database.

postgresql.conf

First we need to edit /etc/postgresql/9.1/main/postgresql.conf


        >    sudo vi  /etc/postgresql/9.1/main/postgresql.conf


The following needs to be edited (the "@line" numbers show where each setting typically appears in the file):


@line 59
listen_addresses = '*'          # what IP address(es) to listen on;

@line 210
hot_standby = on                        # "on" allows queries during recovery


Here is the "why" for each as best I have researched.

listen_addresses = '*'

Listen for incoming connections on any interface.

hot_standby = on

Allows you to query, but not update, the replication DB.

recovery.conf

Now the recovery.conf file must be created.   I am using Ubuntu 12.04 with Postgres 9.1.  The recovery.conf file needs to be in the data directory, as defined in postgresql.conf; in my case that is "/var/lib/postgresql/9.1/main/".

So I copied the /usr/share/postgresql/9.1/recovery.conf.sample file to /var/lib/postgresql/9.1/main/recovery.conf and opened it for editing.



        >    cd  /usr/share/postgresql/9.1/
        >    sudo cp recovery.conf.sample  /var/lib/postgresql/9.1/main/recovery.conf
        >    sudo vi /var/lib/postgresql/9.1/main/recovery.conf


The following needs to be edited (the "@line" numbers show where each setting typically appears in the file):


@line 108
standby_mode = on

@line 110
primary_conninfo = 'host=172.16.2.207 port=5432 user=postgres'

@line 124
trigger_file = '/home/nrathi/failover'

Here is an explanation of each of these

standby_mode = on

Just sets the standby mode to on.

primary_conninfo

All the connection information the replication (slave) server needs to connect to the primary (master) server.

trigger_file

If this file exists, the server will stop being a replication server and start being a primary server.  It checks periodically for this file.
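So when the time comes to fail over, creating that file on the replication server should be all it takes to promote it (using the path configured above):


        >    touch /home/nrathi/failover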

Fix the ownership of the files (be very careful with the chown command; run it exactly as written).


        >  sudo chown -R postgres:postgres /var/lib/postgresql/9.1/main/
        >  sudo chmod 700 /var/lib/postgresql/9.1/main/recovery.conf

Now start the postgres database on the replication (slave) server


        >    sudo /etc/init.d/postgresql start

Now log into the database


        >    sudo su postgres
        >    psql -d postgres -U postgres

Then from postgres I ran the following commands



        >    \c nand
        >    select count(*) from data;
        >    select pg_last_xlog_receive_location();


And I can see that it is getting live data!  So that is it; you now have a replication server.
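One more quick check you can run on the replication server; this should return t (true) while it is acting as a standby:


        >    select pg_is_in_recovery();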




Wednesday, May 8, 2013

Set up a Quota for an Internet User

Using a simple iptables command, we can set up a quota for an internet user.

Step 1
Take a Linux machine which will act as a gateway for you and route all the traffic from the local network through it.

All we want is the following: packets arriving from the local net with a recipient's IP address somewhere on the internet have to be modified so that the sender's address is equal to the router's address. For the command examples, let us assume that the first interface 'eth0' is connected to the local net and that the router is connected to the internet via the second interface 'eth1'. The command for a shared internet connection then is simply:

# Connect a LAN to the internet
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# Let's say we have a user for whom I have to set a quota of 13 GB/month
iptables -A INPUT -p tcp -s 192.168.0.2 -m quota --quota 13958643712 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j DROP
or
iptables -A INPUT -p tcp -j CLASSIFY --set-class 1:12

So the above rule will allow the user (192.168.0.2) to use the internet service up to 13 GB.
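To see how much of the quota has already been consumed, you can list the INPUT chain with byte counters on the gateway:

iptables -L INPUT -v -n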

After a month you need to run
iptables -F
and then run the above commands again, or you can schedule a cron job for this. And we are all set; the shell script will look like this.


#!/bin/bash
# Step 1: flush the existing rules so the quota counter starts fresh
iptables -F

# Step 2
# Connect the LAN to the internet
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# Step 3
# Let's say we have a user for whom I have to set a quota of 13 GB/month
iptables -A INPUT -p tcp -s 192.168.0.2 -m quota --quota 13958643712 -j ACCEPT

# Step 4 (choose one of the two rules below as per your requirement)
iptables -A OUTPUT -p tcp --dport 80 -j DROP
# or:
# iptables -A INPUT -p tcp -j CLASSIFY --set-class 1:12

Save this as a shell script named quota.sh, then make it executable and schedule it:

chmod 777 quota.sh

crontab -e

0        0        1        *        *       /bin/bash  /path to/quota.sh

Then save the crontab by pressing 'Esc' and then ':wq'.

and we are ready to go.