TRUNCATE problems with Slony

Slony (http://slony.info/) is great for high availability and load balancing. I have been recommending it to users for years now and haven’t really seen any major problems with it, provided you have proper checks on the whole replication setup using monitoring with something like Nagios.

But there is one annoying thing that has always given me trouble: a user goes and runs a TRUNCATE on one of the master tables in order to do maintenance, and all of a sudden you start getting errors like this on the slave nodes:

2010-06-01 11:02:09 EST ERROR  remoteWorkerThread_1: "insert into "public"."table1" ("a","b") values ('1','one');"
ERROR:  duplicate key value violates unique constraint "table1_pkey"

The reason behind this is…

  • You ran a TRUNCATE on a table on the master node and assumed that the statement was replicated to the slaves as well
  • TRUNCATE does not fire triggers the way INSERT/UPDATE/DELETE do (on PostgreSQL < 8.4), so the slaves never received the event and still have the old copy of the table
  • On the master the table is now empty, but the slave table still has the old records, which means that when new data is inserted you may start getting duplicate key violation errors, as sketched below
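
A minimal sketch of that failure mode (the table and values here just mirror the error message above and are illustrative, not from a real setup):

    -- On the master: a user empties the table behind Slony's back.
    -- On PostgreSQL < 8.4 there is no trigger to capture this, so nothing
    -- is queued for the slaves and their copy of the table keeps its rows.
    TRUNCATE public.table1;

    -- On the master: fresh data goes in and is replicated as usual.
    INSERT INTO public.table1 (a, b) VALUES ('1', 'one');

    -- On the slave: the replicated INSERT collides with the stale row that
    -- was never truncated, and the remote worker reports:
    --   ERROR:  duplicate key value violates unique constraint "table1_pkey"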

I have used the following approach in the past to solve it…

Do a TRUNCATE on the slaves for the same table, and you will then see all those INSERTs/UPDATEs/DELETEs going through in the Slony log files that had been queued up since Slony first started hitting the duplicate key violations.
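
For the table from the error above, that simply means running the following on each slave node (on PostgreSQL < 8.4 the TRUNCATE is not caught by Slony's deny-access trigger either, so it goes straight through):

    -- Empty the stale copy on the slave; the queued INSERT/UPDATE/DELETE
    -- events from the master can then be applied cleanly.
    TRUNCATE public.table1;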

In order to avoid these scenarios, you have to keep checking the Slony lag times from sl_status in the replication schema and alert if the lag goes above a dangerous limit. If it does, go and check the Slony logs, as they usually tell the detailed story.
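
A minimal sketch of such a check, assuming a Slony cluster named "mycluster" (so its schema is _mycluster) and a 30 minute alarm threshold; adjust both for your own setup:

    -- Run against the origin node; one row per subscriber
    SELECT st_origin,
           st_received,
           st_lag_num_events,
           st_lag_time
      FROM _mycluster.sl_status
     WHERE st_lag_time > interval '30 minutes';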

Another option to avoid doing TRUNCATE altogether is table partitioning: when you need to drop old data, just get rid of the specific partition and drop it out of the Slony cluster as well, instead of truncating. This also saves you a lot of routine database maintenance such as vacuuming, because you are not deleting data with DELETEs but dropping the whole partition.
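
A rough sketch using old-style inheritance partitioning (the measurements tables here are made up for illustration); once the partition has been removed from the replication set with slonik's SET DROP TABLE command, getting rid of the old data is just:

    -- Detach the old child partition from its parent, then drop it.
    -- No DELETEs, no dead tuples, nothing for VACUUM to clean up.
    ALTER TABLE public.measurements_2010_01 NO INHERIT public.measurements;
    DROP TABLE public.measurements_2010_01;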

I have recently seen TRUNCATE trigger support in PostgreSQL 8.4 and above, but I am not sure if it has been implemented in Slony yet or what exactly the plan is; it would be great to have that in Slony. I have seen Bucardo (http://blog.endpoint.com/2009/07/bucardo-and-truncate-triggers.html) supporting TRUNCATE with triggers, though.
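
For reference, this is what a statement-level TRUNCATE trigger looks like on PostgreSQL 8.4 and above; the function below is only a placeholder to show the syntax, not Slony's actual log trigger:

    -- Placeholder trigger function; a replication system would record the
    -- TRUNCATE here so it could be forwarded to the slaves.
    CREATE OR REPLACE FUNCTION note_truncate() RETURNS trigger AS $$
    BEGIN
        RAISE NOTICE 'TRUNCATE on %', TG_TABLE_NAME;
        RETURN NULL;   -- return value is ignored for statement-level triggers
    END;
    $$ LANGUAGE plpgsql;

    -- TRUNCATE triggers can only be statement-level (FOR EACH STATEMENT)
    CREATE TRIGGER table1_truncate_trig
        BEFORE TRUNCATE ON public.table1
        FOR EACH STATEMENT
        EXECUTE PROCEDURE note_truncate();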


Shoaib Mir
shoaibmir[@]gmail.com

Categories: PostgreSQL
  1. June 8, 2010 at 2:57 am

    I had a chat with folks at PGCon about this, and getting the analysis going has been on my ToDo list ever since.

    I did a writeup of the “sticking point”, which relates to the fact that there are two extra variations of TRUNCATE:

    truncate a, b, c;
    truncate a cascade;

    That complicates what to do about TRUNCATE. Comments welcome.

    • June 8, 2010 at 2:34 pm

      >>truncate a, b, c;
      >>truncate a cascade;

      Just needed a little more explanation on truncate cascade trigger behavior…

      Let’s say ‘a’ has references to ‘b’ and ‘c’, to make both statements almost the same.

      Does this mean both ‘b’ and ‘c’ need to have TRUNCATE triggers (the Slony ones) as well so that the change can be replicated to the slaves? And if ‘b’ and ‘c’ are not part of the Slony cluster, will the truncate not go through properly?
