Posts Tagged ‘Slony’

Looking for

October 13, 2010 2 comments

I had always built Slony from source, but for the first time I tried using the community packages from “”. I was using PostgreSQL 8.3, so it was really easy to find the correct RPMs at , but as I tried installing the Slony package I was hit with an error:

" not found"

This looked really weird to me, as I already had the libpq library installed as , but it was still looking for .

Looking for a solution, I started going through the 8.3 RPMs, checking each one for the missing file… bingo! I found one that had it: compat-postgresql-libs was the answer to this problem.

After installing that package, Slony got installed successfully without any problems.
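For anyone hitting the same thing, the fix can be sketched like this (the yum invocations and the library glob are assumptions based on a stock RHEL/CentOS-style setup, not commands from the original post):

```shell
# Sketch, assuming a yum-based system: ask which package provides the libpq
# soname the Slony RPM wants, then install the compatibility package.
PKG=compat-postgresql-libs

# yum provides 'libpq.so.*'     # shows which RPM carries each libpq soname
# yum install -y "$PKG"         # installs the older libpq alongside the 8.3 libs

echo "installing: $PKG"
```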

Shoaib Mir

Categories: PostgreSQL Tags: , ,

(SLONY) cache lookup failed for type

October 13, 2010 1 comment

I have been away from blogging for quite a while now, as work has been keeping me extremely busy, but I finally got some time today to blog about an interesting problem I saw with Slony yesterday…

While at a customer site, I upgraded them to Slony version 1.2.21 and, as part of the process, re-initialized the cluster. The whole thing went through without a problem, but right after everything was synced across, one of the sysadmins spotted this error in the database server log file:

"cache lookup failed for type 27889"

on doing an INSERT into the sl_log_1 table.

Having never seen such a problem with Slony before, I was a little worried; this was a production database, so we had to fix it very quickly.

After a little research, I found that simply restarting the application server that connects to the database solves the problem. The application server had established its connections to the database before we dropped and re-created the cluster, so those sessions were still looking for old values sitting in their caches (such as plans referencing type OIDs that no longer existed). Once we restarted the application server, everything was fine.
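One quick way to confirm the diagnosis is to check whether the type OID from the error message still exists in pg_type. A sketch (the OID is the one from this incident; the database name and the commented psql call are assumptions):

```shell
# Sketch: the OID comes from the "cache lookup failed" error message.
OID=27889
SQL="select count(*) from pg_type where oid = ${OID}"
echo "$SQL"
# psql -At -c "$SQL" yourdb    # 0 means the cached OID really is gone
```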

Shoaib Mir

TRUNCATE problems with Slony

June 3, 2010 5 comments

Slony () is great for high availability and load balancing. I have been recommending it to users for years, and I have not really seen any major problems with it as long as you keep proper checks on the whole replication setup, with monitoring through something like Nagios.

But there is one annoying thing that has always given me trouble: a user runs a TRUNCATE on one of the master tables in order to do maintenance, and all of a sudden you start getting errors like this on the slave nodes:

2010-06-01 11:02:09 EST ERROR  remoteWorkerThread_1: "insert into "public"."table1" ("a","b") values ('1','one');"
ERROR:  duplicate key value violates unique constraint "table1_pkey"

The reason behind this is…

  • You ran a TRUNCATE on a table at the master node and assumed the statement would be replicated to the slaves as well
  • TRUNCATE does not fire triggers the way INSERT/UPDATE/DELETE do (for PostgreSQL < 8.4), so the slaves never got it and still hold the old copy of the table
  • The table on the master is now empty while the slave tables still hold the old records, so new data being inserted can hit duplicate key violation errors on the slaves

I have used the following approach in the past to solve it…

Do a TRUNCATE on the slaves for the same table; in the Slony log files you will then see all the INSERTs/UPDATEs/DELETEs that had been queuing up since the duplicate key violations started finally going through.
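A minimal sketch of that fix, using the table from the error above (the hostname and database name are assumptions; since pre-8.4 TRUNCATE fires no triggers, Slony's deny-access trigger on the subscriber should not block it):

```shell
# Sketch: truncate the out-of-sync table directly on the slave node.
SQL='TRUNCATE TABLE public.table1;'
echo "$SQL"
# psql -h slavehost -d yourdb -c "$SQL"
# then watch the slon log on the slave: the queued changes should start applying
```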

In order to avoid these scenarios, keep checking Slony lag times from sl_status in the replication schema and alert if the lag goes above a dangerous limit. If it does, go and check the Slony logs, as they usually tell the detailed story.
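A simple cron-style check along these lines is usually enough (the cluster name, database, threshold, and the commented psql call are all assumptions):

```shell
# Sketch: alert when Slony lag exceeds a threshold. The replication schema
# is the cluster name prefixed with an underscore.
CLUSTER=mycluster
THRESHOLD=1000     # max acceptable st_lag_num_events before alerting
SQL="select coalesce(max(st_lag_num_events), 0) from _${CLUSTER}.sl_status"

# LAG=$(psql -At -d yourdb -c "$SQL")
LAG=0              # placeholder so the sketch runs standalone
if [ "$LAG" -gt "$THRESHOLD" ]; then
    echo "Slony lag ${LAG} events exceeds ${THRESHOLD}; check the slon logs" >&2
fi
```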

Another option for avoiding TRUNCATE altogether is table partitioning: when you need to get rid of old data, drop the specific partition (removing it from the Slony cluster as well) instead of truncating. This also saves you a lot of routine database maintenance such as vacuuming, because you are no longer deleting data with DELETEs; you are dropping the whole partition.
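The partition-drop route can be sketched roughly like this (the cluster name, conninfo, table ID, and partition name are all made up for illustration; the ID must match what was registered for the partition in sl_table):

```shell
# Sketch: take the old partition out of the replication set, then drop it
# on every node. Pipe the script into slonik once the conninfo is real.
SCRIPT=$(cat <<'EOF'
cluster name = mycluster;
node 1 admin conninfo = 'dbname=yourdb host=master user=slony';
node 2 admin conninfo = 'dbname=yourdb host=slave user=slony';
set drop table (origin = 1, id = 42);
EOF
)
echo "$SCRIPT"
# echo "$SCRIPT" | slonik
# psql -h master -d yourdb -c 'DROP TABLE sales_2009q4;'   # repeat per node
```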

PostgreSQL 8.4 and above support TRUNCATE triggers, but I am not sure whether that has been implemented in Slony yet, or what exactly the plan is; it would be great to have it. I have seen Bucardo () support TRUNCATE through triggers, though.

Shoaib Mir

Categories: PostgreSQL Tags: , ,

Setting up Slony cluster with PerlTools

August 5, 2009 2 comments

A few days back, while I was giving a training session on Slony and showing the students the steps for building a Slony cluster using the perltools, they asked me to write the steps down for them, as it was very hard to memorize everything. I did just that, and thought I would share it with everyone, as it might help.

First of all, make sure the perltools are available for your Slony installation; once you have confirmed that, follow the steps given below:

– Edit the slon_tools.conf file according to your cluster's needs, i.e. change the node and set information and the tables to be replicated. By default this file is located at /usr/local/etc/slon_tools.conf, and the perltools look in that directory for the configuration file.
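For reference, the relevant parts of slon_tools.conf look roughly like this; it is a Perl file, and the cluster name, conninfo, and table list below are placeholders, not values from this post:

```perl
# Sketch of a minimal slon_tools.conf (placeholders throughout).
$CLUSTER_NAME = 'mycluster';
$LOGDIR       = '/var/log/slony';
$MASTERNODE   = 1;

add_node(node => 1, host => 'master', dbname => 'yourdb',
         port => 5432, user => 'slony');
add_node(node => 2, host => 'slave',  dbname => 'yourdb',
         port => 5432, user => 'slony');

$SLONY_SETS = {
    "set1" => {
        "set_id"       => 1,
        "table_id"     => 1,
        "pkeyedtables" => [ 'public.table1' ],
    },
};
```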

– Now first of all you need to initialize the cluster

slonik_init_cluster | slonik

– Once that step is successful, start the Slon processes on each of the nodes involved
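With the perltools this step is usually just a slon_start call per node; a dry-run sketch (node IDs follow the two-node example in this post):

```shell
# Sketch: start one slon daemon per node. slon_start reads slon_tools.conf
# for the conninfo. The echo makes this a dry run; drop it to really start.
STARTED=""
for node in 1 2; do
    echo slon_start "$node"
    STARTED="$STARTED$node "
done
```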

– Create the Sets, in our case we just had one set so the process goes as

slonik_create_set 1 | slonik

(Where 1 is the ID of the first set which we had set in the slon_tools.conf file)

– Subscribe the sets now, in our case we had just two nodes and one set

slonik_subscribe_set 1 2 | slonik

(Where 1 is the ID of the set and 2 is the node to which you want to subscribe)

– Once the subscription is successful you should have replication working, which can be tested by adding a row to one of the replicated tables on the master and verifying the change on your slave node.

The perltools really make life easy and get rid of the extra steps of writing slonik scripts yourself. Once you have slon_tools.conf properly configured, generating all the required scripts takes just a few minutes.

Just make sure that whenever you change your Slony cluster (adding new sets or new tables, merging sets, doing switchovers or failovers) you keep updating slon_tools.conf regularly; that way it will not take much time to rebuild all the scripts for the latest cluster layout if that is ever needed.

Shoaib Mir

Categories: PostgreSQL Tags: , ,

DDL changes in multiple Slony sets

May 8, 2009 2 comments

I had a client scenario where we were replicating 16 different schemas using 16 Slony sets, and when running DDL statements it was rather confusing for the customer to remember which schema had which set number.

In the end I came up with a simple script that takes its input as: schemaname ddl.sql

Here is how the script was…

#!/bin/bash
# Usage: ./slony_ddl.sh <schemaname> <ddl.sql>
STR_SCHEMA=$1
STR_FILE=$2
echo "Schema is $STR_SCHEMA"
echo "Filename is $STR_FILE"
names=( schema1 schema2 schema3 schema4 schema5 schema6 schema7 schema8
        schema9 schema10 schema11 schema12 schema13 schema14 schema15 schema16 )
for (( i = 0; i < ${#names[@]}; i++ )); do
    if [ "${names[$i]}" = "$STR_SCHEMA" ]; then
        SCHEMA_ID=$(( i + 1 ))   # set IDs follow the order of the names array
        echo "Set ID for Schema is $SCHEMA_ID"
        slonik_execute_script --config=slon_tools.conf "$SCHEMA_ID" "$STR_FILE" | slonik
    fi
done

Shoaib Mir

Categories: PostgreSQL Tags: ,

‘public’ schema and Slony

April 8, 2009 4 comments

While setting up Slony the other day I found this weird problem…

I had 8 schemas in my database and none of them was named ‘public’, and while trying to set up Slony for that database, the INIT cluster step started giving errors like…

“No schema available to setup objects”

I had no idea it was simply a case of the ‘public’ schema being missing; as soon as I added an empty public schema, everything seemed to work fine, and I was able to set up 8 sets for my 8 schemas, with replication working nicely as well.

Update: I just found out the problem arose because I had deleted the public schema while the user’s search_path was still defaulting to ‘public’; changing the search_path fixed the problem.
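The search_path fix can be sketched like this (the role and schema names are illustrative, and the psql call is commented out as an assumption about the environment):

```shell
# Sketch: point the role's search_path at schemas that actually exist,
# instead of the dropped 'public' schema.
SQL='ALTER ROLE appuser SET search_path = schema1, schema2;'
echo "$SQL"
# psql -d yourdb -c "$SQL"    # takes effect for new sessions
```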

Shoaib Mir

Categories: PostgreSQL Tags: ,

Adding new tables to a Slony cluster

March 6, 2009 Leave a comment

I have been using Slony for quite a while now, and it has been very reliable and consistent for me. The only thing I find complicated about using it is making DDL changes (i.e. adding new tables to the cluster, changing tables to add new columns, etc.).

Tables that are already replicated can easily be modified using Slony's EXECUTE SCRIPT for ALTER TABLE commands. The problem comes when you need to add a new table to an existing Slony set: there is currently no way to add a table to a set once the set has been created; you can only drop a table from a set, not add one. But there is a way around that…

– Before doing anything, make sure the Slony cluster is in good shape by querying “select * from $clustername.sl_status” on the master node and checking for abnormal lag times. If nothing looks abnormal, proceed with adding the new tables.

– Create the new tables on all the nodes involved in the Slony cluster using PSQL.

– Now create a new set (call it a temporary set) in the Slony cluster containing the new tables/sequences.

– Once the set is created, subscribe it exactly the same way as the original set you wanted to add your tables to.

– After the subscriptions are done, merge the temporary set into your original set (the merge will only work if the old and new sets have identical subscriptions).
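With the perltools the whole dance looks roughly like this, as a dry run printing the commands (the set and node IDs are illustrative, and the slonik_merge_sets argument order, node then target set then set to merge, is an assumption worth checking against your Slony version's documentation):

```shell
# Sketch: temporary set 2 carries the new tables; after it is subscribed to
# the same nodes as set 1, merge it back into set 1. Run each command through
# slonik for real, e.g. slonik_create_set 2 | slonik
CMDS="slonik_create_set 2
slonik_subscribe_set 2 2
slonik_merge_sets 1 1 2"
echo "$CMDS"
```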

Now you have the new tables/sequences in your original set, and the temporary set is no longer in $cluster.sl_set. You can verify this by querying $cluster.sl_table:

select tab_set from _test_cluster.sl_table where tab_relname = ‘<your_new_table_name>’;

Shoaib Mir

Categories: PostgreSQL Tags: , ,