
melfneerg.com

 - 'cos life is like that


About the Author
Tudor Davies

Tudor is a techie turned manager who fights like mad to keep his tech skills honed and relevant: everything from web hosting, networking, *nix and the like. He is constantly developing and co-ordinating with others to make the web a better (and easier to use) place.

pen

Friday, 25th Feb 2011  Posted @ 14:06

After a couple of days off with the kids and missus and a morning of meetings, I thought I would finish off the pen config.

To recap: I have two RAQ3 boxes, alpha and beta, running CentOS 4.8 with ucarp. They also have Webmin installed, with most services disabled apart from Denyhosts, mysqld, named, postfix and sshd.
Incidentally, I found problems with running named on the carp address, so I added an exec /sbin/service named restart to the vip-up script in the /etc/sysconfig/carp/ directory, which fixed it neatly.
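
For reference, the whole vip-up script only needs a few lines. This is a rough sketch rather than my exact script: 192.168.1.10 is just a placeholder for the carp address, the /24 mask is an example, and ucarp normally hands the interface name to the script as its first argument - adjust for however your ucarp package is put together.

#!/bin/sh
# vip-up - ucarp runs this when the node becomes master
# $1 is the interface ucarp passes in; the carp address below is a placeholder
/sbin/ip addr add 192.168.1.10/24 dev "$1"
# restart named so it binds to the carp address that has just come up
exec /sbin/service named restart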

My PC has its DNS set to the carp address shared between alpha and beta, and this fails over nicely, with nary a lookup failure. The next step was to get pen running.
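
If you want to see the failover for yourself, query the carp address directly while it happens (192.168.1.10 again standing in for the carp address, and the hostname is just an example):

dig @192.168.1.10 www.hostname1.tld +short

Run that in a loop while pulling the cable on the master and the answers should keep coming with barely a blip.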

yum install pen
Simple enough. Setting up a basic failover balancer is done thus:
pen 80 -C localhost:19000 www.hostname1.tld:80 www.hostname2.tld:80
This sets up pen listening on port 80 (on all addresses, including the carp address), with its control channel on localhost:19000. The first request, and all subsequent requests from the same host, gets sent to www.hostname1.tld; the next host to come in gets www.hostname2.tld, and its later requests stick to that server too. I ran that on alpha and then on beta as well.
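
The -C localhost:19000 part gives you a control channel you can poke at with penctl, which ships in the same package. Something like the following should print the backends pen knows about and their current state (the exact output format varies between pen versions):

penctl localhost:19000 servers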

The next step was to point a DNS entry at the carp address, browse to it and see where I ended up. On the first web server. Try it from another host - get the second server.

Then I failed the first pen/carp server (alpha) by pulling its network cable out and used another workstation to browse to the DNS entry defined above. Yet again, I ended up on the first web server, proving that carp had failed over, that DNS lookups were still working, and that pen was pointing me at a live web server.

If you want, you can then start failing the backend web servers and watch pen mark them as down, sending packets only to the live server. Frankly, it just works and it works well.
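
An easy way to watch that happening: stop the web server on one of the backends (assuming Apache there),

/sbin/service httpd stop

then keep an eye on pen's server table from the active pen/carp box and watch the dead backend get flagged:

watch -n 2 penctl localhost:19000 servers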

Now that I have that configuration nailed down, I will be trying pound, as that offers reverse proxying as well as load balancing, meaning you can offload some of your web server processing.




