I-Space Research Labs


More GFS tuning

on Jun.25, 2009, under GFS, Tech Stuff

Finally had some time to do some more GFS tuning on my test cluster.

First thing I’ve discovered: even when writing small (~1MB) files, using directio cuts your throughput in half. It’s a fail, don’t use it. Same with data journaling. Don’t bother.
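For reference, both of those are per-file/per-directory flags toggled with gfs_tool setflag, so it’s easy to reproduce the fail yourself. A quick sketch, assuming stock gfs_tool flag names and a placeholder directory:

# Make new files under this directory use direct I/O (the throughput killer):
gfs_tool setflag inherit_directio /mnt/gfs/somedir

# Make new files under this directory use data journaling (also not worth it):
gfs_tool setflag inherit_jdata /mnt/gfs/somedir

# And to turn them back off:
gfs_tool clearflag inherit_directio /mnt/gfs/somedir
gfs_tool clearflag inherit_jdata /mnt/gfs/somedir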

But the SWEET stuff is glock purging and demote_secs. On a 100Mbps network connection to an old cranky Dell workstation running the iSCSI target, with 3 servers writing 1000 1MB files to random locations on the GFS filesystem, I saw up to 6.8MB/sec throughput on all 3 servers at the same time. Hopefully I’ll get some real SAN hardware soon so I can get some real performance.

The two parameters are glock_purge and demote_secs. You set them with:

gfs_tool settune /my/gfs glock_purge X

gfs_tool settune /my/gfs demote_secs X

glock_purge takes an argument that tells gfsd what percentage of unused locks to purge every 5 seconds. Red Hat recommends starting at 50 and working your way up. I’m pushing 90 right now, which may be a bit too aggressive, but then I’m just benchmarking. Production may turn out to be different.

demote_secs is the interval, in seconds, at which gfsd wakes up to demote locks and flush data to disk. So it stands to reason that a lower number may be beneficial. I’m currently at 5, which may be silly, but I like to see what the extremes look like as I dial things in. The default is 300 seconds.
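To sanity-check what a mount is actually running with, gfs_tool can dump the current tunables. Something like this, assuming the same mount point I use below:

# List the mount's current tunables and pull out the two we care about:
gfs_tool gettune /mnt/gfs | grep -E 'glock_purge|demote_secs'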

You can read more about them here.

Here’s how I set up my mounts on all 3 servers:

mount -t gfs /dev/myvg/mygfs /mnt/gfs -o acl,noatime,nodiratime

gfs_tool settune /mnt/gfs statfs_fast 1

gfs_tool settune /mnt/gfs glock_purge 90

gfs_tool settune /mnt/gfs demote_secs 5

Remember these numbers are probably not good for production.
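One gotcha: as far as I know, settune values don’t survive a remount, so if you do take something like this to production you’ll want to reapply them at boot. A minimal sketch, rc.local style, using the mount point and values from above:

#!/bin/bash
# Reapply GFS tunables after the filesystem is mounted --
# settune settings are lost on every remount.
GFSMNT=/mnt/gfs

if mount | grep -q "$GFSMNT"; then
    gfs_tool settune $GFSMNT statfs_fast 1
    gfs_tool settune $GFSMNT glock_purge 90
    gfs_tool settune $GFSMNT demote_secs 5
fi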

On one of the servers, I do a little for loop to set up the test:

for i in {1..1000}; do mkdir /mnt/gfs/$i;done

This creates 1000 folders on the GFS mount.

Then a short bash script:

#!/bin/bash
## gfshammer.sh
## GFS testing script. Yay.

echo "Starting: `date`" >> ~/timefile

for i in {1..1000}
do
    # Pick one of the 1000 directories created above at random
    NUM=$((RANDOM % 1000 + 1))
    # Random size in 1KB blocks, so files top out around 1MB
    SIZE=$((RANDOM % 1000 + 1))
    dd if=/dev/urandom of=/mnt/gfs/$NUM/test$i bs=1024 count=$SIZE
done

This will create randomly sized files full of random data in random places on the GFS filesystem. I ran it on all 3 nodes at the same time and saw lows of 4MB/sec and highs of 6.8MB/sec, usually around 6MB/sec. That ain’t bad given the underlying infrastructure: 100Mbps LAN, single spindle on an old workstation. At this point I think I’m being bottlenecked by the network. I was getting around 6MB/sec with just a single node without any glock tuning the other day, so this seems like a big jump forward.
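One note on the script: it only logs a start time, so ~/timefile never tells you how long a run took. Bracketing the run with timestamps fixes that; a sketch, assuming the script is saved as ~/gfshammer.sh, made executable, and GNU date is available:

START=$(date +%s)
~/gfshammer.sh
END=$(date +%s)
# Append the finish time and elapsed seconds next to the start entry:
echo "Finished: `date`, elapsed: $((END - START))s" >> ~/timefile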

Also, I tried GFS2, and I’m sad to report that its performance is nowhere near what I was getting with GFS. I can’t tune glocks as GFS2 is supposed to be self-tuning, but I saw a pretty significant drop in throughput when I tried it, so back to GFS we go…


GFS Tuning and Iran

on Jun.24, 2009, under GFS, Tech Stuff

I’m not surprised that the Iranian government stacked the deck and tried to screw their own people over. People who are in power illegitimately usually go to any means to ensure that they STAY in power. I’m also disappointed in our own talking heads, especially McCain, who think that we should be charging in somehow and fixing this for the Iranians. Because it worked so well in Iraq. A democracy that is forced on people isn’t a democracy. Countries like Iraq, which have spent the last few decades under the oppressive rule of a dictator, don’t know what to do with the democracy they’ve been given. The fate of the Iranian people lies in their own hands; all we can do as a responsible nation is make sure that they don’t get all machinegunny on their own people.

Anyway, today’s topic is… GFS tuning. I was doing some benchmarking with a cluster of three nodes all tied back to a GFS filesystem shared via iSCSI. I don’t think the underlying network is gigabit, more likely Fast Ethernet, and the iSCSI server is a measly little workstation. With the default parameters, I was initially getting 5 megabytes/sec dd’ing /dev/urandom to a file until it was a gigabyte in size. Once I had that baseline down, I ran a few tests. This is the standard command I used in all my tests:

dd if=/dev/urandom of=/my/gfs/file bs=1024 count=1000000 <- Random garbage of about a gig in size
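If you want to be more systematic than re-running that by hand, the same baseline can be wrapped in a loop that sweeps glock_purge values. A sketch; the mount point and value list are just my guesses at a reasonable range, and each pass writes a full gig, so give it time:

#!/bin/bash
# Time the 1GB dd baseline at several glock_purge settings.
for PURGE in 0 25 50 75 90; do
    gfs_tool settune /mnt/gfs glock_purge $PURGE
    echo "glock_purge=$PURGE"
    time dd if=/dev/urandom of=/mnt/gfs/ddtest bs=1024 count=1000000
    rm -f /mnt/gfs/ddtest
done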


