I work for a start-up trying to squeeze blood out of every IT dollar, and RAID-0 has worked out well.
On Linux, there are two software RAID-0 solutions available: LVM and md. I ran into a simple case where md performs significantly better than LVM.
I put eight 1TB SATA 7200 rpm 3.5" drives into a SAS enclosure directly attached to a SAS HBA card in the server.
This results in eight "/dev/sdXX" drives magically appearing in Linux.
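(A quick "cat /proc/partitions" confirms that all eight actually showed up.)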
First I did "cat /dev/sdXX >> /dev/null &" for each of the eight drives and used "iostat -x -k 3" to watch what was happening. I saw about 67MB/s being read from each drive.
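In case it's useful, that baseline test boils down to the following, with /dev/sdb through /dev/sdi standing in for whatever names your drives actually get:
for d in /dev/sd[b-i]; do cat $d > /dev/null & done   # one sequential reader per drive
iostat -x -k 3   # per-device throughput, refreshed every 3 seconds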
Then I striped the 8 drives together using md via: mdadm --create /dev/md0 --level 0 --chunk 2048 --raid-devices 8 /dev/sdXX /dev/sdXY...
Then I did "cat /dev/md0 >> /dev/null" and used iostat again. I saw about 66MB/s being read per drive. Not much different than before.
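(To sanity-check the array geometry before benchmarking, either "cat /proc/mdstat" or "mdadm --detail /dev/md0" will show the chunk size and the eight member drives.)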
I killed off the md RAID-0, and then used LVM to create a RAID-0:
vgcreate --physicalextentsize 1024M tbraid0 /dev/sdXX /dev/sdXY...
lvcreate -i 8 -I 2048 -l 7448 tbraid0 -n vol
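The teardown and PV setup in between amount to something like this, again with /dev/sdb through /dev/sdi as placeholders for the real device names:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb   # repeat for each of the eight drives
pvcreate /dev/sd[b-i]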
I then did "cat /dev/mapper/tbraid0-vol >> /dev/null" and used "iostat". I only saw about 41MB/s per drive being read.
The raw LVM RAID-0 block device imposed a substantial overhead, delivering only about 60% of the per-drive throughput the hardware is capable of.
(P.S. I tried different chunk sizes for the LVM RAID-0, and it did make a difference: a 512K chunk size gave about 53MB/s per drive; 64K, about 38MB/s.)
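Re-testing a different chunk size just means re-creating the logical volume; e.g., for 512K (same volume group and names as above):
lvremove tbraid0/vol
lvcreate -i 8 -I 512 -l 7448 tbraid0 -n vol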
The version of Red Hat I used:
# uname -a
Linux foobar.com 2.6.9-78.0.1.ELsmp #1 SMP Tue Aug 5 10:56:55 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux