[afnog] G Mirror Configuration

Brian Candler B.Candler at pobox.com
Mon Aug 22 14:57:30 EAT 2005


On Mon, Aug 22, 2005 at 09:59:57AM +0000, Musa.E.A.Kijera wrote:
> Does anybody have any idea on configuring gmirror on FreeBSD 5.4 using
> 4 physical disks? It worked fine with 2 disks, one mirrored on the
> other, but for some funny reason it can't work with 2 mirrored on
> another two. Is it that it can only work with two disks?

I believe it can work with multiple disks, if they are all mirrors of each
other. That is, 1 disk mirrored against 3 others - you would do that if you
wanted higher read performance from the same data set. Or if you were
*really* paranoid about losing data :-)
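
For completeness, a multi-way mirror is just one gmirror label command with
all the providers listed (the ad0-ad3 names below are stand-ins for your real
disks):

   gmirror label -v gm0 /dev/ad0 /dev/ad1 /dev/ad2 /dev/ad3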

Otherwise, what you probably want is mirroring combined with striping or
concatenation, e.g.

   disk1  disk2  disk3  disk4
       \   /         \   /
       mirror        mirror
              \    /
              concat
or

   disk1  disk2  disk3  disk4
       \   /         \   /
       concat        concat
              \    /
              mirror

Either way, with four 80GB drives you'll get 160GB of usable space.
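
On FreeBSD 5.4 the first layout can be built with gmirror plus gconcat. A
rough sketch, assuming the four disks are ad0-ad3 and using made-up labels
gm0, gm1 and gc0 (substitute your real device names):

   # load the GEOM classes now; add geom_mirror_load="YES" and
   # geom_concat_load="YES" to /boot/loader.conf for the next boot
   kldload geom_mirror
   kldload geom_concat

   # two mirrored pairs
   gmirror label -v gm0 /dev/ad0 /dev/ad1
   gmirror label -v gm1 /dev/ad2 /dev/ad3

   # concatenate the mirrors into one big device
   gconcat label -v gc0 /dev/mirror/gm0 /dev/mirror/gm1

   # filesystem on top
   newfs /dev/concat/gc0
   mount /dev/concat/gc0 /data

If you'd rather stripe than concatenate, gstripe label works the same way in
place of gconcat label (the device then appears under /dev/stripe/).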

The first is arguably more robust: in the second, losing disk1 takes out the
whole disk1+disk2 concat, so disk2 is no use to you and everything then
depends on disk3 and disk4 both surviving. In the first, losing disk1 only
degrades one mirror; you only lose data if disk2 also fails.

But consider also what application you are using for your data. If it's a
mail server then I would strongly recommend just mounting two mirrored pairs
on /mail1 and /mail2 (say). Then you configure your user database so half
the users are on /mail1 and the other half are on /mail2. The same may be
feasible for other applications; e.g. Unix shell users can be on /home1 and
/home2.

This is much easier to grow in the future: just chuck in two extra disks,
mirror them, and mount them on /mail3. It's also easier to manage - e.g. if
you have one user generating a stupidly high amount of disk ops it's easier
to locate them.
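
If you go that way, the setup is just two independent gmirror volumes, each
with its own filesystem and mount point. A quick sketch along the same lines
as above (again, the ad0-ad3 device names and the gmail1/gmail2 labels are
only placeholders):

   gmirror label -v gmail1 /dev/ad0 /dev/ad1
   gmirror label -v gmail2 /dev/ad2 /dev/ad3
   newfs /dev/mirror/gmail1
   newfs /dev/mirror/gmail2
   mkdir /mail1 /mail2

   # /etc/fstab entries so they come back at boot
   /dev/mirror/gmail1   /mail1   ufs   rw   2   2
   /dev/mirror/gmail2   /mail2   ufs   rw   2   2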

If it's a server where disk *space* is more important than *performance*,
then a RAID5 array may be more appropriate (with four 80GB drives you'd get
240GB of usable space).
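
FreeBSD 5.4 doesn't do RAID5 through the gmirror/gconcat classes; gvinum is
the usual route. A very rough sketch of a gvinum config file (saved as, say,
raid5.conf), assuming each disk already has a partition set aside for vinum -
the device names, volume name and 512k stripe size are all just placeholders,
so check the gvinum documentation before relying on this:

   drive d0 device /dev/ad0s1d
   drive d1 device /dev/ad1s1d
   drive d2 device /dev/ad2s1d
   drive d3 device /dev/ad3s1d
   volume vol0
     plex org raid5 512k
       sd length 0 drive d0
       sd length 0 drive d1
       sd length 0 drive d2
       sd length 0 drive d3

   # then:
   gvinum create raid5.conf
   newfs /dev/gvinum/vol0

Remember that RAID5 writes are slower than mirrored writes, so it's a poor
fit for a busy mail spool.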

Regards,

Brian.


