I'll just comment on our experience. I agree that the 16-drive vdev
size is too big. The largest I've been comfortable going with is
a 12-drive, 9+3 configuration (2, 3, and 4-TB drives). A lot of our
customers are used to going with 11-drive RAID-6 with hot spare (fits
nicely in a Dell MD1200), so it's an easy sell to make a 12-drive raidz3
out of the same number of drives, with ~10x better MTTDL (by Richard
Elling's charts).
However, we have some setups here that have pushed it to a 13-drive,
10+3 config, using 4TB drives, and still get adequate performance for
the task (mostly-sequential genomics workloads). Fits pretty well in
a 40- or 45-slot JBOD, giving room for a hot (or cold) spare or a few.
Expect up to a 36-hour resilver time on a failed 4TB drive, if the pool
is close to full.
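For anyone wanting to reproduce that layout, here's a minimal sketch.
The pool name and /dev/sd* device names are placeholders (in practice
you'd use /dev/disk/by-id paths):

    # 13-drive 10+3 raidz3 vdev, plus one hot spare
    zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg \
        sdh sdi sdj sdk sdl sdm
    zpool add tank spare sdn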
Regards,
Marion
=================================================================
Subject: Re: [zfs-discuss] Large disk system 240 6T drives
From: <***@whisperpc.com>
Date: Thu, 19 Feb 2015 14:23:58 -0800
Post by t***@gmail.com
I am configuring 1 PB of storage using Dell hardware (R730 and MD3060e
60-drive enclosures) with 240 6T drives.
I would like to suggest that you get another md3060e, and only put 48
drives in each. When properly configured with RAID-Z2 8+2, this will put
two drives from each array into each tray. With that configuration, even
if a tray drops out for some reason (e.g., a failed expander chip), you won't
lose data.
Even better would be to use ten 24-drive units (or eleven for RAID-Z3
8+3). That would allow an entire tray to fail and still leave all the
arrays redundant, even at RAID-Z2.
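To spell out the arithmetic for the 8+2 case (24 vdevs of 10 drives):

    5 trays x 48 drives  = 240 = 24 vdevs x 10 drives
      -> 2 drives of each vdev per tray; a dead tray costs every
         vdev its entire redundancy margin, but no data is lost.
    10 trays x 24 drives = 240 = 24 vdevs x 10 drives
      -> 1 drive of each vdev per tray; a dead tray still leaves
         every vdev with single-parity protection.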
Looking at that disk tray, it appears to cost $11K empty. Is your rack
space really that tight? Using more, smaller trays is probably a better
choice for reliability. There are also Supermicro disk trays that might
work well for this use but cost significantly less.
Post by t***@gmail.com
Most advice I see is around 10-drive sets, but the overhead for
10-drive sets is too large for this size box.
There's a reason you're seeing advice along those lines: it delivers
the best performance for a large-capacity system while maintaining a
high degree of reliability. Smaller arrays will improve random I/O
performance, but the parity overhead climbs faster than most people are
willing to accept. Larger arrays have too much of a performance penalty
when a drive goes bad. Your best bet would be to stick to 10 (RAID-Z2) or
11 (RAID-Z3) drives per VDEV.
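For a rough sense of the capacity trade-off (raw parity fraction only,
ignoring metadata and free-space overhead):

    8+2  RAID-Z2:  2/10 parity = 20.0%
    8+3  RAID-Z3:  3/11 parity = 27.3%
    16+3 RAID-Z3:  3/19 parity = 15.8%

So 16+3 only buys back about four points of capacity over 8+2, at the
cost of much longer resilvers and wider failure exposure per VDEV.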
Post by t***@gmail.com
My thought is to build 2 ~500T pools with 120 drives each. Then I could
break one pool off to another server if I ever wanted to for additional
performance.
There are better ways to improve performance. The largest problem with
multiple pools on a single system is that they don't allow the system
to make optimum use of all the disks, which will have a performance
impact.
The second issue is that you won't be able to tune the system as well, as
ZFS doesn't have separate performance counters for each pool.
Post by t***@gmail.com
Post by Thomas Wakefield
If you say 16+3 is too big, what size would make you comfortable? Would
8+2 sets be more comfortable?
Not for me. I've seen raid6 sets of this geometry fail with 2TB drives
and wouldn't want to take the risk on larger drives.
I have as well, when using Desktop drives or WD RE drives. With
Enterprise SATA drives or SAS Nearline drives, I've never seen data loss
with dual-parity arrays.
Post by t***@gmail.com
Multiple smaller pools often provide a more manageable solution if you
are planning ahead for potential disaster recovery requirements.
While they are slightly more manageable, they are not more adjustable, and
they will usually deliver lower performance than a single large pool.
With a large pool, use multiple file-systems. Moving them to a different
server is as simple as a zfs send/receive.
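A minimal sketch of such a move (pool, filesystem, and host names are
placeholders; -R sends the filesystem along with its snapshots and
properties):

    # on the source host
    zfs snapshot -r tank/projects@migrate
    zfs send -R tank/projects@migrate | \
        ssh newhost zfs receive -F tank2/projects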
If you feel you absolutely must have multiple pools on a single physical
system, virtualization might be a good idea.
Peter Ashford