Paul1977:
My only conclusion is that the parity generation on the P420 is VERY FAST, combined with the data being striped across more disks.
That would be the FBWC on the controller doing that.
bagheera:
Paul1977:
My only conclusion is that the parity generation on the P420 is VERY FAST, combined with the data being striped across more disks.
That would be the FBWC on the controller doing that.
I didn't think the 1GB FBWC would make that much of a difference when sequentially reading and writing very large files (files I tested on were approx 20GB). Happy to be wrong about that though.
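For anyone curious what "parity generation" actually involves: in RAID 5 the parity chunk of each stripe is just the XOR of the data chunks, which a modern controller (or CPU) can compute far faster than the disks can write, so for large sequential full-stripe writes there is no read-modify-write penalty. A minimal illustrative sketch in Python, assuming a 256 KiB strip size; this is not how the P420 firmware is actually implemented:

```python
# Minimal sketch of RAID 5 full-stripe parity generation (illustrative only).
# Three "data disks" worth of chunks in one stripe; parity is their byte-wise XOR.
import os

CHUNK = 256 * 1024                      # assumed 256 KiB strip size

d1 = os.urandom(CHUNK)                  # data chunk destined for disk 1
d2 = os.urandom(CHUNK)                  # data chunk destined for disk 2
d3 = os.urandom(CHUNK)                  # data chunk destined for disk 3

# Parity chunk for the stripe: byte-wise XOR of all data chunks.
parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))

# For a full-stripe sequential write the controller computes this once per
# stripe and writes all four chunks out; no old data needs to be read back.
print(len(parity), "bytes of parity per", 3 * CHUNK, "bytes of data")
```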
Thank you for giving us such great feedback on your testing.
Interesting results with the RAID5, given that it has fallen out of favour in recent years.
Dynamic:
Thank you for giving us such great feedback on your testing.
Interesting results with the RAID5, given that it has fallen out of favour in recent years.
No problem.
The results would likely be different on faster drives, but on these 7200rpm SATA III drives RAID 5 has come out on top.
I didn't bother trying RAID6 as I didn't see how in a 4 disk array it could possibly be any quicker than RAID10 was.
I'll say this though, the NZ$120-odd for the HP P420 with 1GB FBWC (eBay from China) was well worth it.
I thought about this a couple of years ago and went for 4 drives Raid 6.
Since then I've added a 5th drive and I really like having the extra redundancy.
solutionz: RAID 5/6 sounds good in theory until you work out how long a resilver would take, the risk of additional drive failure, etc., and the effective downtime / performance degradation, which can exceed that of restoring from backups.
Some fair points.
I think I might do a test rebuild on my largest RAID 5 array (not the one I just created). For this particular array I went with RAID5 to maximise usable storage space, but it would be nice to know how long a rebuild takes.
And don't worry, I can restore from backup if it all falls over!
Here's a question for those in the know:
When rebuilding a RAID5 array after replacing a disk (specifically on HP controllers with FBWC, if that makes a difference), does the amount of data on the array affect the rebuild time? E.g. if you had an 8TB array, would the rebuild take twice as long if you had 6TB of data as opposed to 3TB?
Based on the length of time that the parity initialisation takes when you first create the array (when it has no data), I suspect that the amount of data makes no difference and that the rebuild time is dictated by the size of the disk - but I'd be interested to know for sure.
Can any experts shed some light on this for me?
Thanks
Paul1977:
Here's a question for those in the know:
When rebuilding a RAID5 array after replacing a disk (specifically on HP controllers with FBWC, if that makes a difference), does the amount of data on the array affect the rebuild time? E.g. if you had an 8TB array, would the rebuild take twice as long if you had 6TB of data as opposed to 3TB?
Based on the length of time that the parity initialisation takes when you first create the array (when it has no data), I suspect that the amount of data makes no difference and that the rebuild time is dictated by the size of the disk - but I'd be interested to know for sure.
Can any experts shed some light on this for me?
Thanks
I've never looked into this before, but my guess is it would take longer with more data, as it has to rebuild from the parity bits. It may need to do the same for the blank space too, in which case you would be right. Normally you just plug the new disk in, wait 5 minutes for it to get a good estimate of the time, and that's how long it will take - so it also depends a lot on how much the drive is being used at the time. Also, from past experience, when you get one failed disk the rest are in a dodgy state too, and the rebuild overhead can cause another disk to fail (hence why RAID 5 is falling out of favour).
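To make the "rebuild from the parity bits" part concrete: with single-parity RAID 5, any one missing chunk in a stripe can be recomputed by XORing the corresponding chunks from the surviving disks. A hypothetical Python sketch, not specific to HP controllers:

```python
# Sketch of RAID 5 reconstruction: recover the chunk from a failed disk
# by XORing the matching chunks from all surviving disks (data + parity).
import os

CHUNK = 64 * 1024
d1, d2, d3 = (os.urandom(CHUNK) for _ in range(3))        # data chunks
parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))  # parity chunk

# Say the disk holding d2 fails; rebuild its chunk from d1, d3 and parity.
rebuilt_d2 = bytes(a ^ c ^ p for a, c, p in zip(d1, d3, parity))

assert rebuilt_d2 == d2   # every stripe on the replacement disk is recreated this way
```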
Paul1977:
Here's a question for those in the know:
When rebuilding a RAID5 array after replacing a disk (specifically on HP controllers with FBWC, if that makes a difference), does the amount of data on the array affect the rebuild time? E.g. if you had an 8TB array, would the rebuild take twice as long if you had 6TB of data as opposed to 3TB?
I am not an expert, but the time taken is going to be implementation dependent. Some RAID systems will rebuild everything while others may flag blocks as unused and skip those during a rebuild. How many blocks end up being used would depend on how data is mapped within the RAID, and how the file system on top of it all behaves. At the more advanced end of the spectrum, ZFS can rebuild only the data due to the integration of RAID with the file system.
Assuming that every block must be touched during a rebuild, the time taken will be proportional to the size of the disks rather than the amount of data stored on them. The time taken to replace a drive in an eight disk RAID won't be much different from a four disk RAID with the same drive type, irrespective of how much data is on the disk.
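To put rough numbers on that: under the every-block-is-touched assumption, rebuild time is simply one disk's capacity divided by the sustained rebuild rate, and the amount of stored data never enters the calculation. A back-of-envelope sketch; the rebuild rates below are assumptions for illustration, not HP figures:

```python
# Rough model: rebuild time when every block on the replacement disk is rewritten.
# rebuild_rate_mb_s is an assumption (depends on controller, drives and host load).

def rebuild_hours(disk_tb: float, rebuild_rate_mb_s: float) -> float:
    disk_mb = disk_tb * 1_000_000        # TB -> MB (decimal, as drive vendors count)
    return disk_mb / rebuild_rate_mb_s / 3600

# The same 12 TB disk at a few plausible sustained rates:
for rate in (100, 150, 200):
    print(f"12 TB disk at {rate} MB/s -> {rebuild_hours(12, rate):.1f} h")

# Note that the amount of data on the array never appears - only the size of
# the disk being rebuilt and the rate the controller can sustain.
```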
SirHumphreyAppleby:
Assuming that every block must be touched during a rebuild, the time taken will be proportional to the size of the disks rather than the amount of data stored on them. The time taken to replace a drive in an eight disk RAID won't be much different from a four disk RAID with the same drive type, irrespective of how much data is on the disk.
My experience bears this out. It's the size of the disks being rebuilt rather than the data on them.
nitro:
SirHumphreyAppleby:
Assuming that every block must be touched during a rebuild, the time taken will be proportional to the size of the disks rather than the amount of data stored on them. The time taken to replace a drive in an eight disk RAID won't be much different from a four disk RAID with the same drive type, irrespective of how much data is on the disk.
My experience bears this out. It's the size of the disks being rebuilt rather than the data on them.
Thanks guys, that's what I suspected.
I'm doing a test rebuild right now.
I have a RAID5 array made up of 3x 12TB disks giving me 24TB (well, actually 21.86TB) of usable space. Yes, these disks are way bigger than what a lot of people say you should put in a RAID5 due to the long rebuild time.
I started the rebuild at 10:15pm last night, and as of right now the rebuild is at 83%. Based on that I estimate a total rebuild time of about 18 hours. If my estimate turns out to be accurate, I think that's actually pretty good. Apart from the rebuild, the array has been mostly idle, so that has probably helped with the rebuild time.
The array only has 6.59TB of data on it, but based on the discussion above it sounds like the rebuild time would be the same even if it had 20TB of data.
Write performance doesn't seem adversely affected and is still well over 300MB/s (although I only tested a couple of large sequential writes, about 15-20GB each).
Read performance takes a big hit and is about 50MB/s which is slow but (for me) perfectly usable (again, I only tested a couple of large sequential reads).
Rebuild complete with a time of a bit under 17 hours. Considering the size of the disks in the array (12TB) I think that is pretty respectable!
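Working the numbers backwards, 12TB rewritten in a bit under 17 hours comes out at roughly 200MB/s of sustained rebuild throughput, which also lines up with the earlier 83% estimate. A quick sketch of the arithmetic, using only the figures quoted above (the 17 hours is approximate):

```python
# Back-of-envelope check on the completed rebuild (figures from the posts above).
disk_tb = 12
rebuild_time_h = 17           # "a bit under 17 hours"

rate_mb_s = disk_tb * 1_000_000 / (rebuild_time_h * 3600)
print(f"Sustained rebuild rate: ~{rate_mb_s:.0f} MB/s")   # roughly 196 MB/s

# The mid-rebuild projection works the same way in reverse:
# estimated total time = elapsed time / fraction complete (0.83 at the time).
```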