Geekzone: technology news, blogs, forums


Scott3


#318881 28-Feb-2025 23:21

Have ordered some more drives for my NAS.

Currently running 5 drives in SHR1 (array with 1 drive parity).

 

My order will bring me to 8 drives. I'm leaning towards SHR2, but obviously that gives up an extra full drive's worth of capacity, which may be overkill for home use.

Irreplaceable stuff has an onsite and (soon to be) offsite backup. Everything else is replaceable (though re-acquiring it would be quite a chore) and is not backed up.

 

The maximum volume size limit means the 8th disk would need to be its own volume if I went SHR1.
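For reference, a rough sketch of the capacity trade-off being weighed here (simplified: real SHR capacity calculations differ when drive sizes are mixed, and filesystem overhead is ignored):

```python
# Usable capacity for N identical drives under SHR1 (single parity,
# like RAID 5) vs SHR2 (dual parity, like RAID 6).

def usable_tb(num_drives: int, drive_tb: float, parity_drives: int) -> float:
    """Capacity left after reserving `parity_drives` worth of space for parity."""
    return (num_drives - parity_drives) * drive_tb

shr1 = usable_tb(8, 18, 1)  # 126 TB usable
shr2 = usable_tb(8, 18, 2)  # 108 TB usable
print(f"SHR1: {shr1} TB usable, SHR2: {shr2} TB usable")
```

So with 8x18 TB drives the step from SHR1 to SHR2 costs 18 TB of usable space in exchange for surviving a second drive failure.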


Andib
  #3348895 1-Mar-2025 08:47

How big are the drives and what are you storing on them?

 

If the drives are 8TB+ and not just for “Linux ISOs”, I’d suggest two parity drives, as rebuilding a disk of that size will take many hours (if not closer to a day), and if the drives are all the same age, the added wear during a rebuild may cause a second failure.

 

If the data is easily re-downloadable the extra capacity may be more beneficial than the extra redundancy.
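A back-of-envelope estimate of that rebuild window: a parity rebuild has to write the entire replacement drive, so capacity divided by sustained throughput is a hard floor (the throughput figures below are assumptions; real rebuilds under live workload are slower still):

```python
# Lower bound on rebuild time: whole-drive write at a sustained rate.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Hours to write `capacity_tb` terabytes at `throughput_mb_s` MB/s."""
    total_bytes = capacity_tb * 1e12
    seconds = total_bytes / (throughput_mb_s * 1e6)
    return seconds / 3600

# 18 TB at an optimistic 200 MB/s vs a more realistic loaded 100 MB/s
print(f"{rebuild_hours(18, 200):.0f} h best case, "
      f"{rebuild_hours(18, 100):.0f} h under load")
```

Even the optimistic case is a full day of degraded, redundancy-free operation on an 18 TB drive.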









Scott3


  #3349292 2-Mar-2025 21:50

Andib: How big are the drives and what are you storing on them? […]

All are ex-datacentre 18 TB WDs with 22k to 38k power-on hours.

It is both my primary storage for personal (irreplaceable) documents, photos, etc. (this portion gets backed up) and for replaceable media (not backed up). Of course, the quantity of media means re-downloading would be quite a chore, and some of it is older media that will be harder to find in the future.

Leaning towards SHR2.


Handle9

  #3349346 3-Mar-2025 03:01

Andib: If the drives are 8TB+ […] the added wear of the drives during a rebuild may cause a second failure. […]

24 hours is going to push a hard drive over the edge? Really?

 

I can see reasons to use a second parity drive, but not the one you have described.

 

 

 

 




MadEngineer
  #3349515 3-Mar-2025 10:21

Yes. When one drive in an array fails, experience suggests the others are probably not far behind. While you’re rebuilding onto the replacement you’ve lost redundancy, and the extra work required for the rebuild can trigger a second failure.






Tinkerisk

  #3349550 3-Mar-2025 11:56

I use a 4-drive (main) server with RAID 5 and a 6-drive backup server with ZFS RAIDZ2, plus two off-site backups.







Handle9

  #3349761 3-Mar-2025 22:12

MadEngineer: Yes. When one drive in an array fails, experience will tell you the others are probably not far off. When you’re swapping in the replacement you’ve lost redundancy while it’s rebuilding and the extra work required for that can trigger that failure.

There's no evidence for what you're claiming.

 

Even if every hard drive in an array or pool were the same age with the same number of power-on hours, they would all have different amounts of read and write activity. A typical NAS hard drive (e.g. a WD Red) has an MTBF of 1,000,000 hours.

 

The "extra work" is about as gentle as it gets for a hard drive. A parity rebuild is a sequential read, which doesn't put significant strain on the drive. It's just normal run time.
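The MTBF figure can be turned into a rough number for the risk both sides are arguing about. Treating each drive as an independent exponential failure process (a strong assumption; datasheet MTBF is also optimistic for aged drives, so a real-world annualized failure rate of a few percent is tried as well):

```python
# P(at least one of n surviving drives fails during the rebuild window),
# modelling each drive as an independent exponential with the given MTBF.
import math

def p_any_failure(n_drives: int, hours: float, mtbf_hours: float) -> float:
    rate = 1.0 / mtbf_hours                     # failures per hour
    p_one = 1.0 - math.exp(-rate * hours)       # one drive fails in window
    return 1.0 - (1.0 - p_one) ** n_drives      # at least one of n fails

# 7 surviving drives during a 50-hour rebuild
print(f"MTBF 1M h: {p_any_failure(7, 50, 1_000_000):.4%}")   # ~0.035%
# a pessimistic 5% AFR, i.e. an effective MTBF of 8760 / 0.05 hours
print(f"5% AFR:    {p_any_failure(7, 50, 8760 / 0.05):.4%}") # ~0.2%
```

Under this independence assumption the per-rebuild risk is small either way; the real dispute is whether failures in same-batch, same-age drives are actually independent.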


 
 
 

lxsw20

  #3349764 3-Mar-2025 22:26

SHR1 is basically RAID 5, as I understand it. At 126 TB usable, I suspect an array rebuild after a disk failure will take days if not weeks. I'd go SHR2/RAID 6 personally.


Scott3


  #3349770 3-Mar-2025 23:23

Drives have arrived. Running a full SMART test now, and planning to deploy SHR2.

lxsw20: SHR1 is basically RAID5 as I understand it. […]

The max single-volume size on my NAS is 108 TB, so if I went with SHR1 I would end up with my main volume plus a single disk with no parity. (That's OK, as I could put easily replaceable stuff on that disk.)

But I don't think that materially changes the point, or the rebuild time. Upwards of 50 hours seems typical to rebuild an 18 TB drive.
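As a sanity check, the 50-hour figure can be converted back into the sustained rate it implies:

```python
# Implied sustained throughput for rebuilding a drive of the given size
# in the given number of hours.

def implied_mb_s(capacity_tb: float, hours: float) -> float:
    return capacity_tb * 1e12 / (hours * 3600) / 1e6

print(f"{implied_mb_s(18, 50):.0f} MB/s")  # 100 MB/s
```

100 MB/s is plausible for a large array rebuilding while still serving its normal workload, so the 50-hour estimate is internally consistent.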


insane

  #3350286 5-Mar-2025 08:59

Handle9: There's no evidence for what you're claiming. […] The "extra work" is about as gentle as it gets for a hard drive. […]

I installed, commissioned and managed many SANs from different vendors over a number of years, and this was a thing. The drives wouldn't necessarily hard-fail, but would be marked as predicted-to-fail by the storage controllers. I'm not sure whether it was bad firmware or something else, but it was enough to qualify for a replacement under warranty.

 

We typically ran RAID 6 (double parity) on SATA arrays for bulk storage and RAID 10 on faster SAS ones. There are obviously pros and cons to each, but nobody ever wants to lose customer data and have to rely on backups.

 

 

 

 









