When a normal desktop drive encounters a read/write error it will keep retrying, sometimes for a minute or more, before taking steps to rectify the issue (e.g. remapping a bad sector). But there's a problem: during this process the drive stops responding, so the RAID controller considers it unresponsive and drops it from the RAID array.
Rebooting the NAS or re-seating the drive will make the NAS realise everything is fine, it'll rebuild the RAID and you'll be back working. But in the meantime you've lost hours to the rebuild, and it'll happen all over again at the next read/write error.
Drives designed for RAID, such as the WD Red or Seagate Constellation, have a firmware setting (Western Digital calls it TLER, Seagate calls it ERC) that stops a read/write recovery operation after a few seconds and lets the RAID controller do its job.
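On drives that support it, this timeout is exposed as SCT Error Recovery Control and can be inspected or changed with smartmontools. A hedged sketch (the device name /dev/sdX is a placeholder, not every drive supports the command, and on many drives the setting reverts after a power cycle, so it may need reapplying at boot):

```shell
# Query the current error-recovery timeouts (values are in tenths of a second).
# /dev/sdX is a placeholder for your drive.
smartctl -l scterc /dev/sdX

# Set read and write recovery timeouts to 7.0 seconds (70 deciseconds),
# so the drive gives up quickly and lets the RAID controller handle the error.
smartctl -l scterc,70,70 /dev/sdX
```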
Drives designed to be "Green" or "Low Power" park the drive head after 8 seconds of inactivity. This is fine in a non-RAID scenario where the drive will be accessed briefly and then sit idle for a long time.
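On WD Green drives this 8-second idle timer ("idle3") can often be read or changed with the third-party idle3-tools package. A hedged sketch, assuming that package is installed (the device name is a placeholder, and the drive must be power-cycled before a change takes effect):

```shell
# Read the current idle3 timer on a WD Green drive (/dev/sdX is a placeholder).
idle3ctl -g /dev/sdX

# Disable head parking entirely (use -s <value> to raise the timer instead).
# Power the drive off and on again for the change to take effect.
idle3ctl -d /dev/sdX
```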
Where this really starts to hurt is in a RAID configuration. The problem is twofold:
- Many of these Green drives spin up slowly. By the time the drive has spun back up, the RAID controller may have decided the drive is unresponsive and marked it as bad.
- Each park/unpark of the drive's heads counts as a full "load cycle", and these wear the drive out. A typical desktop drive is rated for 300,000 load cycles.
- In a worst-case scenario where a file was being read/written to the drive every 9 seconds (meaning the drive will park between each read/write), that's 9600 load cycles per day. After 31 days the drive has exceeded its rated load cycles.
- In a more realistic scenario, if a file is read or written (remember, this includes Autosave by applications and any type of database access) once per minute during an 8 hour work day, that's 2400 load cycles per week. After 125 weeks (2.4 years) the drive has exceeded its rated load cycles.
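The two scenarios above can be checked with some back-of-envelope arithmetic (the 300,000 figure is the typical desktop-drive rating quoted earlier):

```python
RATED_CYCLES = 300_000  # typical desktop-drive load/unload rating

# Worst case: one access every 9 seconds, 24/7, so the heads park
# (8-second idle timer) and unpark between every single access.
cycles_per_day = 24 * 60 * 60 // 9          # 9600 load cycles per day
days_to_rating = RATED_CYCLES / cycles_per_day
print(cycles_per_day, days_to_rating)       # 9600 cycles/day, 31.25 days

# Realistic case: one access per minute, 8-hour day, 5-day week.
cycles_per_week = 60 * 8 * 5                # 2400 load cycles per week
weeks_to_rating = RATED_CYCLES / cycles_per_week
print(cycles_per_week, weeks_to_rating, weeks_to_rating / 52)
# 2400 cycles/week, 125 weeks, ~2.4 years
```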
RAID is not a Backup
Always remember that RAID is not a substitute for backups.
- If a second drive fails during a RAID 5 rebuild (more likely than you'd think, given the stress a rebuild puts on the remaining drives) you will lose data.
- RAID doesn't protect you from accidental deletion or overwriting, like a daily backup would.
- Having all data in one place could mean trouble if your building suffers a fire or flood. Consider offsite backups or disaster-proof external hard drives.
Comment by Al, on 7-Apr-2013 12:10
+1 For this advice, especially with Mobotix and NAS!!! Green drives are lucky to last 12 months of 24/7 Read/Write operations!
Comment by Matt Beechey, on 9-Apr-2013 21:01
Great succinct article. Very good advice! Especially the note about the likelihood of a second drive dying during the rebuild process; I've seen this on cheap SATA drive RAID arrays before. I actually wasn't aware of the recovery issues with drives, and that probably explains a few occasions where I've seen exactly what you describe: a drive dropping out of the RAID that on testing appears fine, and later rebuilds OK to run again for months. Keep up the good work!