Geekzone: technology news, blogs, forums


dafman

4054 posts

Uber Geek
+1 received by user: 2652

Trusted

#272337 20-Jun-2020 14:08

I've been running a Synology DS212j for about 8 years with two WD Red 3TB drives in RAID 1.


In addition to RAID, I separately back up to an external USB drive.


I'm now getting a message that drive status is abnormal and that one of my drives is failing.


Because I am running RAID 1, my assumption is that when/if the drive fully fails, the remaining drive should not be affected, correct?


Should I remove the failing drive now and just run a single drive with a backup, or keep using the failing drive until it dies?


My intention is to keep using the NAS with a single drive (plus backup) for the foreseeable future. I would like a new Synology, but at around $1,400 with new drives I am not sure I will get enough use to warrant that amount of expenditure.
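For anyone wanting to see what's behind DSM's "abnormal" warning, it is typically driven by SMART attributes 5 (Reallocated_Sector_Ct) and 197 (Current_Pending_Sector). A minimal sketch, assuming SSH is enabled on the NAS (Control Panel > Terminal & SNMP); the here-string is a hypothetical capture of `sudo smartctl -A /dev/sda` so the parsing can be shown on its own:

```shell
# Hypothetical smartctl attribute output; on the NAS you would pipe the
# real command instead of this sample.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   140    Pre-fail  Always       -       128
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       16'

# A non-zero raw value (last column) on either attribute is the classic
# early sign that a drive is on the way out.
echo "$sample" | awk '$10 + 0 > 0 { print $2 " raw=" $10 }'
```

Zero on both attributes doesn't guarantee health, but rising raw values on either one are a good reason to swap the drive sooner rather than later.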


 


SirHumphreyAppleby
2938 posts

Uber Geek
+1 received by user: 1860


  #2508740 20-Jun-2020 14:12

RAID 1 will provide redundancy should the other drive fail, but why risk it?

 

It's entirely up to you whether to wait or not. At the first hint of a problem, I replace my drives. That includes buying a new drive while I wait for a warranty replacement. If the replacement drive isn't completely new, I also treat it as suspect and never use it for anything critical.




richms
29098 posts

Uber Geek
+1 received by user: 10208

Trusted
Lifetime subscriber

  #2508743 20-Jun-2020 14:22

What really annoys me with all these RAID 1 boxes is that there is no way to add an additional drive without removing the failing one. At the point you pull something that is failing, you have lost all redundancy, and any failure on the other drive will mean data loss. I will generally make a new array at the time of a failure and then duplicate everything over to it; once it's there and the critical stuff has checked out OK, I delete the old array, retire the failed drive, and the non-failed one goes in a pile of drives to use if needed.





Richard rich.ms

jpoc
1043 posts

Uber Geek
+1 received by user: 289


  #2509049 21-Jun-2020 09:46

Do you know which model of WD 3TB Red drive you are using?

 

 




dafman

4054 posts

Uber Geek
+1 received by user: 2652

Trusted

  #2509054 21-Jun-2020 09:55

jpoc:

 

Do you know which model of WD 3TB Red drive you are using?

 

 

No, sorry. All I remember is they are WD Red (I haven't touched them for close on 8 years). 


cyril7
9073 posts

Uber Geek
+1 received by user: 2499

ID Verified
Trusted
Subscriber

  #2509059 21-Jun-2020 10:20

dafman:

 

jpoc:

 

Do you know which model of WD 3TB Red drive you are using?

 

 

No, sorry. All I remember is they are WD Red (I haven't touched them for close on 8 years). 

 

 

In DSM, under Information > Storage, it will list the models of the drives installed.

 

Cyril
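If SSH is enabled, the same information is available from the command line via smartctl's identity block. A sketch; the here-string is a hypothetical capture of `sudo smartctl -i /dev/sda` from a WD Red, standing in for the live command:

```shell
# Hypothetical identity output from a WD Red; pipe the real smartctl
# command on the NAS instead of this sample.
info='Model Family:     Western Digital Red
Device Model:     WDC WD30EFRX-68AX9N0
Firmware Version: 80.00A80'

# Pull out just the model string.
echo "$info" | awk -F': +' '/Device Model/ { print $2 }'
```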


dafman

4054 posts

Uber Geek
+1 received by user: 2652

Trusted

  #2509064 21-Jun-2020 10:40

cyril7:

 

In DSM under Information > Storage it will list the models of drive installed.

 

Cyril

 

 

Ah, this is it ... WD30EFRX-68AX9N0 


 
 
 
 

afe66
3181 posts

Uber Geek
+1 received by user: 1678

Lifetime subscriber

  #2509136 21-Jun-2020 11:43

I've had a couple of drives fail on my DS414+ over the years, running the default RAID mirroring.

Both times I just backed up to an external drive, pulled the failing drive, and inserted a new drive of the same size. Booted, clicked 'Repair volume' (or similar), and away it goes. Pretty painless.

Earlier this year the 3TB drives were 85% full, so I bought 6TB ones. Backed up, shut down, pulled one of the small drives, inserted the 6TB, and repaired the volume as above. Then I shut down, removed the other 3TB drive, inserted the second 6TB, and repaired again. I still have those 3TB drives in the garage; they act as a snapshot backup.
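For the curious, the repair described above can also be watched from the shell while it runs. A sketch assuming SSH access; the here-string is a hypothetical `/proc/mdstat` capture from a two-bay mirror mid-rebuild, standing in for `cat /proc/mdstat` on the NAS:

```shell
# Hypothetical mdstat contents during a RAID 1 resync; [2/1] [U_] means
# one of the two members is up while the other rebuilds.
mdstat='md2 : active raid1 sda3[0] sdb3[2]
      2925444544 blocks super 1.2 [2/1] [U_]
      [=====>...............]  recovery = 28.7% (839641216/2925444544) finish=312.4min speed=104857K/sec'

# Pick out just the progress figure.
echo "$mdstat" | grep -o 'recovery = [0-9.]*%'
```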

jpoc
1043 posts

Uber Geek
+1 received by user: 289


  #2509409 22-Jun-2020 07:10

Given the symptoms that you report and the age of the drives, I would make some comments:

 

You have two drives of identical age and very similar use profile.

 

One is beginning to fail and the nature of the failure appears to be related to the age and the use profile of the drive.

 

You should consider that the condition of the second drive is best described as 'about to show signs of failure.'

 

 


1101
3141 posts

Uber Geek
+1 received by user: 1143


  #2509479 22-Jun-2020 10:02

dafman:

 

Because I am running Raid 1, my assumption is that when/if the drive fully fails, the remaining drive should not be affected, correct?

 

 

In theory, yes.

In real life, not always. You can be left with two corrupted drives when one fails (I've seen it happen).

You are backing up to USB, so unless the data is so critical that you can't take any risks .......


concordnz
492 posts

Ultimate Geek
+1 received by user: 277

Trusted
EMT (R)

  #2509845 22-Jun-2020 15:42

First off, good on you for doing backups! So many people have the false idea that RAID is a backup (which it definitely is NOT).

I agree with jpoc.

You have had a great run out of those drives (I personally replace drives around the 5-year mark to reduce my chance of failures).

I'd be backing that data up via USB ASAP and replacing both of those old 3TB drives with 1 x 4TB (best cost/size at the moment).

Leaving a failing drive thrashing adds significant load to your NAS (error correction/checking, bad block remapping, etc.) and, from what I've seen in the past, opens you up to a randomly corrupted/failed RAID.

dafman

4054 posts

Uber Geek
+1 received by user: 2652

Trusted

  #2509862 22-Jun-2020 16:06

concordnz: First off, good on you for doing backups! So many people have the false idea that RAID is a backup (which it definitely is NOT).

I agree with jpoc.

You have had a great run out of those drives (I personally replace drives around the 5-year mark to reduce my chance of failures).

I'd be backing that data up via USB ASAP and replacing both of those old 3TB drives with 1 x 4TB (best cost/size at the moment).

Leaving a failing drive thrashing adds significant load to your NAS (error correction/checking, bad block remapping, etc.) and, from what I've seen in the past, opens you up to a randomly corrupted/failed RAID.

 

Thanks everyone for feedback.

 

One option I am thinking of is to purchase a 6TB drive to replace the failing drive, i.e. continue running the RAID with 1 x 6TB and 1 x 3TB. Or, for efficiency, would I be best to ditch both 3TB drives and just run the single 6TB on its own and continue with regular backups?

 

My thinking is that the 6TB will eventually go into a new Synology NAS in due course. I'm not sure if the DS212j can take a 6TB drive; official specs say no, but user feedback on the net seems to say it's OK.

 

A more general question: I'm beginning to question the benefit of RAID in addition to regular backups. If I back up regularly, is the argument for maintaining a RAID array less compelling?


 
 
 
 

SirHumphreyAppleby
2938 posts

Uber Geek
+1 received by user: 1860


  #2509885 22-Jun-2020 16:32

dafman:

 

My thinking is that the 6TB will eventually go into a new Synology NAS in due course. I'm not sure if the DS 212j can take a 6TB drive, official specs say no but user feedback on the net seems to say its ok.

 

 

It should be fine in most cases. Depending on the NAS model and hard drive, you may need to modify the retention bracket, due to larger drives dropping some of the screw holes to make room for more platters.

 

As for the configuration, I'd keep the RAID and regular backups.

 

 


richms
29098 posts

Uber Geek
+1 received by user: 10208

Trusted
Lifetime subscriber

  #2509926 22-Jun-2020 18:19

RAID lets you keep going through a drive failure; everything doesn't grind to a halt until you get new hardware and restore your backup. If you don't need to ensure continuous access to the data, then one drive and two backups might be a better use of three drives. It depends on how often the data changes, how much you would lose, and how much you care about that loss.

For video and pictures I take, I keep the SD cards around until the lot has been backed up off the PC to CrashPlan and Google Drive (for photos). So if the PC dies in the meantime, no biggie, because I still have the SD cards.




Richard rich.ms

dafman

4054 posts

Uber Geek
+1 received by user: 2652

Trusted

  #2515694 1-Jul-2020 15:01

OK, I've decided to remove the failing drive and keep the NAS running with a single drive plus external backup.

 

So, I want to move from: '2 disks, Raid 1' to '1 disk, Raid 0'

 

Can anyone advise on the process to achieve this? I've had a look online but can't see where someone has gone from two drives to one; all the examples I see are for replacing drives and maintaining the same number.

 

This is a short/medium-term option while I consider the cost benefit of purchasing a new DS 220 when it is released in NZ.


Mark
1653 posts

Uber Geek
+1 received by user: 555


  #2517355 4-Jul-2020 21:11

dafman:

 

OK, I've decided to remove the failing drive and keep the NAS running with a single drive plus external backup.

 

So, I want to move from: '2 disks, Raid 1' to '1 disk, Raid 0'

 

Can anyone advise on the process to achieve this? I've had a look online but can't see where someone has gone from two drives to one; all the examples I see are for replacing drives and maintaining the same number.

 

This is a short/medium-term option while I consider the cost benefit of purchasing a new DS 220 when it is released in NZ.

 

 

Just do your backup to the external drive and don't do anything like pulling drives out; just keep an eye on it while you work out whether to buy a replacement NAS. Fiddling will likely result in worse things happening :-)

 

Also, RAID 0 is not "RAID with a single drive"; it's a totally different layout, provides no resiliency at all, and needs a minimum of two drives to create anyway.
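For what it's worth, a single-drive "Basic" volume on Synology is typically still created as a one-member md mirror under the hood, not RAID 0 (an assumption based on common DSM setups; check your own box to confirm). A sketch with a hypothetical `/proc/mdstat` capture after the second drive is gone:

```shell
# Hypothetical mdstat contents for a single-drive volume; the personality
# on the first line shows the layout md actually uses.
mdstat='md2 : active raid1 sda3[0]
      2925444544 blocks super 1.2 [1/1] [U]'

echo "$mdstat" | head -n 1 | awk '{ print $4 }'
```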









