Need home NAS advice. Synology DS420+ or ?

DeeJayK

Mu-43 Hall of Famer
Joined
Feb 8, 2011
Messages
3,583
Location
Pacific Northwest, USA
Real Name
Keith
1) Twice as much actual data storage for a given number of raw terabytes.
2) Highly fault tolerant of disk failures and, given a reasonable mean time to repair, the probability of a second disk failing within the MTTR window is almost zero.
3) Completely tolerant of any single disk, box, or wall wart failure. Worst case you might have to swap drives between boxes to get access to all your data.
4) Probably some speed advantage due to parallel operation where data access patterns happen to fit. Minor point, though.

In my case, one of my 2-bay boxes is used exclusively for backups from my machine and my wife's machine. I don't back that one up again to the gun safe SATA drive, and I don't really care much if we miss a few days' backups. It's a joint probability thing again: we have gone literally years without needing anything from those backups, so the probability that we'd need something inside the MTTR window for a failed Synology box is near zero. So if my main NAS box failed, I'd just move its disks into the other box and be back on the air in a few minutes.
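To put rough numbers on that joint-probability argument, here is a minimal sketch; the annualized failure rate and the repair window are illustrative assumptions, not measured values:

```python
# Rough numbers for the joint-probability argument above.
# AFR and the repair window are illustrative assumptions, not measured values.
AFR = 0.015            # assumed annualized failure rate per drive (~1.5%)
MTTR_DAYS = 3          # assumed time to source a replacement and rebuild

# Chance that the surviving drive of a mirrored pair also fails
# inside the repair window (treating failures as independent)
p_second_in_window = AFR * (MTTR_DAYS / 365)

# Chance of losing both halves of the mirror in a given year
p_both_in_year = AFR * p_second_in_window

print(f"P(second failure inside the {MTTR_DAYS}-day window): {p_second_in_window:.6f}")
print(f"P(losing the mirror in a given year): {p_both_in_year:.8f}")
```

Even with pessimistic inputs, the window is so short that the combined probability stays tiny, which is the point being made above.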
I get your last three points, but I still don't understand your calculations on #1.

In a 2-bay unit, if you want redundancy your only choice is RAID 1/ SHR-1, which means that one drive is devoted to redundancy. Meaning that, assuming both drives are the same capacity, you'll get half your "raw terabytes" in actual storage. If I use two 4TB drives, I'll end up with 4TB of space.

Whereas with a 4-bay unit, I have the option of running RAID 5/ SHR-1, which still means that one drive is devoted to redundancy, or RAID 6/ SHR-2, where two drives are used for redundancy. If I choose RAID 5 I can get 8TB of space using three 4TB drives or 12TB of space by using four 4TB drives. So this is more efficient once I go past two drives. If I choose RAID 6/ SHR-2 (the belt and suspenders approach) I'd get 4TB of usable space using three 4TB drives or 8TB of space using four.

Only in the three drive RAID 6 example is this less efficient than the 2-bay example — but that comes with the benefit of double redundancy. And I've never suggested I would implement a three drive RAID 6 array. Beyond that specific highly inefficient use case I see no case where a 2-bay solution can be considered more efficient (in terms of usable vs. raw space) than a 4-bay solution. And once you get beyond 4-bays the benefits become even more obvious.
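To make the comparison concrete, here is a minimal sketch of the usable-vs-raw arithmetic, assuming equal-size 4TB drives throughout (with equal drives, SHR-1 is equivalent to RAID 5 and SHR-2 to RAID 6 for capacity purposes):

```python
# Usable capacity for the configurations discussed above,
# assuming all drives are the same size (4 TB here).
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Usable capacity in TB for a given RAID level with equal-size drives."""
    if level == "RAID1":    # mirror: one drive's worth of space
        return size_tb
    if level == "RAID5":    # one drive's worth of parity
        return (drives - 1) * size_tb
    if level == "RAID6":    # two drives' worth of parity
        return (drives - 2) * size_tb
    if level == "RAID10":   # striped mirrors: half the raw space
        return drives * size_tb / 2
    raise ValueError(f"unknown level: {level}")

for level, drives in [("RAID1", 2), ("RAID5", 3), ("RAID5", 4),
                      ("RAID6", 3), ("RAID6", 4), ("RAID10", 4)]:
    print(f"{level} with {drives} x 4TB drives -> {usable_tb(level, drives, 4):g} TB usable")
```

That reproduces the figures above: 4TB for the 2-bay mirror, 8 or 12TB for RAID 5, 4 or 8TB for RAID 6, and 8TB for a four-drive RAID 10.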

All that said, your solution clearly is working for you, and that's great. I understand as well that it has some other benefits in terms of non-drive-related failures. But I just don't see how you can contend that a dual 2-bay NAS solution can be considered to have "twice as much actual data storage for a given number of raw terabytes" compared to a single 4-bay solution with similar redundancy.

- K
 

AmritR

Mu-43 Regular
Joined
Jun 18, 2017
Messages
148
Location
Alkmaar
That's precisely what I'm looking for: a single hub to consolidate and store ALL of my various bits of data and digital detritus. The thought of editing photos in a coffee shop is completely unappealing, but I'm fine with keeping local copies of some of my most recent images on my laptop. I just don't want ALL of my photos on the laptop or even on an external USB drive.

I appreciate the Hitachi drive rec. I know they were for a time considered the best, but since they've been absorbed into Western Digital, it seems like the HGST brand has all but disappeared. Seagate IronWolf/ IronWolf Pro seem to be the most widely recommended NAS drives these days.

Finally, regarding your recommendation on the file system, I understand that BTRFS is the more "modern" option and provides some features that EXT4 doesn't. But are there any downsides to using BTRFS aside from a theoretical (at least) performance hit?

- K
When it comes to hard drives, the Backblaze annual report is an interesting read:
https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/
(I bought the 4TB HGST’s at the time)

There are still HGST ‘Ultrastar’ drives available, now with a WD label. The 12TB ‘HUH721212ALN604’ from the Backblaze report, for example, is still available. Expensive drive though, and probably rather loud.
And I did not check if it’s on Synology’s compatibility list.
But with one exception in the list, most drives seem to be excellent.

About BTRFS, I take Synology's word for it. They have implemented BTRFS with modifications, a bit of a hybrid I guess. When it comes to expertise in storage, NAS hardware and filesystems, my assumption is that Synology is way more knowledgeable than anything I can come up with.
It works for me, and I haven't seen any issues reported with it, or with Synology in general. They really seem to do very well.

Still good to have an offline USB backup plan though.
 

oldracer

Mu-43 Hall of Famer
Joined
Oct 1, 2010
Messages
2,747
Location
USA
... I just don't see how you can contend that a dual 2-bay NAS solution can be considered to have "twice as much actual data storage for a given number of raw terabytes" compared to a single 4-bay solution with similar redundancy. ...
You're right. I was comparing to RAID 10, which I understood to be your preferred situation. There it's four drives to do the work of one, rather than two drives.
 

Armoured

Mu-43 Regular
Joined
May 5, 2015
Messages
167
I don't have any personal experience with BTRFS; all I know is what I've learned through research. From that I was able to glean the advantages BTRFS offers. I just saw some chatter online that folks had experienced (or perceived, at least) performance penalties from using BTRFS in certain use cases. That said, none of those use cases sounded similar to what I envision mine to be. I was just curious if you had encountered any downsides of BTRFS.

I have not encountered any downsides to BTRFS, although my use case would surely not show any performance penalties either. What critical data I have on mine is either synced to several locations or well backed up. If it were to fail completely, what I'd miss most would be the services the unit provides, but that would only be an inconvenience.
It's been rock solid for about four years, running 24/7, in an environment that had frequent power failures for a while: one year it was a few times a week on average (sometimes more) before I finally put a UPS in last summer. I've swapped out multiple disks while expanding, and used some old disks for testing at the beginning (i.e. I've had disks fail: pop in a replacement and forget about it). I think I've had power outages during a rebuild with no problem. The only issues I've ever had were related to insufficient RAM (since fixed). I trust the data scrubbing and similar functionality.
I can't directly compare to ext4, of course, since I'm not using it. But I've been following reports and I have seen almost nothing to suggest that BTRFS is an issue; if anything the contrary, most praise Synology's implementation as solid. It's one of the reasons I'm okay with non-enterprise disks (for my use case, of course).
YMMV. Apart from power outages, I'm not claiming I've truly stress tested it. But I'm more confident in this gear than almost anything else I use. The software's quite good.
 

Armoured

Mu-43 Regular
Joined
May 5, 2015
Messages
167
When it comes to hard drives, the Backblaze annual report is an interesting read:
https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/
(I bought the 4TB HGST’s at the time)

It's worth noting that, unless something's changed, they have always focused on using (cheap) consumer drives, not enterprise drives. The extra cost just isn't worth it (for them): they plan explicitly on replacing failed drives, rely on data redundancy, etc. They're not ignoring the risk, just using the savings to mitigate the risk in a different way.
Not for everyone or every use case, of course - but an interesting take.
 

Armoured

Mu-43 Regular
Joined
May 5, 2015
Messages
167
Obviously the chances of two drives in a four-drive array failing simultaneously are remote, but he also makes the point that rebuilding an array after the loss of a disk presents its own risk, in that the rebuilding process itself puts significant stress on the disks.

I'm not going to argue about the math or the spreadsheet. I agree with the general point, although I've never seen real-world numbers to back it up, just the narrative and extrapolations like the above ("spreadsheeturbation"); not that most of them claim a very high additional risk due to this factor either.
I accept that additional risk; that's what my backups are for. I think it's (very) small, but I'm not running Citibank in my basement either.
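For anyone curious what the back-of-the-envelope version of that rebuild-risk argument looks like, here is a minimal sketch; the URE spec and the amount of data read during the rebuild are assumptions (1 error per 10^14 bits is a common consumer-drive spec-sheet figure, while NAS/enterprise drives are usually rated at 10^15):

```python
import math

# Back-of-the-envelope odds of hitting an unrecoverable read error (URE)
# while rebuilding a degraded array. Both inputs below are assumptions.
URE_PER_BIT = 1e-14    # assumed spec: one unrecoverable error per 1e14 bits read
DATA_READ_TB = 12      # assumed data read from the surviving drives during rebuild

bits_read = DATA_READ_TB * 1e12 * 8
# Poisson approximation: P(at least one URE) = 1 - exp(-rate * bits)
p_ure = 1 - math.exp(-URE_PER_BIT * bits_read)
print(f"P(at least one URE during the rebuild): {p_ure:.1%}")
```

With a drive rated at 10^15 the same calculation drops to roughly 9%, which is part of why the narrative sounds scarier for big consumer drives than for NAS-rated ones.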
 

AmritR

Mu-43 Regular
Joined
Jun 18, 2017
Messages
148
Location
Alkmaar
It's worth noting that, unless something's changed, they have always focused on using (cheap) consumer drives, not enterprise drives. The extra cost just isn't worth it (for them): they plan explicitly on replacing failed drives, rely on data redundancy, etc. They're not ignoring the risk, just using the savings to mitigate the risk in a different way.
Not for everyone or every use case, of course - but an interesting take.
Yes they did as far as I know, but there are also enterprise drives in there. The HGST 12TB ‘HUH721212ALN604’ I mentioned earlier is an enterprise disk; they have a lot of those, with good results.

There's an interesting comment from Backblaze's Andy Klein in the comments:

One thought is we are at the sweet spot in the lifecycles of the various drives. As drives age they tend to get more stable as a population. The troublemakers have failed and what is left are reliable drives. The smaller drives are older (in general) and fall into that category. On the other side, the newer drives (the larger ones for the most part) are not experiencing "infant mortality" to the degree we've seen in the past. This probably has to do with better QA/testing by the manufacturers and improvements in our qualification process. There is also a notion that we aren't moving drives around as much, especially over this past year. That is, we're better at allocating space within the data center. No one factor seems to be the driver, just a lot of little things. It will be interesting to see how this works out over the course of the next year or so.
 

John King

Member of SOFA
Joined
Apr 20, 2020
Messages
3,079
Location
Beaumaris, Melbourne, Australia
Real Name
John ...
Enterprise level drives will usually last "forever" in any kind of domestic environment. I have personally never had one fail, or even develop bad sectors.

One argument against using a proprietary file system is that the disk cannot be read or repaired on a normal workstation. For that reason alone, I have always used NTFS. It is also a redundant and fully recoverable FS.

For files smaller than the block size, it stores the file contents in the $MFT, so it is extremely fast for small file access (e.g. .xmp files).
 

DeeJayK

Mu-43 Hall of Famer
Joined
Feb 8, 2011
Messages
3,583
Location
Pacific Northwest, USA
Real Name
Keith
Thanks so much to everyone who chimed in on this thread. I wanted to let you all know what decision I made. Sorry this post is so long; feel free to skip down to the TL/DR at the bottom.

So I bit the bullet and bought the DS920+ along with four Seagate IronWolf Pro 4TB (7200 rpm / 128MB cache) drives. It wasn't cheap, but while I'm still a cheapskate, I'm coming to embrace the "buy once, cry once" philosophy. The unit was out of stock everywhere, but I went ahead and placed the order. Although the retailer was quoting two weeks on the backorder, it actually arrived at my door in less than a week.

I got everything unpacked and the drives installed. It really couldn't have been much simpler. I don't think I even cracked the manual or the "quick setup" pamphlet. Synology uses tool-less drive sleds and while they work fine, I was a bit concerned about snapping off the pins in the plastic bits that retain the drives. I didn't break any, but I just felt like I easily could. Maybe they're more durable than they look, but the feeling of fragility was the lone negative I can report about the physical NAS unit.

Once I got it turned on, I downloaded an app on my smartphone (via a QR code on the box) that quickly located the device on the network. From there, I connected to it from my laptop and started walking through the configuration. Synology does a good job of guiding the user through this process. While all of the options could be overwhelming for a tech neophyte, answers to most questions one might have were readily available. I wouldn't recommend this process for someone like my 70+ year old mom, but that's not really the target market. Anyone who thinks they need a NAS should feel confident in getting this thing set up.

I initially went with the standard, recommended configuration, which was an SHR-1 (one drive of redundancy, similar to RAID 5) array. This resulted in almost 11TB of usable space (Synology subtracts roughly 10% of the space from each drive to ensure you don't run into issues caused by completely filling the drive). This seemed like a LOT of space, and my paranoia about drive failure (along with my inclination to tinker) kicked in, and I decided I would feel better with a smaller array with more redundancy. I considered Synology's SHR-2 (similar to RAID 6), but decided to go with RAID 10 for the (theoretical, at least) performance gains. The tradeoff between these two options is that with SHR-2 I could conceivably swap out two of the four disks for larger drives and Synology would be able to make use of that extra space. With RAID 10, if I ever want to increase the size of the array down the road, I'd have to swap out all four drives, as the size of the array is limited by that of the smallest drive. I figure I'll cross that bridge when I come to it and reap the performance benefits in the interim. This gives me just over 7TB of space with essentially double redundancy.
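As a sanity check on those numbers, a minimal sketch (assuming a "4TB" drive holds 4 x 10^12 bytes and that usable space is reported in binary, TiB-style units):

```python
# Sanity check on the reported capacities for four 4 TB drives.
# Assumes "4 TB" = 4 x 10^12 bytes and that usable space is shown in binary units.
DRIVE_TB = 4
TIB_PER_TB = 1e12 / 2**40          # ~0.909: decimal TB -> binary TiB

per_drive = DRIVE_TB * TIB_PER_TB  # ~3.64 as reported by the NAS
print(f"SHR-1 over 4 drives (3 drives usable): ~{3 * per_drive:.1f} TB reported")
print(f"RAID 10 over 4 drives (2 drives usable): ~{2 * per_drive:.1f} TB reported")
```

That lands on roughly 10.9 and 7.3, in line with the "almost 11TB" and "just over 7TB" figures above.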

I didn't want to start immediately offloading all of my data onto the device, as in my experience if hard drives are going to fail, they often do so right away. Because of this I decided to give the device a week or two for the drives to "burn in" before I started relying on it. This decision turned out to be prescient, as yesterday morning (about a week and a half after I got the NAS set up) I woke up to an email alert that one of the drives had failed the scheduled nightly S.M.A.R.T. testing. The email recommended that I run a more thorough diagnostic on that drive, which I did. Within a few minutes this came back with an assessment that the drive was "severely damaged and is failing". Oddly, Seagate's own "IronWolf Health Management" scan (which they tout as being more thorough than the standard S.M.A.R.T. diagnostic) still reported the health status of the drive as "normal". I decided that I would need to replace the drive, and to shut down the NAS to avoid a second drive failure while it wasn't at full redundancy.

At this point I contacted Seagate to file a warranty claim. The process to do so online was dead simple and very quick. I had an RMA and a shipping label printed within minutes. The only catch was that Seagate said they didn't have any similar drives in stock to facilitate a "quick replacement" whereby they'd ship a new drive first and I'd then return the failing drive. So I'd have to ship my drive (at my expense) and then await a replacement. This meant I'd be down for at least a week and probably more. While this wouldn't have been a major issue since I hadn't really put the unit into service yet, I decided I'd just order a new drive. This would allow me to keep the warranty replacement as a cold spare, which is something I had originally considered having anyway.

I also noticed in this process that Seagate offers something called "Rescue Data Recovery" protection for 3 years on their IronWolf NAS drives. The IronWolf Pro drives are also covered by a 5 year warranty (vs. 3 years for the regular IronWolf line). I'm not sure exactly what this Rescue Data Recovery coverage consists of, but I would imagine that it would cover (or help to cover) the cost of data recovery efforts in the event of a catastrophic failure. Since I had redundancy with the NAS (and also hadn't really put much data on the volume anyway), I didn't need to take advantage of this service, but it's nice to know it's there just in case. I'll update here if the warranty claim goes awry in some way.

The new drive was delivered the same day (thanks, Uncle Jeff) and I popped it into the empty slot. At this point I wasn't certain whether the NAS would self-heal, but a few persistent (and annoying) warning beeps from the unit let me know it wasn't completely happy. I logged into the control panel and found that it had identified the new drive, but hadn't immediately started using it. I was prompted to add it to the volume, and once I did this the volume began the process of testing the new drive and then rebuilding itself. In retrospect it makes sense that this process isn't automatic, since any existing data on the new drive gets wiped in the process; some user intervention to confirm that is a reasonable safeguard. At this point the NAS is back up to full strength.

TL/DR:
I got the DS920+ and am pretty happy with the decision (so far). One of the four Seagate NAS drives failed within the first two weeks, but I was quickly notified of the failure and able to restore the NAS. The initial return process on the drive was easy, although the warranty claim is going to take longer than I'd like. This stumble out of the gate isn't ideal, but it at least proves that the health monitoring built into the NAS is working.

- K

UPDATE: I received the warranty replacement drive from Seagate on Saturday, which was just over a week after I sent off the defective drive. The drive was replaced at no cost other than the ~$10 it cost me to ship the defective drive to them. I haven't installed the replacement drive, as my intention is just to keep it on hand as a cold spare should any of the other drives fail.

I haven't experienced any other issues with the NAS in the interim. At this point I'm feeling pretty confident that the four drives I have are going to be stable.
 

Dinobe

Mu-43 Veteran
Joined
Nov 23, 2017
Messages
420
Location
Belgium
FWIW
I'm running a 10-year-old NetGear ReadyNAS 6-bay NAS with 4 Seagate RED drives in a RAID 5 config. The warranty and support are long gone, but the machine is still going strong.

Just some observations from my side:
1. The ReadyNAS uses standard computer hardware. I replaced the power supply and fan in the past.
2. It's an Intel Atom based machine, so while there is no official support for it anymore, I can still run the latest and greatest ReadyNAS OS on it (a Debian-based Linux distro) and it is still updated regularly.
3. I'm running a RAID 5 config, but I wouldn't do this anymore and would simply go with a RAID 1 mirror.
4. All my Seagate RED drives are still running fine. Never had a failure in years of use.
5. The "ReadyNAS" app store is so-so; most apps are not really up to date.
6. While you can install unofficial apps on this NAS (it's Linux, after all), it's a good way to break your system.
 
