Synology DS1815+ 8-bay Intel Rangeley SMB NAS Review
by Ganesh T S on November 18, 2014 6:30 AM EST
Introduction and Testbed Setup
Synology started the roll-out of their SMB-targeted NAS units based on Intel's latest Atom platform (Rangeley) in September 2014. We have already looked at the 4-bay DS415+ in detail. Today, the 5-bay DS1515+ and 8-bay DS1815+ versions are being officially launched. As a recap, the Rangeley-based NAS units finally bring hardware-accelerated encryption to DSM in the desktop tower form factor. Rangeley / Avoton also provides a host of other advantages pertaining to the storage subsystem. The SoC used in the DS1815+ (Intel Atom C2538) is the same as the one in the DS415+, and the amount of RAM is also the same (2 GB). The difference lies in the number of expansion bays that can be attached to the main unit (2x 5-bay DX513 for the DS1815+ vs. 1x 5-bay DX513 for the DS415+, with some volume expansion restrictions on the latter). The RAM in the DS1815+ can be upgraded (one free slot can accommodate an extra 4 GB). Unlike the 100W external adapter of the DS415+, the DS1815+ has an internal 250W PSU.
The I/O port configuration of the DS1815+ is carried over from the DS1813+. Compared to the DS1812+ that we reviewed last year, the DS1813+ (and, by extension, the DS1815+ that we are looking at today) added two extra network ports. Eight drive bays and four GbE network links are backed by Synology's embedded Linux OS, DSM, which carries a host of virtualization certifications. All in all, the new Atom platform in the DS1815+ seems to present a compelling case over Synology's previous 8-bay units based on the older Atoms. The specifications of the DS1815+ are provided in the table below.
|Synology DS1815+ Specifications||
|---|---|
|Processor|Intel Atom C2538 (4C/4T Silvermont x86 cores @ 2.40 GHz)|
|RAM|2 GB DDR3 (one free slot accepts up to an additional 4 GB)|
|Drive Bays|8x 3.5"/2.5" SATA II/III HDD/SSD (hot-swappable)|
|Network Links|4x 1 GbE|
|External I/O Peripherals|4x USB 3.0, 2x eSATA|
|VGA / Display Out|None|
|Full Specifications Link|Synology DS1815+ Specifications|
|Price|US $1050 (Amazon)|
In the rest of the review, we will take a look at the Intel Rangeley platform and how the Synology DS1815+ takes advantage of it. This is followed by benchmark numbers for both single and multi-client scenarios across a number of different client platforms as well as access protocols. We have a separate section devoted to the performance of the DS1815+ with encrypted shared folders. Prior to all that, we will take a look at our testbed setup and testing methodology.
Testbed Setup and Testing Methodology
The Synology DS1815+ can take up to eight drives. Users can opt for JBOD, RAID 0, RAID 1, RAID 5, RAID 6, or RAID 10 configurations. We expect typical usage to involve multiple volumes in a RAID-5 or RAID-6 disk group. However, to keep things consistent across different NAS units, we benchmarked an SHR volume with single-disk redundancy (RAID-5). Eight Western Digital WD4000FYYZ RE drives were used as the test disks. Our testbed configuration is outlined below.
|AnandTech NAS Testbed Configuration||
|---|---|
|Motherboard|Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB|
|CPU|2 x Intel Xeon E5-2630L|
|Coolers|2 x Dynatron R17|
|Memory|G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30|
|OS Drive|OCZ Technology Vertex 4 128GB|
|Secondary Drive|OCZ Technology Vertex 4 128GB|
|Tertiary Drive|OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD)|
|Other Drives|12 x OCZ Technology Vertex 4 64GB (offline in the host OS)|
|Network Cards|6 x Intel ESA I-340 Quad-GbE Port Network Adapter|
|Chassis|SilverStoneTek Raven RV03|
|PSU|SilverStoneTek Strider Plus Gold Evolution 850W|
|OS|Windows Server 2008 R2|
|Network Switch|Netgear ProSafe GSM7352S-200|
The above testbed runs 25 Windows 7 VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 25 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation.
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
- Thanks to Western Digital for the eight WD RE hard drives (WD4000FYYZ) to use in the NAS under test.
DigitalFreak - Wednesday, November 19, 2014
Not everyone is poor like you.
chaos215bar2 - Wednesday, November 19, 2014
I see a lot of comments like this, and I can only imagine you're assuming that:
1) The NAS is only being used as a file server with the most basic setup.
2) Updates are not an issue.
I agree that building a custom NAS box is a fun project and can save a lot of money. However, not everyone wants to deal with the complications that can arise from setting up multiple services and keeping them up-to-date.
Say you want an email server. Installing and fully configuring Synology's Mail Station takes no more than 10 minutes. If you want webmail to go with it, just install a second package. There's almost zero setup required. Sure, you'll have more options on a generic Linux installation, but setting up a fully functional and securely configured email system takes quite a lot of research if you're only doing it one time.
Of course, all of that time spent properly configuring your custom-built server is worthless if you don't keep it up to date. As of DSM 5.1, Synology will automatically install either all updates or just security updates, and you know that the updated components have been tested and work together. I have never had a problem with a service going down due to a Synology update. With full Linux distributions, not so much. Most of the time updates work fine, but I would never trust something as critical as my primary email server to automatic updates.
shodanshok - Friday, November 21, 2014
Hi,
while I agree on the simplicity argument (installing Postfix, Dovecot, and Roundcube surely requires some time), Red Hat and CentOS distros are very good from an update standpoint. I have had very few problems with the many servers (100+) I have administered over the past years, even with automatic updates enabled. Moreover, with the right yum plugin you can install security updates only, if you want.
Nowadays, with a strong backup strategy in place, I feel confident enough to enable yum auto-update on all servers except the ones used as hypervisors (I did have a single hypervisor with auto-update enabled for testing purposes, and it ran without a problem).
Sadly, with Debian and Ubuntu LTS distros I have had some more problems with updates, but perhaps that is just an unfortunate coincidence...
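For reference, the security-only yum workflow shodanshok alludes to looks roughly like this on RHEL/CentOS 6-era systems (a sketch: on RHEL the security errata metadata is shipped in the repos, while stock CentOS base repos may not carry it):

```shell
# Install the security plugin (a separate package in this era;
# the functionality was later folded into yum/dnf itself)
yum install -y yum-plugin-security

# List pending security errata without applying anything
yum updateinfo list security

# Apply security updates only, leaving all other updates untouched
yum --security update -y
```

Pointing a cron job at that last command gives the "security updates only" auto-update mode mentioned above.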
shodanshok - Tuesday, November 18, 2014
I agree with people saying that units like this are primarily targeted at users who want a clean and simple "off-the-shelf" experience. With units like the one reviewed, you basically insert the disks, power on the device, and follow one or two wizards.
That said, a custom-built NAS has a vastly better performance/price ratio. One of our customers bought a PowerEdge R515 (6-core Piledriver @ 3.1+ GHz) with 16 GB of ECC RAM, a PERC H700 RAID card with 512 MB of NVRAM cache, and a 3-year on-site warranty. Total price: about 1600 euros (+ VAT).
He then installed 8x 2 TB WD RE drives, and I configured them as an 11+ TB RAID6 volume with thin LVM volumes and an XFS filesystem. It serves both as a backup server (deduplicated via hard links and rsync) and as big storage for non-critical things (e.g. personal files).
Our customer is VERY satisfied with how it works, but hey - face reality: a skilled person did all the setup work for him, and (obviously) he paid us...
Beany2013 - Thursday, November 27, 2014
This is about the most sensible comment on this entire review.
If his budget had been halved and he couldn't necessarily afford support from you on a regular basis (or at least wanted his hourly callout charge lowered), I'm guessing you'd be more tempted to push him in the direction of a device like this, though?
(I've been there, done that, and swapped out more than a few Windows SBS/standard+exchange boxes for Syno units over the last few years for this very reason, natch - the Windows license costs themselves pretty much pay for one of these)
eximius - Thursday, December 11, 2014
I have to agree with the previous two comments.
I have an 1813+ sitting next to my heavily modded (aka needed to use a Dremel) case with 15 hot-swap disks (currently Linux + btrfs + Samba & NFS). I have, use, and love both. There are use cases for both, but I would certainly not hand my custom solution over to someone random and expect it to just work. I have automated updates and reboots (all hail "if [ -f /var/run/reboot-required ]") but occasionally something does not work right. No normal person is going to be able to figure that out in a reasonable amount of time.
Also, 30 minutes to install and configure it yourself is total BS. I have SaltStack and automated PXE installs at home, and 30 minutes is still a stretch for me for a full-stack install and configure. Linux + Samba + backup + updates + RAID and/or mdadm and/or ZFS and/or btrfs installed and *configured* in 30 minutes is beyond optimistic, even for technical people. $800 does not cover 8 hours of my time, so ya, I recommend Synology for certain (mostly home/SOHO) scenarios.
I can expect my 70-year-old dad to keep his Synology up to date, but not a Linux or BSD distro. That would be a ridiculous thing to expect.
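The `/var/run/reboot-required` trick eximius invokes is Debian/Ubuntu-specific: apt touches that flag file when an installed update (kernel, libc, etc.) needs a restart to take effect. A cron-driven sketch of the automated update-and-reboot loop he describes (timing and flags are illustrative):

```shell
#!/bin/sh
# Run nightly from cron; assumes Debian/Ubuntu.
apt-get update -qq
DEBIAN_FRONTEND=noninteractive apt-get -y -qq dist-upgrade

# apt creates this flag file when a restart is needed to apply an update
if [ -f /var/run/reboot-required ]; then
    # Schedule the reboot rather than yanking the box out immediately
    shutdown -r +5 "Rebooting to finish applying updates"
fi
```

This is the sort of plumbing a DSM appliance hides behind a checkbox, which is exactly the point being made.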
DustinT - Tuesday, November 18, 2014
Ganesh, thanks for the thoughtful review. I am very interested in seeing how SSD caching affects performance. Take two drives out, replace them with 240 GB SSDs, and retest. Synology is putting a lot of emphasis on SSD caching, and I will be making my buying decision largely based on that aspect alone.
eximius - Thursday, December 11, 2014
It depends on your use case. For large sequential transfers SSDs are not going to help you very much, since a couple of spinning metal drives can easily saturate a gigabit link. If you need a lot more IOPS, then an SSD cache will help you out here, but only so much, since again the limit is gigabit Ethernet (16,000 IOPS or so).
*note* this applies to gigabit links; 10+ gigabit Ethernet or InfiniBand-connected devices can see an improvement with SSDs.
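The "16,000 IOPS or so" ceiling is easy to sanity-check: at gigabit line rate (~125 MB/s) the wire itself caps the request rate regardless of how fast the SSD cache is. The 8 KiB request size below is an assumption for illustration; smaller requests raise the ceiling, larger ones lower it:

```shell
#!/bin/sh
# Back-of-the-envelope IOPS ceiling for a saturated gigabit link
LINK_BYTES_PER_SEC=$((1000 * 1000 * 1000 / 8))   # 1 Gb/s = 125,000,000 B/s
IO_SIZE=$((8 * 1024))                            # assume 8 KiB per request
echo "Max IOPS at 8 KiB: $((LINK_BYTES_PER_SEC / IO_SIZE))"
```

That lands in the ~15,000 range quoted above, and real-world TCP/SMB framing overhead pushes it lower still.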
mervincm - Tuesday, November 18, 2014
Yes, please test the read and write cache effect. In my 1813+ (4 GB) on DSM 5.0, I installed a 2-disk read/write SSD cache. Strangely, streaming performance dropped, and since my use case is highly dependent on streaming, I removed the cache. I wonder now if things are better with 5.1 or with a read-only cache.
eximius - Thursday, December 11, 2014
First, your bottleneck is the gigabit LAN. A couple of spinning-rust drives can easily saturate a gigabit link, so an SSD cache is not going to accelerate a streaming (aka sequential read) operation over gigabit Ethernet. If you need more IOPS, then an SSD cache will help (gigabit Ethernet tops out somewhere around 16,000 IOPS), though at the cost of reduced throughput.
IOPS and throughput are at opposite ends of the spectrum; an increase in one means a decrease in the other. If your use case is sequential reads and writes, don't bother with the SSDs. On DAS (direct-attached storage) you can improve both IOPS and throughput with an SSD cache, since it takes a whole lot of platters to equal the performance of a single 850 Pro SSD.
Also note that this problem has nothing to do with Synology; you would have the same constraints even with 24+ CPU threads and 128+ GB of RAM with <insert favourite redundancy technology here>. Gigabit Ethernet is slow, period.