|Case||Open Test Table|
|CPU||Intel Core i7 9700K|
|Motherboard||EVGA Z390 FTW|
|RAM||(2) 8GB Corsair DDR4-3200 CMW16GX4M2C3200C16|
|GPU||MSI RTX 2080 SUPER GAMING X TRIO|
|Hard Drives||Corsair Force MP510 NVMe Gen 3 x4 M.2 SSD (480GB)|
|Network Cards||Dual Port Intel Pro/1000 PT|
Mellanox Connect X-2 PCI-Express x 8 10GbE Ethernet Network Server Adapter
|Switches||MikroTik Cloud Router Switch CRS317-1G-16S+RM (SwitchOS) Version 2.9|
10Gtek for Cisco Compatible GLC-T/SFP-GE-T Gigabit RJ45 Copper SFP Transceiver Module, 1000Base-T
10Gtek for Cisco SFP-10G-SR, 10Gb/s SFP+ Transceiver module, 10GBASE-SR, MMF, 850nm, 300-meter
10Gtek for Cisco SFP-10G-T-S 10GBase-T SFP+ 10 Gigabit RJ45 Copper Transceiver 30m
|Power Supply||Thermaltake Toughpower RGB 80 PLUS Gold 750W|
Two Western Digital Red 8TB 5400 RPM drives were installed and used in the F2-210 NAS tests.
A dual-port Intel network card was installed in the test system.
The F2-210 was tested in RAID 0 and RAID 1 configurations.
For all tests, the NAS was configured to use a single network interface, and the Intel card was used to test 1Gbps connections. For the 1Gbps connection, one CAT 6 cable ran from the NAS to the MikroTik CRS317-1G-16S+RM and another CAT 6 cable ran from the switch to the workstation. Testing was done on the PC with only one network card active, and the switch was cleared of any configuration.
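As a rough yardstick for the throughput numbers that follow, here is a back-of-the-envelope sketch of the usable ceiling for each link speed tested. The ~5.5% protocol overhead figure is an assumption for illustration, not a measurement; real SMB throughput varies with framing, MTU, and protocol version.

```python
# Quick sanity math: theoretical ceilings for the two link speeds tested.
# Raw line rate divided by 8 bits/byte; real SMB throughput lands lower
# once Ethernet/IP/TCP framing overhead is subtracted.

def link_ceiling_mb_s(gbps: float, overhead: float = 0.055) -> float:
    """Approximate usable MB/s for a link of `gbps` gigabits per second.

    The overhead fraction is an assumed value, not a measured one.
    """
    return gbps * 1000 / 8 * (1 - overhead)

for speed in (1, 10):
    print(f"{speed} Gbps -> roughly {link_ceiling_mb_s(speed):.0f} MB/s usable")
```

In other words, anything close to 118 MB/s on the 1Gbps link means the network, not the NAS, is the bottleneck.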
Note: I wasn’t able to find any option to set the MTU on the NAS itself, so the default 1500 MTU setting was used on the NAS interface.
Network drivers used on the workstation were version 5.50.14643.1 by Mellanox Technologies (driver date 8/26/2018) for the 10GbE adapter and 184.108.40.206 by Intel (driver date 10/14/2011).
All testing was done with a single client accessing the NAS.
Crystal Disk Mark is an old favorite disk benchmarking tool that we have used for many years. It provides useful information on the read and write speeds of the target. You can get your own free copy right here.
ATTO Disk Benchmark gives good insight into the read and write speeds of the drive. In our tests, we ran it against the “share” on the NAS. ATTO Disk Benchmark can be downloaded right here.
Anvil Storage Utilities is a comprehensive storage testing program that provides plenty of information and options for each test. Anvil Storage Utilities can be downloaded right here.
NAS Performance Tester is a free utility that benchmarks the read and write performance, in megabytes per second, of network-attached storage connected through SMB/CIFS network shares. Get your own copy here.
All tests were run a total of three times and the results averaged to get the final figure.
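The run-three-and-average methodology can be sketched in a few lines of Python. This is a minimal illustration, not the actual tooling used in the review: the target path and file size here are placeholders (a local temp directory and 64 MiB), and you would point the target at the mounted NAS share and use a much larger file to defeat caching.

```python
import tempfile
import time
from pathlib import Path
from statistics import mean

# Placeholder target -- point this at the mounted NAS share for a real test.
TARGET = Path(tempfile.gettempdir())  # assumption: local temp dir for demo
FILE_SIZE = 64 * 1024 * 1024          # 64 MiB per run (scaled down for demo)
RUNS = 3                              # same three-run average as the review

def write_run(path: Path, size: int, chunk: int = 4 * 1024 * 1024) -> float:
    """Write `size` bytes in `chunk`-sized blocks and return MB/s."""
    buf = bytes(chunk)
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < size:
            f.write(buf)
            written += len(buf)
    elapsed = time.perf_counter() - start
    path.unlink()  # clean up the test file
    return size / elapsed / 1e6

results = [write_run(TARGET / "nas_bench.bin", FILE_SIZE) for _ in range(RUNS)]
print(f"runs (MB/s): {[round(r, 1) for r in results]} -> average {mean(results):.1f}")
```

Averaging several runs smooths out one-off hiccups from background tasks on the client or the NAS.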
With only 2 drives, you are limited to RAID 0, 1, and 10. RAID 0 and RAID 1 were tested for 1GbE and 10GbE connections.
Tests were run after all the RAID arrays were fully synchronized.
|Images courtesy of Wikipedia|
JBOD, or Just a Bunch Of Disks, is exactly what the name describes. The hard drives have no actual RAID functionality; they are simply spanned together into one volume, with no particular layout to how data is distributed across them.
RAID 0 is a stripe set, and data is written across the disks evenly. The advantages of RAID 0 are speed and increased capacity. With RAID 0 there is no redundancy; if either drive fails, all data is lost.
RAID 1 is a mirrored set, and data is mirrored from one drive to the other. The advantage of RAID 1 is data redundancy, as each piece of data is written to both disks. The disadvantage of RAID 1 is that write speed is decreased compared to RAID 0, because every write operation is performed on both disks. RAID 1 capacity is that of the smallest disk.
RAID 10 combines the first two RAID levels as a stripe of mirrored sets. This allows for the speed of a RAID 0 array with the data integrity of a RAID 1 array.
For a full breakdown of RAID levels, take a look at the Wikipedia article here.
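To make the capacity trade-offs above concrete, here is a small sketch of usable capacity per level. It assumes identically sized drives, matching the two 8TB WD Red drives used in this review; the function name and structure are illustrative, not from any particular tool.

```python
# Usable capacity for the RAID levels discussed above, given a list of
# drive sizes in TB. Assumes whole drives are used (no controller reserve).

def raid_capacity(level: str, drives: list) -> float:
    """Return usable capacity in TB for the given level and drive sizes."""
    n, smallest = len(drives), min(drives)
    if level == "JBOD":
        return sum(drives)            # spanned: every byte is usable
    if level == "RAID0":
        return n * smallest           # striped: limited by smallest drive
    if level == "RAID1":
        return smallest               # mirrored: one drive's worth
    if level == "RAID10":
        return (n // 2) * smallest    # stripe of mirrored pairs
    raise ValueError(f"unknown level: {level}")

drives = [8.0, 8.0]  # two 8 TB drives, as in the F2-210 tests
for level in ("JBOD", "RAID0", "RAID1"):
    print(f"{level}: {raid_capacity(level, drives):.0f} TB usable")
```

With two equal drives, JBOD and RAID 0 both yield 16 TB while RAID 1 yields 8 TB; the difference is what happens when a drive dies.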
RAID configurations are a highly debated topic. RAID has been around for a very long time; hard drives have changed, but the technology behind RAID really hasn’t, so what may have been considered ideal a few years ago may not be ideal today. If you are relying solely on a multi-drive array as a safety measure to prevent data loss, you are in for a disaster. Ideally, you will use a multi-drive array for increased speed and lower access times, and keep a backup of your data elsewhere. I have seen arrays with hot spares where multiple drives failed and the data was gone.