Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

Postby quackhead » Thu Mar 19, 2015 5:21 pm

jaknell1011 wrote:You all are mentioning 30 VMs and 8/10 VMs, and I see that the OP has a 3614. What model is everyone else using? I am trying to figure out if an 1815 can handle 8-10 lightly used VMs.


I am the OP and I am using two RS3412xs in HA. The 1815 should handle 8-10 light VMs just fine... the Synology systems you will want to avoid for any appreciable IO are the ones using Marvell or Freescale processors.

berntek wrote:quackhead, I work with exactly the same setup as you and went through the same issues with IO. I eventually settled on NFS, but instead of HA I have each Synology doing half the load, with backups replicating to both, as we run 25+ VMs with some high-IO requirements. We currently run DSM 4.3, and I was wondering if you saw any performance issues going to DSM 5.0/5.1? I am planning on adding memory during the upgrade and also migrating the backups to a third unit. With the freed-up drive slots I will be adding (2) 480GB SSDs for caching, and I'm hoping that will make a big difference.

Are you only using 10Gb for Synology HA, or for the ESX connections as well? How has the experience with HA been overall?


Don't use SSDs for caching; it's not really going to improve write performance much, and gives at best a small read performance increase. Perhaps Synology (or the fastcache devs) will improve this, but right now it's not worth it. I would create another disk group in RAID 1 with those SSDs instead. If you have enough memory, the Synology box will use that for caching / buffer, which works much better. BTW, Synology states that the 3412xs's RAM is "Expandable, up to 6GB"; I am at 8GB RAM.

I am using 10Gb NICs for NFS, SHA, and ESXi, with failover to 1Gb NICs. This switch: GS752TXS-100NAS. Flawless operation over the last 3 months. It still performs like a dream, too... amazing systems when set up properly for NFS.

Learn from my two years' experience of constantly tinkering with these Synology boxes: I finally have no need to babysit them anymore after moving from iSCSI to NFS for ESXi.
laughingmanzero
I'm New!
Posts: 3
Joined: Fri May 23, 2014 7:06 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

Postby laughingmanzero » Fri Mar 20, 2015 5:14 pm

Quackhead, what do you mean by:

quackhead wrote: If you have enough memory the Synology box will use that for caching / buffer


I've never seen anything in the documentation other than the SSD Cache so this piqued my interest.

This is especially relevant to me because I just took out two of my WD Reds and put two 845DC EVOs in their place as an SSD cache for my iSCSI volume. Although my IOPS shot up much higher, I've suddenly lost the fantastic bandwidth I was getting before on my VMs over the 4x1Gb bond. I'm not entirely sure yet what I broke, but I went from almost full utilization of the 4x1Gb bond down to a maximum of 115MB/s R/W (the full utilization of a single 1Gb link).

The only changes I made were putting the SSD cache in and running the 5.2 beta (a decision I've come to regret this last week).
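In case it helps anyone chasing a similar drop, below is a rough Python sketch of the kind of sanity check I'd run from inside a test VM sitting on the affected datastore. This is only a sketch, not something from my actual setup: the file path and size are placeholders, and the test file needs to be much larger than the VM's RAM or the page cache will flatter the read number.

# Rough sequential-throughput check (sketch only; path and size are placeholders).
# Run inside a test VM whose virtual disk lives on the datastore in question.
import os, time

TEST_FILE = "/tmp/throughput_test.bin"   # placeholder location on the test disk
SIZE_MB = 4096                           # use more than the VM's RAM
BLOCK = 1024 * 1024                      # 1 MiB sequential blocks

# Sequential write
buf = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())                 # make sure the data actually hits storage
elapsed = time.time() - start
print("write: %.1f MB/s" % (SIZE_MB / elapsed))

# Sequential read
start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
elapsed = time.time() - start
print("read:  %.1f MB/s" % (SIZE_MB / elapsed))

os.remove(TEST_FILE)

If that still tops out around 115MB/s while all four links in the bond show as active, the bottleneck is more likely on the storage/cache side than on the network side.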

quackhead wrote:I am using 10Gb NICs for NFS, SHA, and ESXi, with failover to 1Gb NICs. This switch: GS752TXS-100NAS. Flawless operation over the last 3 months. It still performs like a dream, too... amazing systems when set up properly for NFS.


On another note, how is that switch? I've steered clear of Netgear for the last 3 or 4 years because of many bad experiences with them. However, they are about the only 10Gb switches on the market in a very appealing price range. I've specifically been looking at the XS712T or a couple of GS728TX-100NES units. It's a nightmare trying to find lower-priced 10GBASE-T switches that work with my Supermicro X10DRL-CT boards.
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

Postby quackhead » Mon Mar 23, 2015 11:25 pm

laughingmanzero wrote:Quackhead, what do you mean by:

quackhead wrote: If you have enough memory the Synology box will use that for caching / buffer


I've never seen anything in the documentation other than the SSD Cache so this piqued my interest.

This is especially relevant to me because I just took out two of my WD Reds and put two 845DC EVOs in their place as an SSD cache for my iSCSI volume. Although my IOPS shot up much higher, I've suddenly lost the fantastic bandwidth I was getting before on my VMs over the 4x1Gb bond. I'm not entirely sure yet what I broke, but I went from almost full utilization of the 4x1Gb bond down to a maximum of 115MB/s R/W (the full utilization of a single 1Gb link).


Look under Resource Monitor -> Memory and you will see that all of your "Free Memory" is being used as Cache and Buffer. I saw a very noticeable improvement in IO when I went from 2GB to 8GB of RAM.
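If you want the raw numbers behind that graph, you can read them straight out of /proc/meminfo over SSH. A minimal sketch, assuming Python is available on the DiskStation (installable from Package Center if it isn't); the field names are just the standard Linux ones.

# Show how much RAM the NAS is using as cache/buffers vs. truly free memory.
# Sketch only: assumes Python is present and reads standard /proc/meminfo fields.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are reported in kB
    return info

m = meminfo()
for key in ("MemTotal", "MemFree", "Buffers", "Cached"):
    print("%-8s %6d MB" % (key, m.get(key, 0) // 1024))

The Buffers and Cached lines are essentially what Resource Monitor lumps together as cached memory, which is why the extra RAM makes such a difference.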

The SSD cache is just unreliable. Search the forums: most don't see a performance improvement, and some even see a drop in IO performance.

laughingmanzero wrote:On another note, how is that switch? I've steered clear of Netgear for the last 3 or 4 years because of many bad experiences with them. However, they are about the only 10Gb switches on the market in a very appealing price range. I've specifically been looking at the XS712T or a couple of GS728TX-100NES units. It's a nightmare trying to find lower-priced 10GBASE-T switches that work with my Supermicro X10DRL-CT boards.


I have never had any issues with Netgear myself. The switch I have provides great management features with four 10Gb SFP+ ports. I went the SFP+ route because SFP+ 10Gb NICs are dirt cheap on eBay compared to the standard RJ45 NICs. You do need to make sure your 10Gb NICs are compatible with your Synology box, though... those are Intel and Emulex only, so a Brocade or other brand will not work.
Tu9a2
I'm New!
Posts: 4
Joined: Wed Nov 11, 2015 8:59 am

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

Postby Tu9a2 » Thu Nov 19, 2015 3:13 am

quackhead wrote:Look under Resource Monitor -> Memory and you will see that all of your "Free Memory" is being used as Cache and Buffer. I saw a very noticeable improvement in IO when I went from 2GB to 8GB of RAM. ...

Hi quackhead,

Please kindly provide more information on the memory upgrade. Did you buy the RAM module from Synology, or just use a normal desktop/server RAM module?

Thank you,
Tu9a2
ScottWCO
I'm New!
Posts: 1
Joined: Mon Feb 29, 2016 4:24 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

Postby ScottWCO » Mon Feb 29, 2016 4:57 pm

I have an RS3614RPxs running a 12-drive RAID 10 configuration connected to a VMware cluster. During initial implementation I configured the array with MPIO iSCSI, created a VMFS datastore, and started using it as a target for my backups. Not more than a few days after implementation I noticed that backups were failing, so I checked DSM, which indicated that "the data in this volume may be crashed" with 3 of the physical disks in a "crashed" state. I rebooted the array and, to my surprise, the data was 100% intact.

The array kept crashing about every 2-3 days, so I opened a case with Synology support. They blamed the physical disks and asked for extended integrity tests using the manufacturer's tools. Of course these tests take 3-4 hours per drive, and of course the disks passed with flying colors. I slapped the drives back into the array and Synology said "let us know if the issue happens again." Wait, what? We haven't changed anything. Of course the problem will happen again, and it did.

Next step: they had me RMA the enclosure. Two months and a new array later, no improvement. After searching the web, I noticed that many people, including the ones in this thread, have experienced similar problems when using iSCSI with a Synology array.

Bottom line: Synology does not do iSCSI well. Further, they should not offer iSCSI at all, as this problem has apparently been around for years and they obviously have no intention of fixing it. I will say that since switching to NFS, the array has performed as desired.
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

Postby quackhead » Tue Jan 03, 2017 7:33 pm

Tu9a2 wrote:Hi quackhead,

Please kindly provide more information on the memory upgrade. Did you buy the RAM module from Synology, or just use a normal desktop/server RAM module?

Thank you,
Tu9a2


I know you asked a while ago but I wanted to report back on my latest findings while also answering your question about the RAM modules I use.
I successfully upgraded our RS3412xs to 16GB RAM; that's right, 10GB more than the max spec. Memory utilization is about 15%, with a majority of the free RAM being used as cache.

For a 2012 model, I'm not sure we could ask for better performance. This thing absolutely rips through data while the aging i3 proc hardly breaks a sweat. SMB shares are on a Btrfs volume while the NFS shares (for ESXi) are on ext4. DSM is the latest (6.0). Still NOT using SSD cache... from what I have read it provides little performance improvement and possibly more issues.

Here are my current numbers: 10Gb Ethernet to the ESXi host via NFS (Windows VM). A rough DIY version of the 4K test follows the listing.
CrystalDiskMark 3.0.3 (MB/s):
Seq:
Read: 383.6
Write: 655.2

512K:
Read: 332.2
Write: 545.7

4K:
Read: 14.53
Write: 14.07

4K QD32:
Read: 65.96
Write: 61.13
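For anyone without CrystalDiskMark handy, the 4K (QD1) read line can be roughly approximated from inside a Linux VM with a short Python 3 sketch like the one below. The file path is a placeholder, the test file must already exist and be much larger than the VM's RAM, and because it goes through the page cache the result is only a ballpark.

# Rough 4K random-read test (roughly QD1). Sketch only: TEST_FILE is a
# placeholder for a pre-created file much larger than the VM's RAM.
import os, random, time

TEST_FILE = "/tmp/throughput_test.bin"
BLOCK = 4096
NUM_READS = 20000

fd = os.open(TEST_FILE, os.O_RDONLY)
size = os.fstat(fd).st_size
max_block = size // BLOCK

start = time.time()
for _ in range(NUM_READS):
    offset = random.randrange(max_block) * BLOCK   # 4K-aligned random offset
    os.pread(fd, BLOCK, offset)
elapsed = time.time() - start
os.close(fd)

mb_per_s = NUM_READS * BLOCK / elapsed / (1024 * 1024)
print("4K random read: %.2f MB/s (%.0f IOPS)" % (mb_per_s, NUM_READS / elapsed))

The point is just to see the same pattern as above: sequential numbers near the link speed, 4K numbers held down by per-IO latency.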


RAM Memory Modules:
mod note: shop link removed

Two of those will work (confirmed on DSM 6)... possibly four (32GB!). I didn't want to push my luck, but I still have two free slots.
