Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

This room is for the discussion of how the Synology DiskStation can meet the storage needs for Virtual HyperVisors.
Forum rules
1) This is a user forum for Synology users to share experience/help out each other: if you need direct assistance from the Synology technical support team, please use the following form:
https://myds.synology.com/support/suppo ... p?lang=enu
2) To avoid putting users' DiskStation at risk, please don't paste links to any patches provided by our Support team as we will systematically remove them. Our Support team will provide the correct patch for your DiskStation model.
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Issues?

Postby quackhead » Wed Jun 19, 2013 6:13 am

I have read a few posts here and there about iSCSI performance issues with ESX 5.1, and I'm wondering if this is still the case.

I currently have one RS3412xs with four ports (of six total) dedicated to iSCSI via MPIO to two R715 servers with NO local storage. I am NOT using a switch; two ports connect to each ESX server via crossover cable. I'm using a block-level iSCSI LUN on RAID 10 with 6 disks (no VAAI for block-level LUNs). All iSCSI ports are on different subnets and I have followed best practices to the letter on both sides (ESX and Synology), but I am getting crappy VM performance.
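For anyone wanting to double-check a similar direct-attached MPIO setup, a quick sanity check from the ESXi shell (a sketch assuming stock esxcli on 5.x; your device names and output will differ) would look something like:

```shell
# List active iSCSI sessions to the Synology target
esxcli iscsi session list

# Show the NMP path selection policy per device; Round Robin appears as VMW_PSP_RR
esxcli storage nmp device list

# Confirm each LUN has one active path per NIC
esxcli storage core path list | grep -E "Device|State"
```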

I have read that some have gone to NFS with much better performance, but I really don't want to go that route. Is anyone else still experiencing the same issues? Am I stuck with NFS if I want decent performance? I kinda bought this with the impression that it offered (near) enterprise capability.
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby quackhead » Wed Jun 26, 2013 3:16 am

Must just be me... Anyway, I have switched over to file-based iSCSI LUNs with VAAI enabled and I'm seeing improved performance; still not nearly ideal, but improved. If anyone knows of a way to troubleshoot iSCSI issues, I'm all ears.
yakbone
Seasoned
Posts: 502
Joined: Wed Jun 03, 2009 11:33 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby yakbone » Thu Jun 27, 2013 3:58 pm

What is crappy performance? How many IOPS are you expecting versus what you are getting? Is your load predominantly random or sequential?

You should not need crossover for this.
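If you want hard numbers, run esxtop on the host: press 'u' for per-device stats and watch CMDS/s (IOPS) and DAVG/cmd (device latency). You can also capture to CSV in batch mode; a sketch (the interval and sample count below are just examples):

```shell
# Capture 60 samples at 5-second intervals for offline analysis
esxtop -b -d 5 -n 60 > /tmp/esxtop_capture.csv
```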

-yakbone
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby quackhead » Fri Jun 28, 2013 4:08 am

Thank you very much for the reply. I discovered a few tweaks and got things about as good as they are gonna get with iSCSI. That said, I think this has evolved into more of a question of why IOPS are so much higher with NFS than iSCSI. Performance with file-based LUNs and VAAI enabled is not bad, but I would like to get iSCSI features with NFS IOPS. Check it out:

iSCSI with 2 NICs (MPIO), Round Robin (IOPS=1000 setting)
[benchmark screenshot]

iSCSI with 2 NICs (MPIO), Round Robin (IOPS=3 setting)
[benchmark screenshot]

NFS with 1 NIC
[benchmark screenshot]

NFS just p0wns iSCSI in the random I/O department. Virtual environments are all about IOPS, not so much throughput.
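For reference, the "io" setting above is the Round Robin path-switch threshold, changed per device from the ESXi shell. A sketch (the naa. device ID here is a made-up example; substitute your own from `esxcli storage nmp device list`):

```shell
# Switch paths after every 3 I/Os instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set \
    --type=iops --iops=3 --device=naa.600140512345678

# Verify the change took effect
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.600140512345678
```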

I am using crossovers because I only have two ESXi nodes in HA config and want to eliminate the need for iSCSI switches which would only add another point of failure.
capoitaly
I'm New!
Posts: 8
Joined: Mon Jul 29, 2013 9:48 am

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby capoitaly » Mon Jul 29, 2013 10:50 am

Refer to this thread: viewtopic.php?f=148&t=68047

We have a BIG issue with iSCSI; I've tried a lot of configurations but nothing changes.
I'm experiencing random latency of 1000-1500 ms, and 2000 ms during a boot storm (can I even call 8 VMs booting a "STORM"????).
I'm a new Synology user and I think (and hope) this is a bug in this DSM release... even a humble OpenFiler works better!!

Bye
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby quackhead » Mon Jul 29, 2013 10:54 pm

Just checking back after fully implementing this system... I was skeptical of the performance based on what these numbers were showing me, but after firing up 8 VMs:

MailCleaner (Linux)
Exchange 2007 (Server 2008)
Proxy (Linux)
AD (Server 2008)
Utility Server (Server 2012)
DB Server 1 (Server 2003)
Proprietary Application Server (Server 2003)
Groundwork Enterprise Monitoring Server (Linux)

Performance is pretty damn good, ALMOST SAN-like. I have most systems on (file-based) iSCSI LUNs and a few of the high-I/O systems on plain NFS volumes. All are HA-capable and failover is working like a charm.

I also have a handful of NFS shares hosted straight from the Synology, so this thing is busy.

Overall, I am happy with the bang for the buck on this system: 3200 + 750 x 2 (6 x 2TB disks in each Synology RackStation), so under $8k for full HA on the storage side of things. WAY cheaper than any SAN solution, but lacking some enterprise-level capabilities.
capoitaly
I'm New!
Posts: 8
Joined: Mon Jul 29, 2013 9:48 am

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby capoitaly » Tue Jul 30, 2013 11:38 am

What vSphere release are you using?
We're on vSphere 5.1 with dedicated iSCSI NICs in a dedicated VLAN, and MPIO to the Synology with round robin configured on each datastore.
Latency is terrible: during a boot storm (5-10 VMs starting simultaneously), VMs freeze on the Windows logo and boot badly (services timing out, etc...).
I/O latency grows to 2000 ms and the VMware infrastructure becomes unusable... powering on 30 VMs takes about 40 minutes!!
I'm testing performance on NFS and it seems to work very well... I think Synology got something wrong in DSM 4.2 and also in 4.3 (I've tried the beta but the issue is not resolved!)... maybe in the VAAI implementation!
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby quackhead » Wed Aug 07, 2013 3:42 pm

We are on 5.1 also (DSM 4.2), but we are only running 9 VMs, not 30. What RAID level are you running? We are running RAID 10 with 6 disks and I/O latency is pretty low... UNLESS (and I have an open ticket with Synology about this) a Time Backup is running on our NFS share.

Our RackStation also serves as an NFS server for our main project repository. Unfortunately, when we run Time Backup on that share, I/O goes through the roof... even with a 5 MB/s speed limit imposed on the backup(!). So yeah, something is broken there. I am going to post about it in another thread.
cwilliams255
I'm New!
Posts: 1
Joined: Mon Dec 09, 2013 9:24 am

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby cwilliams255 » Mon Dec 09, 2013 10:45 am

It's been a while since this topic was posted, but I thought I would share my findings, as we seem to have similar issues. We have an RS10613xs+ currently with 14x 4TB SAS drives in it. First problem: only 12 drives are allowed per array, so a 12-disk RAID 10 it is. We have an Intel X520-DA2 10GbE card in both the Synology and the ESXi host, directly connected on both ports. They are configured in ESXi (5.1) and DSM (4.3-3810) on separate subnets, MTU is 9000 throughout, and multipathing is enabled and set to round robin on the ESXi end.
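With MTU 9000 throughout, one check worth running from the ESXi shell is a don't-fragment vmkping at full jumbo payload (a sketch; the vmk interface number and target IP here are assumptions, substitute your own):

```shell
# 8972 = 9000-byte MTU minus 28 bytes of IP+ICMP headers; -d sets don't-fragment
vmkping -I vmk1 -d -s 8972 10.10.10.2
```

If this fails while a plain vmkping succeeds, jumbo frames are not actually working end-to-end somewhere on the path.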

We configured a single RAID group and a block-based LUN and allocated the whole lot to ESXi. What we see is fairly poor and inconsistent performance: a peak of about 720 MB/s read and 400 MB/s write, but it's all over the place.

[benchmark screenshot]

The next thing we tried was a file-based LUN with advanced features on. This did stabilise and increase performance, and we were able to get about 1200 MB/s read and 1100 MB/s write, but something still didn't seem right.

[benchmark screenshot]

To rule out it being a limit of the disks/controller, we tried 4x Samsung 840 Pros in RAID 0 and got fairly similar results. I could see that MPIO was working, as both NICs were passing traffic, so I tried changing the number of I/Os before ESXi switches paths (as recommended by others), but this didn't help.

The weird thing is we tried this exact same setup with Server 2012 as the initiator of the file-based LUN and got much better results, but I don't have any benchmarks to hand as it was a while back.

In the end I went to NFS, broke the array down into a 6-disk RAID 10 and an 8-disk RAID 10, each with a separate NIC, and both arrays now pretty much saturate the 10GbE connections; effectively I'm now getting about 1800 MB/s read and write when both arrays are being used.
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby quackhead » Tue Jan 06, 2015 4:47 pm

Just an update to this thread I opened a year and a half ago.
Until recently we had two RS3412xs systems running in HA with iSCSI (with the 10GbE upgrade).

After some testing with NFS, we are switching from iSCSI to NFS for VMware HA (ESX 5.5).

With Synology, most are going to see better performance (and stability) with NFS. The stability issues we had were related to high-I/O situations occurring when Time Backups were running on the same system (we also used the Synology to serve Windows shares for our organization). What would happen is: a Time Backup would start, I/O would spike, and then the iSCSI connections would either become unresponsive for long enough that some VMs crashed, or the iSCSI service would crash entirely. All of this with no error logs produced. The Synology would NOT fail over as expected because it didn't think anything was amiss.

I can say that we are seeing relatively impressive performance with just NFS. I suspect reliability will be improved simply because NFS appears to be what Synology does best.

The moral of this story, for Synology with VMware: NFS > iSCSI.
Last edited by quackhead on Tue Jan 06, 2015 4:56 pm, edited 1 time in total.
quackhead
Novice
Posts: 42
Joined: Mon Jun 10, 2013 6:57 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby quackhead » Tue Jan 06, 2015 4:55 pm

Also, Synology now has an NFS plugin available for ESXi that enables VAAI support! Previously VAAI was only supported with iSCSI.
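Installing it is the usual VIB route; a sketch, with the VIB filename and path being assumptions (use the file Synology actually ships for your DSM/ESXi version):

```shell
# Copy the VIB to the host first, then (ideally with the host in maintenance mode):
esxcli software vib install -v /tmp/esx-nfsplugin.vib
# Reboot, then check that Hardware Acceleration shows "Supported" on the NFS datastore
```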
Bluefinrad
I'm New!
Posts: 1
Joined: Sat Feb 14, 2015 10:08 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby Bluefinrad » Sat Feb 14, 2015 10:19 pm

Thanks for the information. I just purchased a 3614xs+ and was researching iSCSI vs. NFS for VMware 5.5, and this is very helpful.
berntek
I'm New!
Posts: 1
Joined: Fri Feb 20, 2015 12:36 am

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby berntek » Fri Feb 20, 2015 12:51 am

quackhead, I work with exactly the same setup as you and went through the same I/O issues. I eventually settled on NFS, but instead of HA, I have each Synology doing half the load, with backups replicating to both, as we run 25+ VMs with some high-I/O requirements. We currently run DSM 4.3, and I was wondering if you saw any performance issues going to DSM 5.0/5.1? I am planning on adding memory during the upgrade, and also migrating the backups to a third unit. With the freed-up drive slots, I will be adding (2) 480GB SSDs for caching, and I'm hoping that will make a big difference.

Are you only using 10GbE for the Synology HA link, or for the ESX connections as well? How has the experience with HA been overall?
Barto
Versed
Posts: 214
Joined: Fri Mar 08, 2013 3:15 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby Barto » Fri Feb 20, 2015 12:16 pm

After doing quite a bit of testing, I thought I would report back my findings.

When my virtual machines were running on iSCSI, in general everything would work fine.

However, if there was any form of high I/O activity against the DiskStation, the virtual machines would grind to a halt, and their latency would climb into the many thousands of ms!

(Kick off a few vMotions, or even a big Windows update inside a VM, and the disk utilisation on the DS1812+ would be a solid 100%.)

I moved all my VMs back to NFS and carried out the same tests: whilst the DiskStation reported 100% utilisation on the volume, the VMs running on NFS only saw about 20-60 ms of latency.

It seems that in times of high utilisation, the NFS VMs still get good priority and suffer far less, whereas iSCSI really suffers with latency.

Anyone else have similar findings?

(Using htop on the DiskStation and esxtop on the VMware host showed these results quite clearly.)
DS1812+, WDRED3TB, Latest DSM
jaknell1011
Trainee
Posts: 12
Joined: Thu Mar 14, 2013 3:44 pm

Re: Synology NAS, Multipath iSCSI, & ESX 5.1 Performance Iss

Postby jaknell1011 » Mon Mar 02, 2015 8:10 pm

You are all mentioning 30 VMs and 8-10 VMs, and I see the OP has a 3614. What model is everyone else using? I am trying to figure out whether an 1815 can handle 8-10 lightly used VMs.
