ESXi 6 iSCSI vs NFS

baron164
Trainee
Posts: 17
Joined: Tue Mar 08, 2016 7:35 pm

ESXi 6 iSCSI vs NFS

Postby baron164 » Tue Mar 15, 2016 3:07 pm

I was browsing the forums looking for ideas. I just picked up a DS1815+ and I'm going to fill it with 8x 5TB WD Red Pro drives in a RAID 10. My plan is to have two ESXi hosts using the Synology as an iSCSI target. I have the four NICs on the Synology set up in a team to provide a single 4Gb connection, and I plan on having four gigabit NICs on each host running multipath to the Synology. I have a VLAN set up for the iSCSI network so all iSCSI traffic is segregated from the rest of the network.

I will be running about 10 VMs between the two hosts. I've been running them all on local storage up until now. I suspect most of them will be light, with the exception of the media server, which serves out large MKVs via SMB shares and also runs Plex.

Some people on the forums have reported issues with iSCSI and said they got much better performance running NFS. I'm just wondering if anyone has had success running iSCSI, and how they configured the Synology, so I know I'm doing things right. Or should I just dump iSCSI and try NFS? I've updated the Synology to the latest DSM version, 5.2-5644 Update 5.
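A quick sanity check of that layout from the ESXi shell looks something like the following (vSwitch, portgroup and vmk names are placeholders for whatever your setup uses):

    # list vmkernel interfaces and their current portgroup / MTU assignment
    esxcli network ip interface list
    esxcli network ip interface ipv4 get

    # list standard-switch portgroups with their VLAN IDs, to confirm the
    # iSCSI portgroup really sits on the segregated VLAN
    esxcli network vswitch standard portgroup list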
baron164
Trainee
Posts: 17
Joined: Tue Mar 08, 2016 7:35 pm

Re: ESXi 6 iSCSI vs NFS

Postby baron164 » Mon Mar 21, 2016 8:29 pm

Well, if anyone is interested: I've set up ESXi 6 using iSCSI and a RAID 10 configuration on my Synology. I created a single volume and have been creating regular file LUNs so I can use the advanced LUN features. So far, when I migrate VMs to and from the host I see around 150MB/s on the volume/iSCSI monitoring graph. The only issue I have currently is when I copy large files to a guest VM residing on the Synology: the transfer starts out at about 110MB/s (fully utilizing the 1Gb connection), but about halfway through the speed drops off considerably, down to 20-30MB/s. I have yet to find a solution, which is rather frustrating considering that file transfers to guest VMs on locally attached storage hold that 110MB/s for the entire copy.
freefly
Knowledgeable
Posts: 388
Joined: Sat Mar 12, 2016 8:46 pm

Re: ESXi 6 iSCSI vs NFS

Postby freefly » Mon Mar 21, 2016 9:37 pm

I had this issue in the past with ESXi 5: the network speed dropped at some point. The cause can be various things.
First, the standard checks: check cabling, use S/FTP cables, disable auto-negotiation on all interfaces, and check the firmware version of the network cards in your physical host.
Depending on the OS you are running, you should also experiment with the virtual network adapter type used in the guests.
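The driver/firmware check on the ESXi side can be done from the host shell; something along these lines, with vmnic0 as a placeholder:

    # list physical NICs with driver name and link state
    esxcli network nic list

    # show driver and firmware details for a single NIC
    esxcli network nic get -n vmnic0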

However, I'm wondering how you can run 10 VMs with the NAS/disk combo you are using. If they idle a lot it might work,
but faster disks plus a 1TB SSD read/write cache would not be a bad idea, and you may also have to work with an increased MTU.
In the past we used NetApp storage for virtualization, and the storage network ran at MTU 9000.
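If you raise the MTU it has to match end to end (switch, DiskStation, ESXi vSwitch and vmkernel ports). A rough esxcli sketch, with placeholder names and addresses:

    # raise the MTU on the vSwitch carrying iSCSI traffic and on the vmkernel port
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000

    # verify end to end with a non-fragmenting ping (8972 bytes = 9000 minus IP/ICMP headers)
    vmkping -I vmk1 -d -s 8972 192.168.10.10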

Regards
Before: DS207+
Current: DS415+ 2x6Tb WD and 1x4Tb for Backup.
baron164
Trainee
Posts: 17
Joined: Tue Mar 08, 2016 7:35 pm

Re: ESXi 6 iSCSI vs NFS

Postby baron164 » Tue Mar 22, 2016 9:47 pm

So I've been doing more testing. What I've narrowed it down to is that with VMware VMs running on the Synology, when I copy a large 2GB+ file to the VM the speed drops off after 2GB. So if the file is 4GB, the performance drops at about 50%, or 2GB transferred. With a file that is, say, 1.9GB there is no slowdown at all; it only happens once 2GB has been transferred. The issue also does not occur using local storage on the VMware host.

I set up another LUN for my Hyper-V host and have been testing with that, and the Hyper-V guest VMs do not have any issues. I've been able to copy 12GB files to a Hyper-V guest VM on the Synology with no loss in performance.

I'm running 8x Western Digital 5TB Red Pro drives, which are fully supported. These are 7200rpm drives and I'm running them in a RAID 10. I also upgraded the DS1815+ to the full 6GB of RAM. VMware seems able to write 3000 IOPS fairly easily, and I've seen Hyper-V do 4000 IOPS as well. I also have jumbo frames enabled throughout the environment: the switch, the Synology, and all NICs on both hosts. So at this point I'm leaning towards it being a VMware issue.
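One way to see whether the stall is on the path or on the array is to watch latency live while a copy runs; a rough sketch using esxtop from the host shell:

    # interactive: run esxtop, then press 'd' (disk adapter) or 'u' (disk device)
    # and compare DAVG/cmd (device latency) with KAVG/cmd (kernel latency)
    esxtop

    # or record a batch capture for later review (2-second samples, 300 iterations)
    esxtop -b -d 2 -n 300 > /tmp/esxtop-iscsi.csv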
freefly
Knowledgeable
Posts: 388
Joined: Sat Mar 12, 2016 8:46 pm

Re: ESXi 6 iSCSI vs NFS

Postby freefly » Tue Mar 22, 2016 10:52 pm

Even if it's not Synology related...

Do you have anything logged in /var/log/vmkernel.log when the network speed drops?
I'm not sure if you have to increase the log level first:
https://kb.vmware.com/selfservice/micro ... Id=1004795
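Something like this, run from the ESXi shell while reproducing the slowdown, usually pulls out the relevant entries:

    # follow the log live during the copy
    tail -f /var/log/vmkernel.log

    # or filter for iSCSI/NMP-related messages afterwards
    grep -iE 'iscsi|nmp|scsi' /var/log/vmkernel.log | tail -n 100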
Before: DS207+
Current: DS415+ 2x6Tb WD and 1x4Tb for Backup.
baron164
Trainee
Posts: 17
Joined: Tue Mar 08, 2016 7:35 pm

Re: ESXi 6 iSCSI vs NFS

Postby baron164 » Wed Mar 23, 2016 12:50 am

I always forget to check the vmkernel log. I'll try digging through the logs and increasing the log level. Right now it's set to verbose and I'm not seeing any errors pertaining to the iSCSI adapter. I'll bump it to trivia logging and see if it tells me anything.

On a hunch I dropped the number of NICs assigned to the software iSCSI adapter from 4 to 1 and the issue went away. So I'm going to play around with that too.

Right now I have a quad-port Intel 82571EB gigabit card and I use three of those NICs for iSCSI. Then I have an Intel 82574L gigabit card that I added to use as a fourth iSCSI adapter, so that if one of the cards fails I don't lose all connectivity. I'll play around with combinations to see if it's something to do with one of the cards.
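For reference, the port bindings on the software iSCSI adapter can be listed and changed from the shell while testing combinations; the adapter and vmk names below are placeholders:

    # show which vmkernel ports are bound to the software iSCSI adapter
    esxcli iscsi networkportal list --adapter=vmhba33

    # drop or re-add a binding while testing one NIC at a time, then rescan
    esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk2
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli storage core adapter rescan --adapter=vmhba33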
baron164
Trainee
Posts: 17
Joined: Tue Mar 08, 2016 7:35 pm

Re: ESXi 6 iSCSI vs NFS

Postby baron164 » Thu Apr 07, 2016 4:11 pm

I wanted to post an update to this thread to let others know where I ended up.

I found that a Round Robin policy IOPS count of 100 resolved my SMB performance issues for the most part. With a count of 1 I would see SMB performance drop off after about 2GB; when I bumped it up to 100 or 200 it ran much better. However, this led me to discover a larger issue with ESXi.
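For anyone wanting to try the same thing, that count maps to the per-device round-robin IOPS limit, which can be set from the ESXi shell; a rough sketch, with the naa identifier standing in for your LUN:

    # make sure the LUN uses the Round Robin path selection policy
    esxcli storage nmp device set --device=naa.XXXXXXXXXXXXXXXX --psp=VMW_PSP_RR

    # switch paths every 100 I/Os, then confirm the setting took effect
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.XXXXXXXXXXXXXXXX --type=iops --iops=100
    esxcli storage nmp psp roundrobin deviceconfig get --device=naa.XXXXXXXXXXXXXXXX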

I ended up ditching ESXi and going to Hyper-V, because with 4 NICs dedicated to iSCSI traffic, when using the VMware software iSCSI adapter it is impossible to get more than one NIC's worth of throughput. So unless I upgraded to 10Gb NICs in my hosts and bought a 10Gb-capable switch, I was never going to see more than 1Gb of throughput to the Synology.

What I found was that with the iSCSI Round Robin policy, VMware only uses one NIC at a time. As a test I cranked the Round Robin count up to 10,000 and used the resource monitoring in vCenter to watch it use NIC1, then NIC2, then NIC3, then NIC4, and then swing back around to NIC1 and start over. It would not use all four NICs at once, and I was unable to find any way of having multiple simultaneously active paths with a Synology.
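The per-path behaviour described here can also be watched from the shell; the device ID is a placeholder:

    # list every path to the LUN and its state; with Round Robin only one path
    # carries I/O at any given instant
    esxcli storage nmp path list --device=naa.XXXXXXXXXXXXXXXX
    esxcli storage core path list --device=naa.XXXXXXXXXXXXXXXX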

From what I've read, HP and EMC have written custom multipathing plugins for ESXi that allow more than one NIC to be used at a time, but nothing like that exists for Synology as far as I can tell. With Hyper-V I was able to use MPIO and create four active paths, giving me 4Gb of throughput to the Synology, which is why I went with Hyper-V over ESXi in this case.

Also, I did look into NFS, but ESXi 6 supports NFS 4.1 whereas Synology only supports 4.0, so I would be stuck with NFS 3. And again, I could not find any way to make ESXi use more than one NIC's worth of throughput, so I'd still be stuck at the 1Gb limit for any single NFS share.
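For completeness, mounting an NFS 3 export from the DiskStation on an ESXi 6 host is a one-liner; the host address, export path and datastore name below are made-up placeholders:

    # mount the NAS export as an NFS 3 datastore and confirm it shows up
    esxcli storage nfs add --host=192.168.10.10 --share=/volume1/vmware --volume-name=ds-nfs
    esxcli storage nfs list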
brimur
Apprentice
Posts: 83
Joined: Sat Jan 12, 2013 8:22 pm

Re: ESXi 6 iSCSI vs NFS

Postby brimur » Wed May 18, 2016 3:53 pm

iSCSI implemented correctly using MPIO will scale with every NIC you use in VMware, and the LUNs are easy to maintain (snapshots, for example). I am currently using ESXi 6 and MPIO (2 NICs) over iSCSI to my DS1815+ and see over 200MB/s read and write. If I used 3 NICs I would probably see 300MB/s read and write. The previous poster may not have known how to implement it correctly, but it is indeed possible: you need to create a vmkernel port for each NIC and disable the other NICs in each.
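A rough esxcli sketch of that layout for one of the NICs (portgroup, vmk, vmnic and adapter names and the address are placeholders; repeat per NIC):

    # one portgroup per iSCSI NIC, pinned to a single active uplink with no standbys
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic1

    # one vmkernel port per portgroup, with its own IP on the iSCSI VLAN
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.21 --netmask=255.255.255.0 --type=static

    # bind the vmkernel port to the software iSCSI adapter, then rescan
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli storage core adapter rescan --adapter=vmhba33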
DiskStation: 413j,1815+
DSM 6.1-14871
syno.dustin
Sorcerer
Posts: 2244
Joined: Thu Oct 29, 2015 11:03 pm
Location: Seattle, WA

Re: ESXi 6 iSCSI vs NFS

Postby syno.dustin » Wed May 18, 2016 5:36 pm

baron164 wrote:What I found was that with the iSCSI Round Robin policy, VMware only uses one NIC at a time. As a test I cranked the Round Robin count up to 10,000 and used the resource monitoring in vCenter to watch it use NIC1, then NIC2, then NIC3, then NIC4, and then swing back around to NIC1 and start over. It would not use all four NICs at once, and I was unable to find any way of having multiple simultaneously active paths with a Synology.


Did you follow the VMware MPIO tutorial on the Synology website? If so, did it show four connections to the target in Storage Manager, instead of just one?
If you need technical support please use this form: https://account.synology.com/support/support_form.php
Synology does not consistently browse this forum for technical support, feature requests, or any other inquiries as it notes at the top of the page. Please use the proper channels when you need help from someone at Synology.
brimur
Apprentice
Posts: 83
Joined: Sat Jan 12, 2013 8:22 pm

Re: ESXi 6 iSCSI vs NFS

Postby brimur » Thu May 19, 2016 8:55 am

He mentioned teaming in the first post. If that's the case, then he was pointing at one IP address, which will not work for round robin. Each NIC used for iSCSI on the Synology needs its own IP to benefit from MPIO. Teaming, trunking, and link aggregation all reduce MPIO performance in a SAN.
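On the ESXi side that means adding each of the DiskStation's iSCSI IPs as its own discovery address, so MPIO gets one path per address; addresses and adapter name below are placeholders:

    # one send-target entry per NAS interface, then rescan
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.11:3260
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.12:3260
    esxcli storage core adapter rescan --adapter=vmhba33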
DiskStation: 413j,1815+
DSM 6.1-14871
Pentangle
I'm New!
Posts: 7
Joined: Tue Jan 05, 2016 9:56 pm

Re: ESXi 6 iSCSI vs NFS

Postby Pentangle » Fri Aug 12, 2016 4:12 pm

brimur wrote:iSCSI implemented correctly using MPIO will scale with every NIC you use in VMware, and the LUNs are easy to maintain (snapshots, for example). I am currently using ESXi 6 and MPIO (2 NICs) over iSCSI to my DS1815+ and see over 200MB/s read and write. If I used 3 NICs I would probably see 300MB/s read and write. The previous poster may not have known how to implement it correctly, but it is indeed possible: you need to create a vmkernel port for each NIC and disable the other NICs in each.


^^^ THIS. I've used a Synology RS2212RP+ since 2012 and an RS3614RP since 2015, and have managed to run 2 or 4 simultaneous gigabit connections to the Synology with no problem.

HOWEVER, I have recently retired the RS2212RP+ and dedicated it to testing, and my observations are:

1) When using iSCSI in block mode (a block-level LUN with no volume), the on-board RAM is NOT used for caching. I believe this is one of the reasons it is relatively slow.
2) NFS is **CONSIDERABLY** faster than iSCSI, irrespective of what you do.
3) iSCSI appears to flicker the physical disk activity lights considerably faster than NFS despite being much slower, which might indicate much smaller read/write blocks.

Mike.
StevenRodenburg1
I'm New!
Posts: 3
Joined: Sun May 22, 2016 9:33 pm

Re: ESXi 6 iSCSI vs NFS

Postby StevenRodenburg1 » Fri Oct 14, 2016 1:14 pm

baron164 wrote:I ended up ditching ESXi and going to Hyper-V, because with 4 NICs dedicated to iSCSI traffic, when using the VMware software iSCSI adapter it is impossible to get more than one NIC's worth of throughput.

No, it's not. It is perfectly possible; no problem at all. You are doing it wrong.

It's always a shame when people blame a manufacturer (VMware in this case) when they should only blame themselves for not reading the documentation and implementing a technology correctly.

One word: MPIO
Meruem
Apprentice
Posts: 83
Joined: Sun Jun 01, 2014 9:04 pm

Re: ESXi 6 iSCSI vs NFS

Postby Meruem » Fri Jan 20, 2017 6:49 am

brimur wrote:iSCSI implemented correctly using MPIO will scale with every NIC you use in VMware, and the LUNs are easy to maintain (snapshots, for example). I am currently using ESXi 6 and MPIO (2 NICs) over iSCSI to my DS1815+ and see over 200MB/s read and write. If I used 3 NICs I would probably see 300MB/s read and write. The previous poster may not have known how to implement it correctly, but it is indeed possible: you need to create a vmkernel port for each NIC and disable the other NICs in each.


iSCSI is slower than NFS in my testing. Creating an inflexible bucket container is a dumb idea to begin with; Btrfs does clones. I see no reason to use iSCSI over NFS unless you want a headache.
mbu10
Novice
Posts: 44
Joined: Fri Jul 02, 2010 8:27 am

Re: ESXi 6 iSCSI vs NFS

Postby mbu10 » Mon Jan 23, 2017 5:09 pm

I always laugh. I've run VMware since they started and converted to iSCSI a couple of years ago, with multiple dedicated (3-4) NICs on each ESX machine. My setup:

1) Set up the LUN.
2) Do a detection/rescan; under Paths you should see every IP address on the storage.
3) Select Round Robin for the storage under Properties.
4) Make sure all the NICs are applied as active on the vmkernel ports.
5) Enable and install the VM-supported storage plugin (it allows copies from LUN to LUN to happen on the storage side; I get between 200-300MB a sec).
6) Copy a large file internally on the machine, say a 12GB movie file, then copy it to the desktop and back from the desktop. This shows internal (iSCSI-only) performance, because over the network you are dealing with non-MPIO connections.

So this is using MPIO, and it is fully shared across all NICs on both sides. On internal copies I can get over 250MB a sec to a single SSD (which goes over the iSCSI MPIO connection), and without trying I could get more if I used more than one SSD and more than three NICs on the ESX end.
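The storage plugin described above is presumably VAAI hardware acceleration; whether the array is actually offloading the LUN-to-LUN copies can be checked per device, with the naa identifier as a placeholder:

    # Clone/XCOPY status is what matters for offloaded LUN-to-LUN copies
    esxcli storage core device vaai status get --device=naa.XXXXXXXXXXXXXXXX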
