Best Link Aggregation mode for iSCSI

All questions regarding the Synology system as an iSCSI Target can go here.
manleyjw
I'm New!
Posts: 4
Joined: Mon Jun 19, 2017 10:56 pm

Best Link Aggregation mode for iSCSI

Unread post by manleyjw » Wed Jun 06, 2018 6:40 pm

I'm planning to bond two of my 1Gig NICs on my RS3617 and connect the NICs to a Cisco 3650.

The iSCSI LUNs will be available to two ESX servers that have dedicated iSCSI NICs as well.

I have a VLAN configured for the iSCSI traffic to those servers, and that is the VLAN the bonded NICs will be attached to.

There are four different link aggregation choices for the bond. Which would be the best to use for this configuration?

dognose
Novice
Posts: 50
Joined: Sun Feb 28, 2016 2:03 am

Re: Best Link Aggregation mode for iSCSI

Unread post by dognose » Mon Jun 11, 2018 2:59 pm

You shouldn't use trunks for iSCSI access.

A trunk of two 1 Gbit NICs can only deliver 2 Gbit/s to two separate endpoints, because each individual flow gets hashed onto a single link. A single iSCSI initiator won't be able to utilize the trunk (at least not in terms of performance).

You should use independent IP addresses and iSCSI targets instead.

Then configure your iSCSI initiator for MPIO or MCS to get the 2 Gbit/s of throughput you are after.
(VMware only supports MPIO, iirc.)
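
Roughly, the ESXi side of that is just two send-target entries (one per Synology IP) plus round robin on the LUN. Something like the following sketch, where the vmhba number, IP addresses and naa ID are placeholders for your own values:

esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.50.11:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.50.12:3260
esxcli storage core adapter rescan --adapter=vmhba64
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR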

Squozen
Guru
Posts: 1561
Joined: Wed Jan 09, 2013 1:35 am

Re: Best Link Aggregation mode for iSCSI

Unread post by Squozen » Mon Jun 11, 2018 3:09 pm

What dognose says is correct. Look for the VMware KB article about how to do this properly. From memory, you need to create two iSCSI VMkernel ports on the same vSwitch and bind each of them to only one of your physical NICs.
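
If it helps, the CLI version of that binding looks roughly like this (again from memory, so check the KB; the port group names, vmk/vmnic numbers and vmhba are placeholders):

esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.21 --netmask=255.255.255.0 --type=static
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1

Then repeat for vmk2 on an iSCSI-B port group with vmnic3 as its only active uplink.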

synology_ukman
Seasoned
Posts: 569
Joined: Fri Oct 26, 2012 4:51 pm

Re: Best Link Aggregation mode for iSCSI

Unread post by synology_ukman » Thu Jul 12, 2018 1:38 pm

dognose wrote:
Mon Jun 11, 2018 2:59 pm
You shouldn't use trunks for iSCSI access.

A trunk of two 1 Gbit NICs can only deliver 2 Gbit/s to two separate endpoints. A single iSCSI initiator won't be able to utilize the trunk (at least not in terms of performance).

You should use independent IP addresses and iSCSI targets instead.

Then configure your iSCSI initiator for MPIO or MCS to get the 2 Gbit/s of throughput you are after.
(VMware only supports MPIO, iirc.)
OK, but you can trunk and use MPIO at the same time if you wish.
For example, with four ports to the switch you could trunk them as 2 x 2 ports and then run MPIO across the two trunked links, say to an iSCSI initiator server that only has two free NICs.
People sometimes do this to reduce the number of IP addresses.
The result is that you get more resilience on the path to the switch.
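
On the Catalyst side that is just two port-channels, for example (interface numbers, channel-group numbers and VLAN are only examples, and LACP here assumes the 802.3ad bond mode on the Synology):

interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode active
interface range GigabitEthernet1/0/3 - 4
 switchport mode access
 switchport access vlan 100
 channel-group 2 mode active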

dognose
Novice
Posts: 50
Joined: Sun Feb 28, 2016 2:03 am

Re: Best Link Aggregation mode for iSCSI

Unread post by dognose » Fri Jul 20, 2018 10:10 pm

synology_ukman wrote:
Thu Jul 12, 2018 1:38 pm
OK, but you can trunk and use MPIO at the same time if you wish.
For example, with four ports to the switch you could trunk them as 2 x 2 ports and then run MPIO across the two trunked links, say to an iSCSI initiator server that only has two free NICs.
People sometimes do this to reduce the number of IP addresses.
The result is that you get more resilience on the path to the switch.
Yes, that should work. At least you get additional resilience at one of the two ends without having to double the number of IP addresses used.

But besides that, I don't see any advantage over simply having four target IPs connected to by the two initiators. Therefore I would apply the KISS principle and go with four target IPs.

synology_ukman
Seasoned
Posts: 569
Joined: Fri Oct 26, 2012 4:51 pm

Re: Best Link Aggregation mode for iSCSI

Unread post by synology_ukman » Mon Jul 23, 2018 10:38 am

Should work? It does work.

dognose
Novice
Posts: 50
Joined: Sun Feb 28, 2016 2:03 am

Re: Best Link Aggregation mode for iSCSI

Unread post by dognose » Sat Aug 04, 2018 5:21 pm

synology_ukman wrote:
Mon Jul 23, 2018 10:38 am
Should work? It does work.
Yes, but there is no advantage to it. If you are using MPIO, it doesn't matter whether you run it over four physical connections or two trunked connections: resilience stays the same, four physical paths of which up to three may fail.

Except that with four physical connections and MPIO a single initiator could get roughly 480 MB/s (four paths at ~120 MB/s each), while with two trunked connections and MPIO only about 240 MB/s is possible, because each trunk will typically hash a single initiator's session onto just one of its links.

Also, why overcomplicate the layout? You don't create two RAID 1 arrays and then a RAID 0 on top of them - you create a RAID 10 right away.
