Welcome to StarWind Forum!
You can go with 1x iSCSI and 1x Sync link (physically separated); ideally, 2x iSCSI and 2x Sync, preferably direct connections.

Q1: Why do I need virtual switches for iSCSI and Sync/HB if both traffic types are going to be handled at the host level?

If you are deploying the CVM, you need vSwitches to connect the physical adapters to the VM. If you are using the Windows-native application, there is no need for a vSwitch for iSCSI and Sync.
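To illustrate, here is a minimal PowerShell sketch of what that looks like on Hyper-V. The switch, adapter, and VM names ("vSwitch-iSCSI-1", "iSCSI-NIC-1", "StarWind-CVM") are placeholders, not values from the guide:

```powershell
# Bind an external vSwitch to a dedicated physical NIC.
# -AllowManagementOS $true keeps a host vNIC on the switch so the host's
# own iSCSI initiator can also reach the CVM (an assumption -- drop it
# if the host does not need access to this network).
New-VMSwitch -Name "vSwitch-iSCSI-1" -NetAdapterName "iSCSI-NIC-1" -AllowManagementOS $true

# Give the CVM a vNIC on that switch so the guest can carry the traffic.
Add-VMNetworkAdapter -VMName "StarWind-CVM" -SwitchName "vSwitch-iSCSI-1" -Name "iSCSI-1"
```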
Q2: If I have 2x iSCSI + 2x Sync NICs (4 NICs in total), would I create two external virtual switches for each traffic type (a total of 4 virtual switches)? I take it I cannot use a SET team or any hardware-level teaming like LACP; the guide makes no mention of it (or it escaped my attention).

Best practices say no teaming. Please do not use teaming for iSCSI and Sync; that is why the guide does not suggest it. Use 4 separate vSwitches.
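As a sketch, the four-switch layout could be scripted like this, with one external vSwitch per physical NIC and no SET or LACP team anywhere in the path (adapter names are assumptions; check yours with Get-NetAdapter):

```powershell
# One external vSwitch per physical NIC -- no teaming of any kind.
$switchToNic = [ordered]@{
    "vSwitch-iSCSI-1" = "iSCSI-NIC-1"
    "vSwitch-iSCSI-2" = "iSCSI-NIC-2"
    "vSwitch-Sync-1"  = "Sync-NIC-1"
    "vSwitch-Sync-2"  = "Sync-NIC-2"
}
foreach ($pair in $switchToNic.GetEnumerator()) {
    # $false keeps the management OS off the switch; flip it to $true on
    # the iSCSI switches if the host initiator connects through them.
    New-VMSwitch -Name $pair.Key -NetAdapterName $pair.Value -AllowManagementOS $false
}
```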
Q3: In light of Q1, what virtual machine would I connect to the virtual switches, and what would be its role in the specific use case described in the guide?

As in Q1, the CVM is the VM you attach to those vSwitches. On the physical side, you can connect the hosts through switches, but those switches must be redundant; normally, direct links for iSCSI and Sync should do it.
Statistics: Posted by yaroslav (staff) — Sun Dec 22, 2024 8:20 pm