Regarding this exchange: I'm still not able to reconcile what you are saying with what I found on VMware's website. For example:
So with LBT, in our case all NFS traffic from all virtual machines goes through one vmkernel port group, and then a dvUplink is chosen before it leaves for the physical switch. If dvUplink1 is saturated to 75%, for example, the traffic can start to go through dvUplink2. It's using the same vmkernel port either way. So even with one subnet, I'm still getting LBT to work, because dvUplink2 still gets used. Correct?
Sadly no. LBT would have to migrate the entire vmkernel port to the other uplink (you can see which uplink it is using with ESXTOP). If you have only one vmkernel port on a single subnet, it will never consume more than one associated uplink.
In this article:
http://blogs.vmware.com/performance/2010/12/vmware-load-based-teaming-lbt-performance.html
The author tests a vDS with two dvUplinks and a single DV Port Group. He says that when one dvUplink hits the saturation threshold of 75% (and not before), the other uplink is brought into use. Here is the specific quote that I'm referring to:
We then reconfigured the vDS with two dvUplinks and a single DV Port Group to which all the vNICs of the VMs were mapped. The DV Port Group was configured with the LBT teaming policy. We used the default settings of LBT, which are primarily the wakeup period (30 seconds) and link saturation threshold (75%). Our goal was to evaluate the efficacy of the LBT policy in terms of load balancing and the added CPU cost, if any, when the same benchmark load of 30,000 SPECweb2005 support sessions was applied.
Before the start of the test, we noted that the traffic from all the VMs propagated through the first dvUplink. Note that the initial affiliation of the vNICs to the dvUplinks is made based on the hash of the virtual switch port IDs. To find the current affiliations of the vNICs to the dvUplinks, run the esxtop command and find the port-to-uplink mappings in the network screen. You can also use the “net-lbt” tool to find affiliations as well as to modify LBT settings.
Phase 1: Because all the virtual switch port IDs of the four VMs were hashed to the same dvUplink, only one of the dvUplinks was active. During this phase of the benchmark ramp-up, the total network traffic was below 7.5Gbps. Because the usage on the active dvUplink was lower than the saturation threshold, the second dvUplink remained unused.
Phase 2: The benchmark workload continued to ramp up and when the total network traffic exceeded 7.5Gbps (above the saturation threshold of 75% of link speed), LBT kicked in and dynamically remapped the port-to-uplink mapping of one of the vNIC ports from the saturated dvUplink1 to the unused dvUplink2. This resulted in dvUplink2 becoming active. The usage on both the dvUplinks remained below the saturation threshold.
Phase 3: As the benchmark workload further ramped up and the total network traffic exceeded 10Gbps (7.5Gbps on dvUplink1 and 2.5Gbps on dvUplink2), LBT kicked in yet again, and dynamically changed port-to-uplink mapping of one of the three active vNIC ports currently mapped to the saturated dvUplink.
Phase 4: As the benchmark reached a steady state with the total network traffic exceeding little over 13Gbps, both the dvUplinks witnessed the same usage.
It seems that only one vmkernel port is using more than one dvUplink, after the saturation threshold has been reached and the wakeup period has passed. Is he wrong? Am I misunderstanding something?
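To check whether I'm reading those phases right, here's a rough sketch of the LBT decision as I understand it. This is just my own Python pseudocode, not VMware's actual algorithm: the port names, traffic numbers, and the choice of which port gets moved are all made up, and the only values taken from the article are the 75% saturation threshold and the 30-second wakeup period.

```python
# A minimal sketch of one LBT rebalance step as I understand it from the
# phases above. The 75% threshold and 30-second wakeup come from the article;
# everything else (names, numbers, which port gets moved) is my own guess.

LINK_SPEED_GBPS = 10.0
SATURATION_THRESHOLD_GBPS = 0.75 * LINK_SPEED_GBPS  # 7.5 Gbps
WAKEUP_PERIOD_S = 30  # LBT only re-evaluates this often


def lbt_rebalance(uplinks, port_to_uplink, port_traffic_gbps):
    """One wakeup cycle: if an uplink is above the saturation threshold,
    remap one of its ports to the least-loaded uplink."""
    # Work out how much traffic each uplink is currently carrying.
    uplink_load = {u: 0.0 for u in uplinks}
    for port, uplink in port_to_uplink.items():
        uplink_load[uplink] += port_traffic_gbps[port]

    for uplink in list(uplink_load):
        if uplink_load[uplink] > SATURATION_THRESHOLD_GBPS:
            # Pick one port on the saturated uplink (the article doesn't say
            # how LBT chooses, so this pick is arbitrary) and move it to the
            # least-loaded uplink.
            victim = next(p for p, u in port_to_uplink.items() if u == uplink)
            target = min(uplink_load, key=uplink_load.get)
            if target != uplink:
                port_to_uplink[victim] = target
                uplink_load[uplink] -= port_traffic_gbps[victim]
                uplink_load[target] += port_traffic_gbps[victim]
    return port_to_uplink


# Phase 1 -> 2 from the article: four vNIC ports all hashed to dvUplink1,
# total traffic just over 7.5 Gbps, so LBT moves one port to dvUplink2.
mapping = {"vnic1": "dvUplink1", "vnic2": "dvUplink1",
           "vnic3": "dvUplink1", "vnic4": "dvUplink1"}
traffic = {"vnic1": 2.0, "vnic2": 2.0, "vnic3": 2.0, "vnic4": 1.6}
print(lbt_rebalance(["dvUplink1", "dvUplink2"], mapping, traffic))
```

If that reading is roughly right, then a single vmkernel port would be only one entry in that mapping and could only ever ride one uplink at a time, while the blog's four vNIC ports can be spread across both. Is that the distinction you're making?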