Had some rather nifty issues with a DirectAccess array the other week – so I thought I would return here and blog it!
In short, everything was working fine apart from one very small part: “Manage Out” via the IPHTTPS tunnel wasn’t functioning.
What was happening was that clients were connecting the IPHTTPS tunnel before Teredo was up. Whilst IPHTTPS is connected it is preferred over Teredo (or 6to4), and it only disconnects after a random amount of time.
Clients could route traffic down this tunnel, so connecting to intranet services was fine. Both tunnels (Intranet and Infrastructure) were up and everything worked apart from “Manage Out”. Routes were all fine, and the client-side Windows Firewall was all fine.
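If you want to check for yourself which transition technology a client is actually sitting on while troubleshooting, the Teredo, IPHTTPS and 6to4 state can all be pulled with standard netsh queries. A minimal sketch (Python is only used here to wrap the commands; running them straight from an elevated prompt on the client works just as well):

```python
import subprocess

# Standard netsh queries that show which IPv6 transition technology
# a DirectAccess client is currently using.
checks = [
    ["netsh", "interface", "teredo", "show", "state"],
    ["netsh", "interface", "httpstunnel", "show", "interfaces"],
    ["netsh", "interface", "6to4", "show", "state"],
]

for cmd in checks:
    print("### " + " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)
```

In the scenario above you would expect to see the IPHTTPS interface active while Teredo sits dormant.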
Cue some hair tearing etc etc.
Eventually raised a call with MS, and in short it was VMware causing the issue.
To quote MS (slightly edited to make sense outside of the email trail):
We have had similar cases before where VMWare template provisioning was used for the UAG hosts, and can confirm that the problem was down to the template creating duplicate adapters that would affect tunnel bindings when configuring UAG DA. And the solution was to rebuild using standard media which completely addressed the issue.
Ouch. Oh well, rebuild we must (I’ll update once they are done!)
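As an aside, if you want to check a template-built host for the duplicate (ghost) adapters before committing to a rebuild, one rough way is to enumerate the network adapter class key in the registry, since Device Manager hides non-present devices by default. A quick sketch, assuming it is run elevated on the UAG host; the class GUID is the standard Windows network adapter class:

```python
import winreg

# Network adapter device class GUID (a well-known Windows constant).
NET_CLASS = (r"SYSTEM\CurrentControlSet\Control\Class"
             r"\{4D36E972-E325-11CE-BFC1-08002BE10318}")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NET_CLASS) as cls:
    subkey_count = winreg.QueryInfoKey(cls)[0]
    for i in range(subkey_count):
        name = winreg.EnumKey(cls, i)
        if not name.isdigit():        # skip the protected "Properties" subkey
            continue
        try:
            with winreg.OpenKey(cls, name) as adapter:
                desc, _ = winreg.QueryValueEx(adapter, "DriverDesc")
                print(name, desc)
        except OSError:
            continue                  # value missing or access denied
```

More adapter instances listed than NICs actually presented to the VM is a reasonable hint that the template has left ghosts behind.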
Had some other interesting information too regarding VMware, Unicast and DA NLB. I’ll update my original post here.
A customer of mine utilises ESX 4.1 to host their UAG and DirectAccess solution.
VMware KB article 1006778 (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006778) dictates that for a Unicast NLB to function within VMware, the following statements must be true:
- Need two machines running Windows Server 2003 or later
- Each machine needs to have at least one network card and at least one fixed IP address
- Two adapters in each machine is recommended for best performance
- One adapter mapped to the real IP address (Microsoft calls this the Dedicated IP) and one mapped to the ‘virtual’ IP address (Microsoft calls this the Cluster IP)
- One benefit of unicast mode is that it works out of the box with all routers and switches (since each network card only has one MAC address)
- In unicast mode, since all hosts in the cluster have the same MAC and IP address, they do not have the ability to communicate with each other via their NLB network card
- A second network card is required for communication between the servers
As UAG DirectAccess requires both an internal and an external NLB, there are, by default, no non-NLB interfaces. This is contrary to the requirements set by VMware, so we must define a tertiary NIC just for NLB and TMG array communication.
This has been applied as per the following diagram:
Each UAG server has had the tertiary NIC defined, as can be seen above. These are configured with a private IP range; a quick sketch of the sort of command used is below.
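For completeness, the intra-array NICs only need a static address from that private range, with no default gateway or DNS. Something along these lines, where the adapter name and addressing are purely placeholder values for illustration:

```python
import subprocess

# Placeholder values -- substitute the real NIC name and whatever
# private range you have chosen for the intra-array network.
# Run elevated on each UAG server.
NIC_NAME = "Intra-Array"
IP_ADDRESS = "192.168.250.1"
SUBNET_MASK = "255.255.255.0"   # deliberately no default gateway on this NIC

subprocess.run(
    ["netsh", "interface", "ip", "set", "address",
     "name=" + NIC_NAME, "static", IP_ADDRESS, SUBNET_MASK],
    check=True,
)
```

The second array member just takes another address from the same range on its own tertiary NIC.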
This should be enough to make Unicast NLB work. If not, drop me a line, as I have some further steps I put in (that I don’t think are required).
UPDATE: After speaking with Microsoft on something entirely different, it has come to light that the above solution, although workable, is fully NOT SUPPORTED by Microsoft. In short, the only supported configuration is to have both UAG servers on the same VMware host (in which case you don’t require the intra-array link). Note that this is currently also the case for Multicast scenarios (which are supposedly supported in general; still waiting for a TechNet update).