Monday, 22 February 2021

HCX Mobility Optimized Networking Policy Routes

With the R145 release of HCX on 30th October 2020, VMware Cloud on AWS customers were treated to some great new functionality at no additional cost: Replication Assisted vMotion, Application Path Resiliency, TCP Flow Conditioning, Mobility Groups and my personal favourite, Mobility Optimized Networking. I'm not going to go into too much detail around Mobility Optimized Networking (MON) since Patrick Kremer (Blog | Twitter) has covered it extensively here.

On an internal Slack channel, the following question was asked:

When we migrate a VM that resides on a stretched layer 2 network from on-premises into VMC without enabling MON, any traffic that needs to egress that network, whether destined for VMs on-premises, VMs within VMC or the internet, has to go via on-premises. In my lab I have two VMs currently on-premises in the same VLAN:

MigrateVM-12 -
MigrateVM-13 -

If I SSH into MigrateVM-12 and ping MigrateVM-13, I see my latency is <1ms:

If I ping a workload currently running in VMC, you see that my latency is ~100ms as the traffic has to route from Chicago to Frankfurt via a Route Based VPN:

If I ping an IP address out on the internet, you see that my latency is ~2ms:
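These latency checks can be scripted rather than read off by hand. A minimal sketch, assuming the Linux `iputils` ping summary format (e.g. `rtt min/avg/max/mdev = 0.311/0.418/0.531/0.077 ms`), that extracts the average round-trip time:

```python
import re
import subprocess

def avg_rtt_from_ping_output(output: str) -> float:
    """Parse the average round-trip time (ms) from a Linux ping summary line."""
    match = re.search(r"= [\d.]+/([\d.]+)/", output)
    if match is None:
        raise ValueError("no rtt summary found; host may be unreachable")
    return float(match.group(1))

def measure_avg_rtt(host: str, count: int = 4) -> float:
    """Run ping against a host and return the average RTT in milliseconds."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    )
    return avg_rtt_from_ping_output(result.stdout)

# Example summary line from a sub-millisecond same-VLAN ping:
sample = "rtt min/avg/max/mdev = 0.311/0.418/0.531/0.077 ms"
print(avg_rtt_from_ping_output(sample))  # 0.418
```

Running `measure_avg_rtt` against each target before and after migration gives the same comparison made in the screenshots below.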

I'm now going to migrate MigrateVM-12 from on-premises into VMC:

With MON disabled, we see that when I ping MigrateVM-13 my latency is now ~100ms since traffic has to traverse the L2 extension, which is as expected:

If I ping the VM also running in VMC, my latency is ~200ms because in order to egress out of the network I have to go via the on-premises gateway and then come back into VMC, which once again, with MON disabled, is as expected:

Now, just for completeness, if I ping the internet address my latency is ~100ms because once again I have to traverse the L2 extension in order to egress out of the network, which is as expected:
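The pre-MON numbers line up with simple path arithmetic: every flow hairpins across the stretched segment before it can go anywhere. A rough sketch, with link latencies approximated from the tests above:

```python
# Approximate per-leg latencies observed in the lab (milliseconds).
L2_EXTENSION = 100   # on-premises <-> VMC stretched segment (Chicago <-> Frankfurt)
ONPREM_INTERNET = 2  # on-premises gateway -> internet

def path_latency(*legs: int) -> int:
    """Total latency is the sum of the legs the traffic traverses."""
    return sum(legs)

# Migrated VM -> on-premises VM: one traversal of the L2 extension.
print(path_latency(L2_EXTENSION))                    # 100  (~100ms observed)
# Migrated VM -> VMC VM, MON disabled: hairpin via on-premises and back.
print(path_latency(L2_EXTENSION, L2_EXTENSION))      # 200  (~200ms observed)
# Migrated VM -> internet, MON disabled: L2 extension, then on-premises egress.
print(path_latency(L2_EXTENSION, ONPREM_INTERNET))   # 102  (~100ms observed)
```

The same arithmetic explains why enabling MON collapses the VMC-to-VMC case to local latency: the two L2-extension legs disappear from the path.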

I'm now going to enable MON on MigrateVM-12 and test the same scenarios as above:

If I ping MigrateVM-13, my latency is still ~100ms as expected since I have to go back across the L2 extension to on-premises:

Now when I ping the VM running in VMC, my latency is <1ms. This is where the benefit of MON is realised, because I now have optimised routing to workloads running in VMC, whether on native network segments (i.e. network segments created directly in VMC) or on additional stretched networks that have been presented into VMC:

Back to the original question around internet traffic: we see that traffic egresses directly out of the SDDC rather than going via on-premises. We can tell because the latency is ~3ms, when it would be over 100ms if it went via on-premises:

The reason internet traffic egresses via the SDDC, even if the default route of 0.0.0.0/0 is being advertised into the SDDC, is to avoid asymmetric routing. For stretched networks, if we need internet traffic to always egress via on-premises, perhaps because we want to ensure the on-premises security posture is maintained whilst extending into the cloud, then we need to configure HCX policy routing. By default, all RFC1918 addresses are configured automatically to route via the source gateway rather than the cloud gateway. In order to route internet traffic via the source gateway as well, we need to add the default route of 0.0.0.0/0 to the policy routes. Within the HCX Network Extension Advanced menu, select Policy Routes:

We see that by default the source gateway is used for routing to the 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 (RFC1918) networks. We simply need to add 0.0.0.0/0 into the policy and ensure the option to Redirect to Peer is set to Allow:
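The policy-route decision can be illustrated with a short sketch (a hypothetical helper, not HCX code), assuming the default RFC1918 policy routes and then the 0.0.0.0/0 entry added above:

```python
import ipaddress

# Default HCX policy routes: RFC1918 destinations are redirected to the
# source (on-premises) gateway; everything else uses the cloud gateway.
policy_routes = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def gateway_for(destination: str, routes) -> str:
    """Return which gateway a MON-enabled VM would use for a destination."""
    dst = ipaddress.ip_address(destination)
    if any(dst in net for net in routes):
        return "source gateway (on-premises)"
    return "cloud gateway (SDDC)"

print(gateway_for("192.168.10.5", policy_routes))  # source gateway (on-premises)
print(gateway_for("8.8.8.8", policy_routes))       # cloud gateway (SDDC)

# After adding the default route, internet traffic also hairpins on-premises:
policy_routes.append(ipaddress.ip_network("0.0.0.0/0"))
print(gateway_for("8.8.8.8", policy_routes))       # source gateway (on-premises)
```

This mirrors the behaviour seen in the next test: once 0.0.0.0/0 is in the policy, an internet-bound ping matches a policy route and is redirected to the peer.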

Once we add that route and click Submit, we see that the latency from MigrateVM-12 to the internet is ~102ms, since traffic has to traverse the L2 extension and then egress via on-premises:

This ensures that any on-premises security policies are applied to VMs running in VMC that need to access the internet, whilst those VMs still benefit from optimised routing when communicating with other workloads running in the SDDC.
