Friday, 17 July 2015

Configuring OSPF / BGP Authentication within NSX and RouterOS

A little tip Andy Kennedy (Twitter) gave in one of his internal presentations was to enable authentication when configuring a routing protocol during a proof of concept, so that a misconfiguration doesn't redistribute routes into a production environment.  This is just a quick post to show you how to configure authentication when using OSPF and BGP within NSX and RouterOS.  My lab environment currently looks like this:


I have OSPF configured between my DLR (PA-DLR-01) and my Edge (PA-EDGE-01), and BGP between my Edge (PA-EDGE-01) and my physical router (MikroTik), which runs RouterOS.  To configure OSPF authentication on the DLR, simply fire up the vSphere Web Client and navigate to Networking & Security -> NSX Edges -> the DLR you want to configure -> Routing -> OSPF, and then modify the area definition.  Change the authentication type to Password, enter a password, click OK and then publish the changes:


At this point routes should stop being distributed into the Edge, as we need to configure the same password on the Edge (PA-EDGE-01).  Once it's configured on the Edge, routes should start populating again.  We can also run show ip ospf interface on the Edge and see that authentication is enabled:
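For reference, the verification from the Edge CLI looks like this (run over SSH or from the console; the exact interfaces listed will depend on your setup):

```shell
# On the NSX Edge CLI (SSH or console).
# The per-interface output includes the authentication type configured on the area.
show ip ospf interface

# Optionally confirm the OSPF adjacency with the DLR has re-formed:
show ip ospf neighbor
```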


To configure authentication for BGP on the edge navigate to Networking & Security -> NSX Edges -> Edge you want to configure -> Routing -> BGP and edit the neighbour you want to configure for authentication.  In the password field enter the required password, click OK and then publish the changes:


Route distribution will stop from the Edge into the physical router and vice versa until we configure authentication on the physical router.  To do this in RouterOS, simply log in and navigate to Routing -> BGP -> Peers and modify the Edge peer.  In the TCP MD5 Key field enter the same password you used on the Edge and click OK:
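If you'd rather use the RouterOS terminal than WinBox, the same change can be sketched from the CLI.  This assumes RouterOS v6 and that the peer entry is named PA-EDGE-01 (the peer name and password below are placeholders):

```shell
# RouterOS v6 CLI equivalent of setting the TCP MD5 Key in WinBox.
# "PA-EDGE-01" and the key are placeholders - use your own peer name and password.
/routing bgp peer set [find name="PA-EDGE-01"] tcp-md5-key="SuperSecret1"

# Verify the session re-establishes:
/routing bgp peer print status
```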


Once you do this, routes should start redistributing again.  Now, if the Edge uplink interface is accidentally connected to the wrong VLAN-backed portgroup and a physical router there happens to have the same neighbour IP address, we won't accidentally redistribute routes into the production environment.

Monday, 8 June 2015

Configuring BGP between RouterOS and an NSX Edge

I use a RouterBOARD RB715G in my homelab as my router and layer 3 switch, and it has been absolutely rock solid since I purchased it.  I'm probably not even using 10% of its functionality but it's definitely worth the money.  When testing NSX and creating logical networks I always end up creating static routes into the NSX environment so I can test connectivity from my physical workstation, so I decided to configure BGP from the Edge to the MikroTik.  The diagram below shows my current network layout.  I have OSPF configured from the distributed logical router (PA-DLR-01) to the Edge (PA-Edge-01).


Below are the results from the show ip route command on both the Edge and the DLR:

PA-Edge-01:


PA-DLR-01:


Below is the route information from my Mikrotik router:


As you can see OSPF is populating the routes from PA-DLR-01 into PA-Edge-01.  Now it's time to configure BGP between PA-Edge-01 and Mikrotik.  I'm not going to show you how to configure BGP on the Edge as there are numerous blog articles out there that document this process.  To configure BGP on the Mikrotik router log in and navigate to Routing and then BGP and edit the default entry in the Instances tab:


Ensure you have entered a router ID and that the Redistribute Connected and Redistribute Static options are ticked (or whatever you want to redistribute via BGP).  Navigate to the Peers tab and add a new entry:


Give the new entry a name and then enter the IP address of the Edge, which in my case is 10.201.1.41.  Enter the Remote AS number, set the Default Originate option to Always and click OK.  If everything is configured correctly, all routes directly connected to PA-DLR-01 (PA-WebTier, PA-AppTier and PA-DBTier) should appear in the MikroTik's route list, and any new networks directly connected to PA-DLR-01 should appear as well:
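For completeness, here's a sketch of the same steps as RouterOS v6 CLI commands.  The router ID, AS numbers and peer name are placeholder values, as the real ones depend on your environment:

```shell
# Sketch of the WinBox steps above as RouterOS v6 CLI commands.
# Router ID, AS numbers and the peer name below are placeholders.
/routing bgp instance set default router-id=10.201.1.254 as=65001 \
    redistribute-connected=yes redistribute-static=yes
/routing bgp peer add name="PA-EDGE-01" remote-address=10.201.1.41 \
    remote-as=65002 default-originate=always

# Once the session establishes, the BGP-learned routes should appear here:
/ip route print where bgp
```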

PA-Edge-01


PA-DLR-01


Mikrotik:


Thanks to Geordy Korte (Blog | Twitter) for the assistance in configuring the Mikrotik router.

Monday, 1 June 2015

Decommission old vCenter Server 6.0 Appliance from Platform Services Controller

I deployed a temporary vCenter Server 6.0 Appliance to my existing lab to test the cross vCenter vMotion functionality across three vCenter appliances.  Once I validated this I no longer had the need for the VCSA and didn't want it taking up valuable resources in my lab.  My OCD also kicked in and I wanted to get rid of the associated error message that displayed within the web client:


As a good techie I searched for the solution but couldn't find anything.  I posted the issue to Twitter and the legend that is William Lam (Blog | Twitter) was kind enough to post a link to a VMware KB article that explained the command:


If you've done what I've done and forgotten which VCSA is the PSC, the logon banner should tell you when you SSH in:



After following the KB article I was able to successfully remove the decommissioned VCSA from the PSC, and I no longer receive the warning within the Web Client.  I did have a slight hiccup: the name of the VCSA in the error above was in capitals, but inside the component manager database it was all lower case, so the name in the command had to be in lower case.
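For reference, the command from the KB article takes roughly this shape on a vSphere 6.0 appliance (the node name and credentials below are placeholders from my lab):

```shell
# Run on the Platform Services Controller to unregister a decommissioned node.
# The hostname and credentials are placeholders - and note the node name is
# lower case, matching how it is stored in the component manager database.
cmsso-util unregister --node-pnid temp-vcsa.lab.local \
    --username administrator@vsphere.local --passwd 'VMware1!'
```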

Tuesday, 28 April 2015

North East VMUG - Thursday 21st May

The next North East VMUG will be held on Thursday 21st May at the Centre for Life in Newcastle:

International Centre for Life 
Marlborough Suite - Conference and Banqueting 
Times Square
Newcastle
NE1 4EP

You can register for the event here.

The agenda is currently as follows:

9:30      Event Registration
10:00    Welcome & Agenda – VMUG Leadership
10:15    VMware Update – Michael Armstrong (Blog | Twitter)
11:00    vCloud Air DR (DRaaS) – Dave Hill (Blog | Twitter)
12:00    Lunch
12:30    Tegile – Gold Sponsor Presentation
13:30    Break
13:45    Data Centre Migration Project – Alan Burns
14:45    Break
15:00    Reflections on Convergence  – David Thomas
15:45    Break
16:00    EVO:RAIL – Mike Laverick (Blog | Twitter)
16:45    Closing statement and raffle
17:00    Onwards to vBeers

Big thanks to our Gold sponsor Tegile and Silver sponsor 10Zig

Tuesday, 7 April 2015

NSX Fundamentals Training events in May

There are some free NSX Fundamentals training events planned in the UK for May by the Networking and Security Business Unit at VMware, if you want to understand more about NSX.  The agenda for these events is as follows:

10:00 - Welcome & Introduction
10:15 - Overview and Architecture of VMware NSX
12:00 - Lunch
12:30 - The scalable, automated Data Centre network - Arista
13:30 - Securing the Next-Generation Data Centre - Palo Alto Networks
14:30 - Deep Security - Trend Micro
15:30 - Wrap and close

They will include plenty of whiteboarding and demos, and the more audience participation the better.

Current dates, locations and registration links are:

12th May - Dublin - Register here
18th May - Manchester - Register here
19th May - Edinburgh - Register here
21st May - London - Register here

I'll be presenting at the Manchester and Edinburgh events, so register and I'll see you there.

Saturday, 4 April 2015

Nested vSphere 6.0 with VMware Tools as standard

Just a quick post on something I noticed.  I run a lot of nested ESXi hosts in my homelab for testing, as it's quicker and cheaper to stand these up than to purchase new hardware.  With vSphere 6.0, when you create a nested ESXi host, VMware Tools is automatically enabled inside the guest OS without you having to install the fling:


Friday, 3 April 2015

Performing a cross vCenter vMotion with vSphere 6.0

With the release of vSphere 6.0 it was time to upgrade and completely redesign the homelab.  I wanted to start testing design scenarios around NSX and multiple vCenter servers, but the first thing I wanted to try was cross vCenter vMotion, a new feature in vSphere 6.0.  Cross vCenter vMotion has a few requirements:
  • vCenter 6.0 and greater
  • SSO Domain
    • Same SSO domain to use the UI
    • Different SSO domain possible if using API
  • 250 Mbps network bandwidth per vMotion operation
  • L2 network connectivity on VM portgroups (IP addresses are not updated)
Within my lab I have two physical ESXi 6 hosts, each running four nested ESXi 6 hosts, a vCenter Server Appliance and an NSX Manager.  On each physical host the four virtual ESXi 6 hosts are added to two clusters (MGMT and Compute), one set for the PA (Palo Alto) site and one for the NY (New York) site.  Within the New York datacenter I have a TEST01 VM which I will migrate to the Palo Alto datacenter.  A picture says a thousand words:


All hosts are connected to the same back-end storage array, so I will not be migrating the storage at the same time, and I've already configured VMkernel ports for vMotion.  Right-click on the VM and select Migrate:


I'm just going to change the compute resource, so ensure that option is selected and click Next:


You now need to select the host that you want to migrate the VM to.  You have the option of selecting a specific host, cluster, resource pool or vApp.  In this example I'm going to move it from the NY-Compute cluster to the PA-Compute cluster.  In order for the clusters to appear I had to enable DRS and EVC on the clusters, even though they had exactly the same hardware.  Once the compatibility checks succeed click Next:


Select a folder on the remote vCenter to place the virtual machine in and click Next:


Select the network on the remote vCenter that you want to connect the VM to.  It doesn't have to have the same portgroup name, but it obviously needs to be backed by the same VLAN, otherwise you'll lose network connectivity and have to re-IP the VM.  In this example I'm moving the VM from the Compute-VLAN8-NY-Servers portgroup to Compute-VLAN8-PA-Servers.  Once the compatibility checks have succeeded click Next:


Select the vMotion priority and click Next:


Check the destination is correct, click Finish and watch the magic happen:


Once the vMotion has completed the VM will now reside in the new cluster attached to the new vCenter:


Only one ping was dropped throughout the entire process:


The task shows completed successfully and the hosts the VM was migrated from and to:


After presenting to numerous customers on what's new in vSphere 6.0, cross vCenter vMotion was definitely a welcome feature.  So go update your environment and take advantage of all the new features.