Wednesday, 16 October 2019

Adding vCenter Cloud Gateway Proxy Exceptions

I was recently asked whether we could add proxy exceptions to the vCenter Cloud Gateway appliance so that local traffic, i.e. traffic to the on-premises vCenter, does not go through the corporate proxy. For those who are not aware, the vCenter Cloud Gateway enables Hybrid Linked Mode between an on-premises vCenter and a vCenter residing in VMC without the requirement to open specific ports from VMC back to on-premises. The only ports required are TCP/443 and TCP/902, as per the pre-requisites:


When checking the VAMI interface on the vCenter Cloud Gateway appliance, the only proxy options are enabling or disabling the proxy for HTTP, HTTPS and FTP; there is no option to add exceptions:


To add exceptions you need to use the API. To get the list of current proxy exceptions you can use:

GET https://<Cloud Gateway IP>:5480/rest/appliance/networking/noproxy

If you want to add entries you can do a PUT against the following URL:

PUT https://<Cloud Gateway IP>:5480/rest/appliance/networking/noproxy

with the following JSON:

{
    "servers": [
        "localhost",
        "127.0.0.1",
        "10.0.0.0" ** Add networks that require exception **
    ]
}

Replace or extend 10.0.0.0 with whichever networks require an exception; localhost and 127.0.0.1 are always added.
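If you prefer to script these calls, a few lines of Python will do it. The sketch below is illustrative rather than a verified procedure: the gateway address and credentials are placeholders, the appliance certificate is assumed to be self-signed (hence certificate verification is disabled), and it assumes the standard appliance session endpoint (/rest/com/vmware/cis/session) is available on the Cloud Gateway.

import requests

GW = "https://cloud-gateway.example.com:5480"          # placeholder Cloud Gateway address
URL = f"{GW}/rest/appliance/networking/noproxy"

s = requests.Session()
s.verify = False                                        # appliance certificate assumed self-signed

# Create an API session using the appliance credentials (placeholders)
r = s.post(f"{GW}/rest/com/vmware/cis/session", auth=("root", "VMware1!"))
r.raise_for_status()
s.headers["vmware-api-session-id"] = r.json()["value"]

# GET the current list of proxy exceptions
print(s.get(URL).json())

# PUT the new list of exceptions (localhost and 127.0.0.1 are always included)
body = {"servers": ["localhost", "127.0.0.1", "10.0.0.0", "192.168.1.0"]}
s.put(URL, json=body).raise_for_status()

# GET again to confirm the change
print(s.get(URL).json())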

In the below example I GET the current list of proxy exceptions:


I then PUT two new exceptions into the list (10.0.0.0 and 192.168.1.0):


Then finally I do another GET to show the full list:

Saturday, 24 August 2019

North East VMUG - Thursday 26th September

The next North East VMUG event has officially been announced and registration is open. The event will take place on Thursday 26th September at the Royal Station Hotel. It's conveniently located right next to Newcastle Central station and directions can be found here. The guys have pulled out all the stops and arranged a great selection of sponsor and community sessions. Just check out the list of rockstars who will be at the event:

Keynote Sessions

Matt Steiner (Blog | Twitter) - Cloud Management Evangelist/Strategist, VMware
Session - Are you the Platform Engineer of the Future who will #ManageAllTheThings?

As we enter the Multi-Cloud era, the traditional roles in IT are changing. In this talk, we look at how the landscape is changing, and at the Cloud Management technology that is supporting this change. We will talk APIs, Infrastructure as Code, Platforms as Code, Everything as a Service, how you truly can #ManageAllTheThings, and become the Platform Engineer of the Future.

Lee Dilworth (Twitter) - Chief Technologist Storage & Availability, VMware
Session - To be confirmed


Community Sessions

Ricky El-Qasem (Blog | Twitter)
Session - Automation: you're the first, the last, my everything

A talk about how everything in your IT estate could and should be automated, covering how different facets of automation can help you nail down everything that can be automated, some next-gen automation with AI, and a prototype tool he has been working on to help automate cloud templates.

Gareth Lewis (Blog | Twitter)
Session - VMware NSX Data Centre for vSphere (NSX-V): Micro-Segmentation from the Field

A real-world look at the micro-segmentation of applications with the aid of VMware NSX-V and the NSX Application Rule Manager. By visualising application dependencies, endpoints and services, we can implement a zero-trust environment and prevent lateral network exploits thanks to the Application Rule Manager and NSX Distributed Firewall.

Sam McGeown (Blog | Twitter)
Session - Getting Started with Kubernetes and the NSX-T container network plugin

A hands-on demonstration configuring the NSX-T container plugin with Kubernetes. Minimal slides and maximum command line.

These events would not be possible without the sponsors, so a big shout out to them all:

Gold Sponsors

Dell Technologies
Arcserve
HTG

Silver Sponsors

ExaGrid
Exponential-e

Remember to secure a pass out from the other half because the event is only half the fun. vBeers will be held at The Town Wall straight after the event and continue into the night. Be sure to be first in line for the legendary scotch eggs before they disappear.

Wednesday, 29 May 2019

Docker Desktop for Windows running in VMware Cloud on AWS

I had an interesting request from a customer who is potentially looking to move some developer desktops from on-premises into VMC, accessed via Horizon 7. They had a requirement to run Docker on the Windows 10 desktops and asked if it was possible.

Since Docker relies on Hyper-V functionality on Windows 10 I had my doubts, but figured I would try it out. For this to work you need to enable Virtualization Based Security within the guest VM's settings:
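If you would rather script the change than use the vSphere Client, something along the following lines should work with pyVmomi. This is a sketch under assumptions: the vCenter address, credentials and VM name are placeholders, the VM needs to be powered off, and it relies on the nestedHVEnabled setting plus the vbsEnabled flag introduced with vSphere 6.7 / hardware version 14.

from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

# Placeholder connection details for the VMC vCenter
si = SmartConnectNoSSL(host="vcenter.sddc.example.com",
                       user="cloudadmin@vmc.local", pwd="********")
content = si.RetrieveContent()

# Find the desktop VM by its (hypothetical) name
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "Win10-Dev-01")
view.DestroyView()

# Expose hardware-assisted virtualisation to the guest and enable VBS
# (vbsEnabled requires vSphere 6.7+ / hardware version 14+, EFI firmware and secure boot)
spec = vim.vm.ConfigSpec()
spec.nestedHVEnabled = True
spec.flags = vim.vm.FlagInfo(vbsEnabled=True)

# The VM should be powered off; wait on the returned task in real code
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)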



Once you have enabled this, Docker Desktop for Windows should start successfully and you should be able to run Docker images:

Monday, 13 May 2019

Scaling up your single node VMC SDDC

For those of you who don't know, VMware Cloud on AWS offers you the ability to deploy a one node SDDC for testing purposes. These are ideal for POCs or pilots and can very easily be scaled up to a production grade SDDC with the click of a button. So, what exactly is the Single Host offering? Our VMware Cloud on AWS FAQ tells us the following:

What is the Single Host SDDC offering?
With the new time-bound Single Host SDDC starter configuration, you can now purchase a single host VMware Cloud on AWS environment with the ability to seamlessly scale the number of hosts up within that time period, while retaining your data. The service life of the Single Host SDDC starter configuration is limited to 30-day intervals. This single host offering applies to customers who want a lower-cost entry point for proving the value of VMware Cloud on AWS in their environments.

When helping customers with POCs/pilots who want to validate the solution and use cases before purchasing, they often want to move the SDDC from the POC/pilot stage into a fully fledged production-grade SDDC. A lot of work goes into setting up the pilot, which might include:
  • Connectivity to on-premises either via VPNs or Direct Connect.
  • Setting up and configuring various add-on services such as Hybrid Cloud Extension (HCX) and Disaster Recovery as a Service.
  • Various infrastructure workloads might have already been deployed such as Authentication Services, DNS, NTP, Backups, Native AWS integration etc.
It's at this point that I feel I should mention that you should absolutely avoid running production workloads on a single node SDDC due to the lack of redundancy in both the compute and storage layers. If the host fails you could potentially lose data since it's a single host and VSAN doesn't have the ability to ensure your data is stored on multiple hosts.

One of the main reasons for scaling up a POC/pilot rather than starting again is that when you destroy the SDDC the various public IPs are handed back to AWS, which means any configured VPNs (policy or route based) would need modifying. If the customer has strict change control processes, or the firewalls are managed by a third party, there might be additional delays and costs associated with those changes.

For this article I used a single node SDDC running version 1.6 Patch 01. The scale up process may change in future SDDC versions.


When you have a single node SDDC that you want to scale up to a production-grade three node SDDC, there are a few things you need to take into consideration:

AWS Account
You need to ensure that you have linked your SDDC to your AWS account if you didn't already do this when you deployed your SDDC. Single node SDDCs have a grace period of 14 days before you need to connect them to your AWS account, but if you want to scale up you need to ensure it's linked before you initiate the process. To check whether your SDDC is linked to an AWS account, go to your SDDC, select Networking & Security and then select Connected VPC:


If your SDDC isn't connected then go through the process to complete this.
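You can also check the link programmatically against the VMC API. The sketch below is illustrative rather than definitive: it assumes you have a CSP API token, the org ID is a placeholder, and it uses the connected-accounts endpoint which, as far as I'm aware, returns the AWS accounts linked to the organisation.

import requests

REFRESH_TOKEN = "<csp-api-token>"       # placeholder CSP API token
ORG_ID = "<org-id>"                     # placeholder organisation ID

# Exchange the CSP API token for an access token
auth = requests.post(
    "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize",
    params={"refresh_token": REFRESH_TOKEN})
auth.raise_for_status()
headers = {"csp-auth-token": auth.json()["access_token"]}

# List the AWS accounts linked to the organisation
accounts = requests.get(
    f"https://vmc.vmware.com/vmc/api/orgs/{ORG_ID}/account-link/connected-accounts",
    headers=headers)
for acct in accounts.json():
    print(acct.get("account_number"), acct.get("state"))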

VSAN Storage Policies
When you deploy a single node SDDC the default VSAN VM storage policy is set to No Data Redundancy; since there is only a single node, we are unable to store data on multiple hosts:


We can see that all our workload and management VMs are using the default policy and are currently compliant with it:


Subscriptions
Either before or after scaling up your SDDC, you should create a subscription to get the best discount. Subscriptions allow you to save money by committing to a certain amount of capacity in a specific region for a defined period, either 1 or 3 years. A subscription is not required to use VMware Cloud on AWS; any usage of the service not covered by a subscription is charged at the on-demand rate:


Page 10 of the VMware Cloud on AWS Getting Started guide shows the process of creating a subscription, and you can find more information about our pricing on the public-facing site here.

Scaling Up
In order to scale up your one node SDDC simply click on the Scale Up button:


A confirmation screen is displayed showing what your current environment looks like and what the new environment will look like once completed. If you are happy to proceed then click on the Scale Up Now button:


The scale up process will start and typically takes about 20 minutes (~10 minutes per host).
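If you would rather drive the scale up via the API, the VMC service exposes an add-hosts call which, as far as I'm aware, is what the Scale Up button uses under the covers. A minimal sketch, assuming an access token obtained via the CSP token exchange shown earlier and placeholder org/SDDC IDs (a single node SDDC goes straight to three nodes, so two hosts are added in one call):

import requests

ORG_ID = "<org-id>"                 # placeholder
SDDC_ID = "<sddc-id>"               # placeholder
ACCESS_TOKEN = "<access-token>"     # obtained via the CSP token exchange shown earlier
headers = {"csp-auth-token": ACCESS_TOKEN}

# Add two hosts, taking the single node SDDC to a three node cluster
resp = requests.post(
    f"https://vmc.vmware.com/vmc/api/orgs/{ORG_ID}/sddcs/{SDDC_ID}/esxs",
    headers=headers,
    json={"num_hosts": 2})
resp.raise_for_status()

# The call returns a task that can be polled until the scale up completes
task = resp.json()
print(task.get("id"), task.get("status"))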



You can continue to use the environment and you will notice that within vCenter new hosts are automatically added in maintenance mode and then taken out of maintenance mode:


Eventually the two additional hosts will be added and available to use. The scale up process is complete and you will have a fully supported three node SDDC:



As part of the scale up process we move the management workloads from the VSAN Default Storage Policy to the Management Storage Policy - Regular, which supports FTT=1 (RAID 1):


Within about 20 minutes VSAN will bring the VMs into compliance and ensure data is stored on two different hosts:


We also modify the VSAN Default Storage Policy to ensure we use FTT=1 (RAID 1):


This will bring all workloads that currently use this policy into compliance within about 20 minutes, depending on the number of workloads you have running within the environment.


Once this process has completed you are fully supported and running a production-grade three node cluster.

One thing I have noticed is that you will see a warning about management network redundancy on the original host. This alert was present before the scale up, but we currently don't have the ability to suppress it, so you will have to initiate a support request via chat to have it suppressed. I will log this internally so the warning is suppressed as part of the scale up process:

Monday, 3 December 2018

Time for a new role...

As I start writing this I've realised that I have been at VMware for over 5 years, having started in October 2013 with absolutely no pre-sales experience as a core Systems Engineer supporting partners based out of the North of England. Working for VMware has always been a dream since being introduced to the technology back in the Virtual Infrastructure 3.x days, and having the opportunity to join this company has certainly been life changing. My role initially involved supporting partners with core vSphere technology, introducing them to what were then new technologies such as VSAN and NSX, and getting existing technologies adopted by customers. This gave me great exposure to working with both partners and customers and to getting to grips with what the Systems Engineer role actually involves.

Just over a year later I had the opportunity to join some internal training on NSX and knew from the outset that this was game-changing technology. Prior to joining VMware I was very much exposed to networking and security, and I understood the value proposition and how this technology could change the way we consume networking and security services within the datacenter. A few months later I had the opportunity to join the Networking and Security Business Unit (NSBU) as a specialist NSX Systems Engineer. This was in the very early days, and I remember attending the first internal technical enablement session when there were only around 30 SEs globally and we all fit in a small training room in Palo Alto. I've now been in post for almost four years and have seen the NSBU grow from a single-product business unit with NSX to a true multi-product BU with our Virtual Cloud Network proposition, which is resonating extremely well with customers across all verticals. I've covered a mixture of public sector and commercial customers and helped train, design and deploy various solutions for customers to solve a variety of challenges.

I've been looking for a change over the last few months and have investigated both internal and external opportunities for my next move. I'm glad to say that as of today I'm now a Lead VMware Cloud on AWS Solutions Engineer within the Cloud Platform Business Unit. My role going forward will be to help customers understand the value proposition of VMware Cloud on AWS and the possibilities of extending their on-premises environments into VMware Cloud on AWS. With the recent announcements of AWS Outposts and VMware Cloud on AWS Outposts, I'm truly excited to join the team and see the level of innovation and the relationship we have with AWS continue to grow and benefit customers. Expect more content in the near future.


Saturday, 25 August 2018

North East VMUG - Thursday 20th September 2018

The next North East VMUG is locked in and final arrangements are being made.  The event will take place on Thursday 20th September 2018 at the following address:

Royal Station Hotel
Neville Street
Newcastle upon Tyne
NE1 5DH

You can register for the event here.

The agenda is currently as follows:

08:40 - Registration & Networking
09:00 - NEVMUG Introduction
09:10 - Cormac Hogan (Blog | Twitter) - VMware Keynote

What’s happening in the world of VMware Storage

A closer look at some of the more recent announcements around VMware storage related products and features. There will be lots to talk about as this will be so soon after the US VMworld 2018 event. We will look at new enhancements across VMware storage, VVols, IO Filters, Core Storage and even projects that are happening around persistent storage in the container space. There should be something for everyone in this space.

10:00 - Networking
10:15 - Rubrik

Details to follow

11:00 - Networking
11:15 - Community Session – Bryan O’Connor (Blog | Twitter)

What's new in vSphere 6.7

  • Management Enhancements
  • ESXi Enhancements
  • Virtual Center Enhancements
  • VM Enhancements
  • Storage Enhancements
  • Security Enhancements
  • Network Enhancements
  • Availability Enhancements

12:00 - Lunch
12:30 - Adam Bohle - VMware on AWS (Twitter)

VMware Cloud on AWS - What's New

VMware Cloud on AWS is a fast-moving part of the VMware portfolio. This session will consist of a short introduction to the service, as well as an update on all the new features and AWS regions that have become available this year.

13:15 - Networking
13:25 - NAKIVO - Nick Luchkov, Senior Technical Pre-Sales Manager

Protecting VMware/Hyper-V environments with NAKIVO Backup & Replication

NAKIVO develops a fast, reliable, and affordable backup and replication solution for virtual and cloud environments. Over 10,000 companies are using NAKIVO Backup & Replication to protect and recover their data more efficiently and cost-effectively. Join this session to learn:

  • How to ensure business continuity and reduce downtime of your critical virtualized data.
  • How to speed up backup and replication data transfer, reduce backup size and shrink the backup window.
  • How to turn your NAS into a backup appliance and use deduplication hardware appliances to get super-fast backup speeds.

14:10 - Networking
14:20 - IGEL - Tom Illingworth

Thin client?  It’s all about the software

Hear IGEL discuss IGEL's revolutionary endpoint management solutions: simple, smart and secure. We believe it should be as easy to remotely manage 10,000 devices as 10, and to add the functionality that's most important to the enterprise, making the life of the IT department easier.

15:05 - Networking
15:15 - Community Session – Dale Handley (Twitter)

A detailed session on the new Custom Forms feature in vRealize Automation 7.4.

16:00 - Networking
16:10 - Darren Hirons (Twitter) & Matt Evans from VMware

 ‘To Re, or not to Re (purpose)’

The desktop market offers many desktop re-purposing solutions based on Windows, Linux and Chrome. In this session we will take a deep dive into those technologies, share our test results and present a comparison of the different vendor offerings to help you make an informed choice. Examples of our findings will cover costs, system requirements, performance, device management and limitations.

16:55 - NEVMUG Close – Q&A and prize giveaway
17:00 - vBeers – Cinema room, The Town Wall

Big thanks to all of our sponsors, without you these events would not be possible.





Monday, 12 March 2018

Getting started with VMware AppDefense - Part 3

Getting started with VMware AppDefense - Part 1
Getting started with VMware AppDefense - Part 2
Getting started with VMware AppDefense - Part 3

Now that we have successfully deployed the host and guest modules and verified that the status of both the hosts and guest VMs is active, we can start configuring an application scope and protecting an application.

Log into the AppDefense SaaS portal and you should initially be greeted by the dashboard page.  Instantly you can see the number of VMs that are unassigned, in discovery or protected:



In order to protect a VM with AppDefense, we need to create an application scope and then add a VM to the scope.  Imagine an application scope as a group of data centre assets that make up an application or regulatory scope.  To add a scope click on the plus icon next to Scopes and give it a suitable name and click Create:


We now need to create a service.  A service is made up of one or more VMs that perform a function within an application.  An example could be a three-tier application with three services (web, app and DB).  All VMs within a service are expected to be homogeneous and have the exact same allowed behaviour and rules.  Click on the Add Service button within the scope:


Enter a Service name, an optional Service Type (from a predefined list) and a Service Description, then click Next:


Select the VMs that you want to add to the service.  It's simpler to sort via the State field to show all VMs that have the guest module installed and enabled.  Select the VM or VMs and click Next:


You now have the option to manually enter allowed behaviours by entering information about the process and any inbound/outbound connections required.  You can just leave this blank and click Finish, as AppDefense will learn the behaviour:


Once you click Finish, the service is added to the scope and AppDefense automatically starts to learn the behaviour of the application.  You can add additional services if required, based on the application.  You need to leave AppDefense in learning mode for long enough to capture all expected behaviour.  This will vary depending on the application role, but a full month cycle should be enough.


Once you have left AppDefense for a suitable period of time you should see the behaviour that has been learnt:



You can change the view by selecting the column icon in the top right-hand corner and expanding it.  You may also notice that we pull in process reputation scores (threat and trust) via the Carbon Black integration:


Once you are confident that AppDefense has had enough time to sufficiently learn the application (don't worry, you can put it back into learning mode or manually add processes if something has been missed), it's time to start enforcing known good.  Click on the Verify and Protect button at the top of the application scope:


Verify the details and click Verify and Protect:


Once protected, you will notice a new tab in the scope called Rules:


By default, all options are enabled and set to Alert only.  You can enable or disable specific rules depending on what you are particularly interested in protecting, and also modify the action.  To modify the action, click on the three dots icon in the top right-hand corner and click Edit Service:


Select the Rules tab; you then have the option to enable or disable specific rules and change the remediation action from the following options (Quarantine requires integration with NSX):


You also have the option to set the enforcement to either Automatic or Manual:


With the default options set, alerts for any violations will be visible within the AppDefense portal.  This allows you to continue monitoring the application before setting the remediation action to block or quarantine.  The following alert shows what happens when a violation occurs.  In this example I initiated an SSH session via PuTTY to 192.168.1.11:


The alert is visible within AppDefense and you can drill down and view the actual behaviour:


Since we have set the remediation action to alert, we can review the alert and then decide what we want to do next.  In this example I select Power Off:


Confirm Power Off:


The command is then pushed to vCenter and the VM is powered off:


Hopefully the last three Getting Started with AppDefense articles have left you wanting more. If so, I plan on blogging more in the future, so stay tuned.