PowerCLI | Who wants this engine?

Hi Guys,

In my last post, I shared a few PowerCLI commands to take a snapshot, but today I changed the look and feel. I tried to create an engine/application kind of thing where you just need to press 1, 2, or 3 and all your snapshot-related work gets done.

No need to log in to the GUI, and the tasks get done much faster.

It has helped me a lot, and I hope it will be helpful for you as well. Let me know if you want this engine.
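For anyone curious, here is a minimal sketch of what such an engine can look like. This is an illustrative layout, not the exact script; it assumes VMware PowerCLI is installed and that you can reach your vCenter.

```powershell
# Hypothetical menu layout; assumes PowerCLI is installed and a reachable vCenter
Connect-VIServer -Server (Read-Host "Enter vCenter IP/FQDN")

do {
    Write-Host "1. Take a snapshot"
    Write-Host "2. Delete all snapshots of a VM"
    Write-Host "3. Revert to the latest snapshot"
    Write-Host "Q. Quit"
    $choice = Read-Host "Select an option"
    switch ($choice) {
        '1' {
            $vm = Read-Host "VM name"
            New-Snapshot -VM $vm -Name (Read-Host "Snapshot name")
        }
        '2' {
            $vm = Read-Host "VM name"
            Get-VM -Name $vm | Get-Snapshot | Remove-Snapshot -Confirm:$false
        }
        '3' {
            $vm = Read-Host "VM name"
            # Pick the most recently created snapshot and revert to it
            Set-VM -VM $vm -Snapshot (Get-Snapshot -VM $vm |
                Sort-Object Created | Select-Object -Last 1) -Confirm:$false
        }
    }
} until ($choice -eq 'Q')
```

The real engine can of course add input validation and a snapshot listing before delete/revert.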

Thank you,


vCenter | Snapshot Operations

Hi Folks,

Today someone asked about snapshots: how do I take a snapshot from PowerShell? How do I take a quiesced snapshot? And so on. So I thought I would summarize as many snapshot operations as I can on a single page and share it with everyone. Here you go..

#Connect to your vCenter first (replace VC-IP with your vCenter IP/FQDN)
Connect-VIServer -Server VC-IP

#To take a snapshot of a single VM without quiescing or memory state
$VM = Read-Host "Enter the VM Name "
New-Snapshot -VM $VM -Name vcnotes_snap -Description "This is test snap"
#with quiesced on
New-Snapshot -VM $VM -Name vcnotes_snap -Description "This is test snap" -Quiesce
#With memory state
New-Snapshot -VM $VM -Name vcnotes_snap -Description "This is test snap" -Memory
#with Memory and quiesce both
New-Snapshot -VM $VM -Name vcnotes_snap -Description "This is test snap" -Memory -Quiesce

#Take snapshots of multiple VMs, quiesce off, no memory state
foreach ($AVM in (Get-content -path C:\Temp\vmlist.txt)){New-Snapshot -VM $AVM -Name vsnap -Description "This is test"}
#with quiesced on
foreach ($AVM in (Get-content -path C:\Temp\vmlist.txt)){New-Snapshot -VM $AVM -Name vsnap -Description "This is test" -Quiesce}
#with memory state
foreach ($AVM in (Get-content -path C:\Temp\vmlist.txt)){New-Snapshot -VM $AVM -Name vsnap -Description "This is test" -memory}
#with memory and quiesce both
foreach ($AVM in (Get-content -path C:\Temp\vmlist.txt)){New-Snapshot -VM $AVM -Name vsnap -Description "This is test" -Quiesce -Memory}

#Snapshot Consolidation
#For all VMs which need consolidation
Get-VM | Where-Object {$_.Extensiondata.Runtime.ConsolidationNeeded} | foreach {$_.ExtensionData.ConsolidateVMDisks_Task()}
#For a single VM
$vmname = Read-Host "Enter VM Name"
$vm = Get-VM -Name $vmname
$vm.ExtensionData.ConsolidateVMDisks_Task()

#snapshot deletion
#To delete all snapshots older than a specific number of days on a single VM. To delete all snapshots taken today, replace 10 with 0 in the command below
$vmname = Read-Host "Enter VM Name"
Get-VM -Name $vmname | Get-Snapshot | Where {$_.Created -lt (Get-Date).AddDays(-10)} | Remove-Snapshot

#delete snapshot on multiple VMs older than specific days
$VM = Get-Content -Path C:\temp\vmlist.txt
Get-VM -Name $VM | Get-Snapshot | Where {$_.created -lt (Get-Date).AddDays(-10)} | Remove-Snapshot

#Delete all snapshots on a VM
$VM = Read-Host "Enter VM Name"
Get-VM -Name $VM | Get-Snapshot | Remove-Snapshot
#Delete a specific snapshot on a VM
$VM = Read-Host "Enter VM Name"
Get-VM -Name $VM | Get-Snapshot | Select-Object VM,Name,Created,SizeGB | Format-Table
Write-Host "Tell me the snapshot name from the above list"
$snapname = Read-Host "Enter name here"
Get-VM -Name $VM | Get-Snapshot -Name $snapname | Remove-Snapshot

#Revert to a specific snapshot
$VM = Read-Host "Enter VM Name"
Get-Snapshot -VM $VM | Select-Object Name
Write-Host "Tell me the name of the snapshot from the above list which you want to revert to"
$snapname = Read-Host "Enter name here"
Set-VM -VM $VM -Snapshot (Get-Snapshot -VM $VM -Name $snapname)
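One more operation that often comes in handy: a quick report of every snapshot in the connected vCenter. This is a sketch; the CSV path is just an example.

```powershell
#Report all snapshots in the connected vCenter, oldest first
Get-VM | Get-Snapshot |
    Select-Object VM, Name, Created, SizeGB |
    Sort-Object Created |
    Export-Csv -Path C:\Temp\snapshot-report.csv -NoTypeInformation
```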

Let me know if anyone wants me to add anything to the list.

Thank you,

Free Training and Certifications

Hi All,
Free training and certifications are currently available for April and May.
Make use of them!

1. Microsoft - Azure certification

2. AWS - All AWS technology

3. IBM - All IBM technology

4. Oracle University - Cloud Infrastructure and Autonomous Database

5. Fortinet - NSE1 and NSE2

6. Palo Alto Networks - Network security

7. Cisco - Cyber Security

8. QualysGuard - Vulnerability management

9. Nessus - Vulnerability management

10. SANS - Cyber security

11. Homeland security - ICS Security

12. Coursera - Cloud courses

13. Pluralsight - All Training

14. Sololearn - All Training

Keep learning! Keep growing!

Thank you,

NSX-T | Micro-Segmentation

Hello Guys,

Hope you are doing well wherever you are and I pray for everyone's life. Stay Home & Stay Safe!

So, as promised, I am writing about micro-segmentation, that is, the DFW (Distributed Firewall) in NSX-T. This post is for those who already know how to configure it in NSX-V. In case you want to understand the DFW in detail, click here.

First of all, let's understand the Connectivity Strategy in NSX-T.

1. Blacklist (with or without logging) - This is the default option, which creates an allow-all rule in the DFW. It also means that micro-segmentation is off.
2. Whitelist (with or without logging) - It creates a deny-all rule in the DFW. To allow any traffic, we have to create allow rules. It blocks DHCP traffic as well if that is not permitted via an allow rule.
3. None - This option disables both blacklisting and whitelisting of firewall rules. It is useful when you have already applied rules from an older version of NSX-T.

By comparison, in NSX-V these terms were not in the picture, but the default rule was allow-all.

In the video below, we are going to explore the DFW rules in NSX-T.

Now that we know about the connectivity strategy, let's dig in and learn about the DFW rules.

Please go through the video below.

Feel free to ask any questions here.

Thank you,

Intro to K8s

I thought I would put down short definitions of Kubernetes terms for my own reference. Detailed information can of course be found at https://kubernetes.io/docs/concepts/.

Hope you will find it useful too.

Before I explain Kubernetes, I think it is useful to first understand what a container is. I know the web is already full of such definitions, so I will try to explain it in the shortest and easiest way.

So let's start with containers. The image below is self-explanatory. Some people refer to a container as a VM, but the difference is pretty clear in the image.

Hopefully it is now clear why it is so useful to work with containers: they are faster, they remove the dependency on a guest OS, and they don't care whether the target device is a private datacenter, a public cloud, or a developer's personal laptop. With containers, we can simply deploy our application without bundling a full OS with it.

So now that we know a bit about containers, let's think about what Kubernetes is.

To explain Kubernetes, let's take the classic example of a three-tier application: web, app, and DB, with each tier hosted in a different container. Web is in container A, app is in container B, and the DB is in container C (for example).

Now, deploying these lightweight applications involves multiple steps, as do day-2 operations like upgrading or scaling. So, to do all these tasks quickly, and to avoid manual work and human error, we need some container orchestration technique, don't we?

So, Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. Kubernetes is basically a cluster-based solution which involves a Kubernetes master and Kubernetes nodes, aka workers or minions.

Let's explore Kubernetes a bit more and understand its components.

Kubernetes Master Components

1. API Server: The API server is the target for all external API clients like the K8s CLI client. Internal components like the controller manager, dashboard, and scheduler also talk to the API server.

2. K8s Scheduler: The scheduler watches for newly created Pods that have no Node assigned. For every Pod that the scheduler discovers, it becomes responsible for finding the best Node for that Pod to run on. If you want to know more about it, then click here to explore further.
3. Controller Manager: The Controller Manager is a daemon that embeds the core control loops shipped with Kubernetes. A controller is a control loop that watches the shared state of the cluster through the API server and makes changes attempting to move the current state towards the desired state.
4. etcd: A consistent and highly available key-value store used as the Kubernetes backing store for all cluster data.
Kubernetes Node Components
1. Kubelet: The primary node agent that runs on each node. It works in terms of PodSpecs; a PodSpec is a YAML or JSON object that describes a Pod.
2. Container runtime: The engine that is responsible for running containers. Kubernetes supports several container runtimes, with Docker being the most widely adopted.
3. Kube-proxy: Enables the Kubernetes Service abstraction by maintaining network rules on the host and performing connection forwarding. It implements east-west load balancing on the nodes using iptables.
Kubernetes Namespace
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. Click here to know more about Kubernetes namespace.
Kubernetes POD with example
A Kubernetes Pod is a group of one or more containers. Containers within a Pod share an IP address and port space and can find each other via localhost. They can also communicate with each other using standard inter-process communication such as System V semaphores or POSIX shared memory, and they share the same data volumes. Pods model a pattern of multiple cooperating processes which form a cohesive unit of service, and they serve as the unit of deployment, horizontal scaling, and replication: co-location (co-scheduling), shared fate (e.g., termination), and coordinated replication.
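Since the heading promises an example, here is a minimal Pod manifest; the names and image are purely illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.19
    ports:
    - containerPort: 80
```

Apply it with kubectl apply -f web-pod.yaml and verify it with kubectl get pods.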
Kubernetes Controllers
A Replication Controller enforces the 'desired' state of a collection of Pods. For example, it makes sure that 4 Pods are always running in the cluster: if there are too many Pods, it kills some; if there are too few, it starts more.
A ReplicaSet is the next-generation Replication Controller. The only difference between a ReplicaSet and a Replication Controller right now is the selector support: a ReplicaSet supports the new set-based selector requirements, whereas a Replication Controller only supports equality-based selector requirements.

A Deployment Controller provides declarative updates for Pods and Replica Sets.  You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new Replica Sets, or to remove existing Deployments and adopt all their resources with new Deployments.
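As a quick sketch, a Deployment that keeps 4 replicas of a web Pod running might look like this (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.19
```

The Deployment controller creates a ReplicaSet behind the scenes and keeps the actual replica count at 4.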

Kubernetes Service- A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes called a micro-service). The set of Pods targeted by a Service is (usually) determined by a Label Selector.

The kube-proxy watches the Kubernetes master for the addition and removal of Service and Endpoints objects. For each Service, it installs iptables rules which capture traffic to the Service’s clusterIP (which is virtual) and Port and redirects that traffic to one of the Service’s backend sets. For each Endpoints object, it installs iptables rules which select a backend Pod. By default, the choice of backend is random.
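To make this concrete, here is a minimal ClusterIP Service that selects Pods by the label app: web (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

The Service's clusterIP is allocated automatically, and the kube-proxy installs the forwarding rules for it as described above.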

With NSX-T and the NSX Container Plugin (NCP), we leverage the NSX Kube-Proxy, which is a daemon running on the Kubernetes Nodes.  It replaces the native distributed east-west load balancer in Kubernetes (the Kube-Proxy using IPTables) with Open vSwitch (OVS) load-balancing features.
Please note that it is extremely important to choose correct versions of Ubuntu OS, Docker, Kubernetes, Open vSwitch, and NSX-T.

Reference compatibility checklist for this lab build-out:  https://tinyurl.com/y5vastd5

Kubernetes Ingress (Example)-

The Kubernetes Ingress is an API object that manages external access to the services in a cluster, typically HTTP.  Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere.  An Ingress is a collection of rules that allow inbound connections to reach the cluster services. 

It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting, and more. Users request ingress by POSTing the Ingress resource to the API server. An ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional front-ends to help handle the traffic in an HA manner.

The most common open-source projects which allow us to do this are NGINX and HAProxy, as shown in the image above.
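A minimal Ingress that routes a host name to a backend Service could look like this (the host and Service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```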

In this lab, we will work with the NSX-T native layer 7 load balancer to provide this functionality, as you can see in the example image above.

Within Kubernetes there's also the External load balancer object, not to be confused with the Ingress object.  When creating a service, you have the option of automatically creating a cloud network load balancer. This provides an externally-accessible IP address that sends traffic to the correct port on your cluster nodes provided your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package.

Network Policies
A Kubernetes Network Policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.  NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.

Kubernetes Network policies are implemented by the network plugin, so you must be using a networking solution which supports NetworkPolicy - simply creating the resource without a controller to implement it will have no effect.

By default, pods are non-isolated; they accept traffic from any source.  Pods become isolated by having a Kubernetes Network Policy which selects them. Once there is a Kubernetes Network Policy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by a Kubernetes Network Policy.  Other pods in the namespace that are not selected by a Kubernetes Network Policy will continue to accept all traffic.
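As an example, a NetworkPolicy that allows only the app-tier Pods to reach the db Pods on port 3306 might look like this (the labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-app
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: app
    ports:
    - protocol: TCP
      port: 3306
```

Once this is applied, the db Pods reject any ingress that does not match the rule, while unselected Pods in the namespace keep accepting all traffic.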

Thank you,

NSX-T | Basic Routing Setup

Hi Folks,

As most techies dealing with cloud technologies, and especially with VMware cloud products, know, NSX-V will sunset soon.

To replace it, VMware has already released NSX-T, which provides wider support for cloud technologies, easier implementation, and an independent design approach.

NSX-V was built only for vSphere environments, whereas NSX-T can work with any cloud vendor, for example MS Azure, AWS, and even OpenStack.

Nowadays, integrating K8s with NSX-T is the trend. I will try to create a post on that as well, but in today's post let's see how you can set up basic east-west and north-south routing in NSX-T. It will clear up many doubts that you might have, like: what are T0 and T1 routers? What are SR and DR? And many more.

So, while explaining and demonstrating it, I will be using the topology below.

So, let's see how I did it.

The steps are the same as in NSX-V; only the look and feel is different. Just remember that:

A logical switch in NSX-V is similar to a segment in NSX-T
The DLR in NSX-V plays a similar role to Tier-1 in NSX-T (it runs as a DR, that is, a distributed router)
The ESG in NSX-V plays a similar role to Tier-0 in NSX-T (it has an SR, that is, a service router, which connects to the physical switch\router)

But yes, the architecture and configuration are a bit different: there was no SR or DR in NSX-V, but we have them in NSX-T. However, this post is to demonstrate the steps to set up basic routing in NSX-T, from my VM to the Tier-0 gateway.

So, what I have already done in my lab:
  • All VMs are directly connected to the Tier-0 gateway through three segments: LS-db, LS-web and LS-app
  • Right now there is no Tier-1 gateway in my lab.

What I will do is:
  • First create one Tier-1 gateway and connect it to the Tier-0 gateway to maintain the topology shown above
  • Migrate all segments from Tier-0 to Tier-1 and ensure the connectivity

Create Tier-1 Gateway and connect it with Tier-0 Gateway

Migrate all segments from Tier-0 to Tier-1 Gateway

Ensure the connectivity now

So guys, let me know how you found it. I could have created one single big video, but that becomes a bit boring and lengthy.

Thank you,

vROPS | Cluster Health-Check Script

Hi Guys,

In vROPS, when we deploy a multi-node cluster, sometimes, due to the large collection of metrics and alerts, the vROPS database gets full and starts causing many issues: metric graphs don't populate correctly, metric collection starts skipping timelines, the vROPS UI gets slow, and so on.

To maintain the health of a vROPS cluster, you can use the script below, which gives very clear and exact information on its metric and alert collections, the current size and state of all active nodes in the cluster, the space in each directory of the master node, and many more things. I got this script from VMware while working on such an issue, so I thought I would share it with everyone.

After all, sharing is caring, isn't it? Below is the script.

    echo -e "\e[1;31mHOSTNAME:\e[0m" > $HOSTNAME-status.txt | hostname >> $HOSTNAME-status.txt;getent hosts | nslookup >> $HOSTNAME-status.txt; uname -a >> $HOSTNAME-status.txt; echo -e "\e[1;31mDNS CONFIGURATION:\e[0m" >> $HOSTNAME-status.txt | cat /etc/resolv.conf >> $HOSTNAME-status.txt; cat /etc/hosts >> $HOSTNAME-status.txt; echo -e "\e[1;31mVERSION INFO:\e[0m" >> $HOSTNAME-status.txt | cat /usr/lib/vmware-vcops/user/conf/lastbuildversion.txt >> $HOSTNAME-status.txt; echo -e "" >> $HOSTNAME-status.txt;cat /etc/SuSE-release >> $HOSTNAME-status.txt; echo -e "\e[1;31mDATE:\e[0m" >> $HOSTNAME-status.txt | date >> $HOSTNAME-status.txt; echo -e "\e[1;31mSERVICES:\e[0m" >> $HOSTNAME-status.txt | service vmware-vcops status >> $HOSTNAME-status.txt; echo -e "\e[1;31mCASA:\e[0m">> $HOSTNAME-status.txt| service vmware-casa status >> $HOSTNAME-status.txt; echo -e "\e[1;31mDISKSPACE:\e[0m" >> $HOSTNAME-status.txt | df -h >> $HOSTNAME-status.txt; echo -e "\e[1;31mHEAPDUMP:\e[0m">> $HOSTNAME-status.txt | ls -lrSh /storage/heapdump/>> $HOSTNAME-status.txt; echo -e "\e[1;31mIFCONFIG:\e[0m">> $HOSTNAME-status.txt | ifconfig >> $HOSTNAME-status.txt; echo -e "\e[1;31mCASADB.SCRIPT:\e[0m" >> $HOSTNAME-status.txt | tail -n +51 /data/db/casa/webapp/hsqldb/casa.db.script >> $HOSTNAME-status.txt; echo -e "\e[1;31mROLE STATE:\e[0m">> $HOSTNAME-status.txt | grep adminroleconnectionstring /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/data/roleState.properties >>$HOSTNAME-status.txt | grep adminroleenabled /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/data/roleState.properties >>$HOSTNAME-status.txt; echo -e "\e[1;31mGEMFIRE PROPERTIES:\e[0m">> $HOSTNAME-status.txt | grep locators /usr/lib/vmware-vcops/user/conf/gemfire.* >> $HOSTNAME-status.txt; grep bind-address /usr/lib/vmware-vcops/user/conf/gemfire.* >> $HOSTNAME-status.txt; grep shardRedundancyLevel /usr/lib/vmware-vcops/user/conf/gemfire.properties >> $HOSTNAME-status.txt;grep "serversCount" 
/usr/lib/vmware-vcops/user/conf/gemfire.properties >> $HOSTNAME-status.txt; echo -e "\e[1;31mPERSISTENCE PROPERTIES:\e[0m">> $HOSTNAME-status.txt | grep ^db* /usr/lib/vmware-vcops/user/conf/persistence/persistence.properties >> $HOSTNAME-status.txt; grep replica* /usr/lib/vmware-vcops/user/conf/persistence/persistence.properties >> $HOSTNAME-status.txt; grep "repl.db.role" /usr/lib/vmware-vcops/user/conf/persistence/persistence.properties >> $HOSTNAME-status.txt; echo -e "\e[1;31mCASSANDRA YAML:\e[0m" >> $HOSTNAME-status.txt | grep broadcast_rpc_address: /usr/lib/vmware-vcops/user/conf/cassandra/cassandra.yaml >> $HOSTNAME-status.txt | grep listen_address: /usr/lib/vmware-vcops/user/conf/cassandra/cassandra.yaml >> $HOSTNAME-status.txt; echo -e "\e[1;31mNODE STATE INFO:\e[0m">> $HOSTNAME-status.txt | $VMWARE_PYTHON_BIN $ALIVE_BASE/tools/vrops-platform-cli/vrops-platform-cli.py getShardStateMappingInfo | sed -nre '/stateMappings/,/}$/p' >> $HOSTNAME-status.txt; echo -e "\e[1;31mWRAPPER RESTARTS:\e[0m" >> $HOSTNAME-status.txt |find /usr/lib/vmware-vcops/user/log/ -name "*wrapper.log" -print -exec bash -c "grep 'Wrapper Stopped' {} | tail -5" \; | cut -d'|' -f3 >> $HOSTNAME-status.txt; echo -e "" >> $HOSTNAME-status.txt; echo -e "\e[1;4;35mPERFORMANCE RELATED INFORMATION\e[0m" >> $HOSTNAME-status.txt; echo -e "" >> $HOSTNAME-status.txt; echo -e "\e[1;31mvCPU INFO:\e[0m" >> $HOSTNAME-status.txt |grep -wc processor /proc/cpuinfo >> $HOSTNAME-status.txt; echo -e "\e[1;31mMEMORY INFO:\e[0m" >> $HOSTNAME-status.txt | awk '$3=="kB"{$2=$2/1024**2;$3="GB";} 1' /proc/meminfo | column -t | grep MemTotal >> $HOSTNAME-status.txt; echo -e "\e[1;31mTOP OUTPUT:\e[0m" >> $HOSTNAME-status.txt; /usr/bin/top -d 0.5 -n 1 -b | head -5 >> $HOSTNAME-status.txt; echo -e "\e[1;31mADAPTER TYPE OBJECT COUNTS:\e[0m" >> $HOSTNAME-status.txt; su - postgres -c "PGDATA=/storage/db/vcops/vpostgres/repl PGPORT=5433 /opt/vmware/vpostgres/current/bin/psql -d vcopsdb -c 'select count(*),adapter_kind from 
resource group by adapter_kind;'" | awk '{ SUM += $1; print} END {print "Total";print SUM }' | cut -d ':' -f 5 >> $HOSTNAME-status.txt; echo -e "\e[1;31mCASSANDRA ACTIVITIES:\e[0m" >> $HOSTNAME-status.txt | /usr/lib/vmware-vcops/cassandra/apache-cassandra-2.1.8/bin/./nodetool --ssl -h --port 9008 -u maintenanceAdmin --password-file /usr/lib/vmware-vcops/user/conf/jmxremote.password  cfstats -H globalpersistence.activity_2_tbl >> $HOSTNAME-status.txt; echo -e "\e[1;31mALERT DB COUNT:\e[0m" >> $HOSTNAME-status.txt | su - postgres -c "/opt/vmware/vpostgres/9.3/bin/psql -d vcopsdb -A -t -c 'select count(*) from alert'" >> $HOSTNAME-status.txt; echo -e "\e[1;31mALARM DB COUNT:\e[0m" >> $HOSTNAME-status.txt | su - postgres -c "/opt/vmware/vpostgres/9.3/bin/psql -d vcopsdb -A -t -c 'select count(*) from alarm'" >> $HOSTNAME-status.txt; less -r $HOSTNAME-status.txt

Now, how do you use this script?

• Log in to the vROPS CLI with the root account on the vROPS master node appliance
• Copy the script above and paste it as-is into the CLI window of the vROPS master node
• Don't worry, it doesn't have any side effects :)

It will give you exhaustive details about all the active nodes in your cluster. Hope you find it useful. Do share your feedback!

Thank you,
Team vCloudNotes

NSX | PacketWalk

Hi Guys,

Today I thought to put together this packet walk, from my VM in VXLAN all the way up to the physical firewall. I checked on the internet but couldn't find such exhaustive information.

I hope it will be helpful for many of you. Please leave a comment as feedback and let me know how you found it.

In case of any doubt, or if you see anything I should include in the snippet below, feel free to say so via a comment.

Thank you,
Team vCloudnotes

Network Migration | VLAN to VXLAN

Hi Guys,

Today I thought to write about how you can migrate your underlay workload (VLAN) to an overlay technology (VXLAN) in NSX.

So, when I say network migration, it includes migration at two levels:

A. L3 migration
B. VM vNIC migration

A. L3 migration means:

Your current setup has a VLAN, and your VM's default gateway is a physical switch\router\firewall, whatever you have configured in your environment, as shown in the image below.

picture - 1.1
We basically need to migrate this default gateway from your physical switch\router\firewall to the DLR\UDLR in your NSX environment. See the image below.

picture - 1.2

B. VM vNIC migration means:

Simply changing the mapped portgroup from VLAN to VXLAN in the VM's properties. Below is the reference image.

picture - 1.3
Step-by-Step Approach:

Step X - Deploy and configure NSX (you can treat this as a prerequisite)
Step 1 - Download the NSX appliance and deploy it in your vCenter Server
Step 2 - Integrate vCenter Server with NSX
Step 3 - Add the ESXi hosts into your NSX cluster, which will install the NSX VIBs and make the hosts ready for VXLAN and DFW
Step 4 - Create and define the transport zone and complete the other basic NSX configuration

#The steps above are preparation. That is to say, before the final network cutover (changing the VM's NIC), you can configure your NSX environment well in advance. Once it is ready, then....

Step 5 - Create a logical switch (it is like a portgroup in vCenter Server, but it is created in the NSX environment, which is why we call it a logical switch; it is also known as a logical wire). Let's say I create a logical switch named "LS-mylab-" which I will use to connect the VM. I will show you where exactly it fits in picture 1.2 above.

Step 6 - Create and configure a DLR\UDLR (UDLR in the case of a cross-vCenter setup)
Step 7 - Here, you will create a DLR instance and then connect the logical switch created above to this DLR. This will create a LIF (logical interface) on the DLR; assign an IP address with a /24 subnet mask on this interface. Keep this interface in disabled mode for now.
Step 8 - Create another interface on the DLR which connects it to the NSX Edge for uplink traffic, through a separate LS (let's say I name it Transit-DLR-ESG). It will have its own IP configuration to communicate with the firewall and beyond. We are covering only the NSX part in this post.

#Now your LS is connected to the DLR\UDLR, your DLR\UDLR is connected to the NSX Edge, and your NSX Edge is connected to the firewall for outside traffic.

Step 9 - Create and configure the NSX Edge
Step 10 - Create an internal interface on the NSX Edge and connect it to the DLR through the logical switch named above (Transit-DLR-ESG). This connects the two devices.
Step 11 - Create an uplink interface on the NSX Edge on a plain vCenter portgroup with a VLAN (for example, VLAN 90)

Now make sure the traffic flows like this: logical switch (LS-mylab-) --> DLR --> NSX Edge --> firewall.

Also, you need to configure IP addressing between all these devices. Don't worry,

Step 12 - Because the DLR, ESG, and firewall are all L3/L7 devices, in order to connect them you have to configure routing as well. You have two options: either configure static routing or enable a dynamic routing protocol. I will go with dynamic routing and enable OSPF on both the DLR and the Edge.

Once done, make sure your DLR can ping your Edge, your Edge can ping your firewall, and your DLR can ping your firewall's IP address.

Once the above configuration is done and tested, let's move toward the network migration part.

Step A - L3 Network Migration:
Simply disable the VLAN interface on the L3 switch.
Then enable the LIF on the DLR, which was configured with the same subnet and IP address. We kept it disabled in step 7.

Step B - VM Network Migration:

Simply change the portgroup of the VM from the VLAN portgroup to the VXLAN one, which will be visible in the list when you change the portgroup.

Once all is done, the network diagram will look like the one below.

Hope you now have some idea of it, and you might have lots of questions. Please feel free to ask; the more you ask, the clearer it will get!

Now, how do you configure OSPF, how do you assign IP addresses to the interfaces of the Edge and DLR, and why did we only disable the VLAN interface on the physical switch? I intentionally excluded these points from this post because including them would make it very long (and then it would be boring :)). In case you want to know about these, please feel free to let me know and I will explain them in my next posts.

Thank you,