So, it begins – the preparation for expanding my VMware knowledge; specifically NSX-T and AVI, but I am sure I will learn a bunch along the way with other VMware products.
This blog will contain a few parts, breaking up the steps I took/am taking to create a home lab suitable for expanding my VMware knowledge.
Without further ado, below is an outline of the contents for the upcoming blog series, with a breakdown of each element. It is worth noting at this point that, to accompany the blogs, there will be various videos created so I can provide a walk-through of what I’m talking about.
- Article 1 – VMware NSX-T home lab; planning for the mission ahead
- Article 2 – Setting the foundations – Initialisation, vSAN and Management
- Article 3 – Inception – Nesting NSX-T within ESXi within ESXi
- Article 4 – Initialising VRNI, VRLI and VROps
- Article 5 – Lessons learnt & wrap up
** The articles outlined above may change in accordance with what I see fit and what I discover on my journey.
This section outlines the plan, including a topology and IP/VLAN matrices.
- Dell Precision T910
  - 2x Intel Xeon E5-2683 v3
- Memory (RAM)
  - 4x 32GB DDR4 (will likely up this to 8x 32GB)
- Hard drive
  - 2x 1TB Crucial SSDs
Whilst this topology isn’t the prettiest, it serves a purpose – it depicts the nested virtualisation I have planned. We can see that a single physical ESXi host will run many virtualised ESXi hosts within it. To accompany these virtualised ESXi hosts, and to make life a little easier, I’ll install vCenter to improve manageability; no doubt I will learn lots along the way here.
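As a small taster of the nesting itself, the key requirement is exposing hardware-assisted virtualisation to the guest. In the vSphere UI this is the “Expose hardware assisted virtualization to the guest OS” checkbox; in the VM’s .vmx file it corresponds to something like the below (a minimal sketch – the guest OS identifier shown assumes an ESXi 7.x guest):

```
# .vmx snippet for a nested ESXi VM (illustrative)
guestOS = "vmkernel7"    # tell vSphere the guest is ESXi 7.x
vhv.enable = "TRUE"      # expose Intel VT-x/EPT to the guest
```

Without that flag the nested ESXi installs fine, but it can’t run 64-bit VMs of its own.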
Of course, these virtual appliances require some storage; this will be handled by the deployment of vSAN. I could have got away without using vSAN, but as it’s part of the standard VMware Cloud Foundation (VCF) build, which I come across a lot at work, I figured I’d best get into the nitty-gritty.
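For flavour, a single-node vSAN cluster can be bootstrapped straight from the ESXi shell with esxcli – handy in a lab where vCenter doesn’t exist yet. A rough sketch (the disk device names are illustrative placeholders, not my actual devices):

```
# Create a single-node vSAN cluster on this host (bootstrap scenario)
esxcli vsan cluster new

# All-flash setup: tag one SSD as the capacity tier
esxcli vsan storage tag add -d naa.xxxx_capacity -t capacityFlash

# Claim the disks: -s = cache-tier SSD, -d = capacity-tier device
esxcli vsan storage add -s naa.xxxx_cache -d naa.xxxx_capacity
```

Once vCenter is up, the remaining hosts can simply join the cluster and vSAN is managed from there.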
The edge nodes within NSX-T will be configured to peer via BGP with a pair of virtual Cisco CSR1000v routers; this will enable me to simulate a top-of-rack (ToR) hand-off connecting to a single VDS.
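As a rough sketch of one side of that peering, the CSR1000v configuration would look something like the below – the AS numbers are illustrative placeholders, and the neighbour address assumes an edge uplink sitting in the Edge Uplink 1 subnet:

```
! Illustrative CSR1000v BGP stanza – ASNs and neighbour IP are placeholders
router bgp 65000
 bgp log-neighbor-changes
 ! NSX-T T0 uplink as the neighbour, in the NSX AS
 neighbor 192.168.151.1 remote-as 65001
 address-family ipv4
  neighbor 192.168.151.1 activate
 exit-address-family
```

The matching configuration on the NSX-T side lives on the T0 gateway, which I’ll cover when we get to the NSX-T build.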
The networking of all workloads within NSX-T will be handled by NSX-T components, such as T0 and T1 gateways.
The master plan here will enable me to access NSX-T workloads from my home network. This is pretty similar to most production networks. I’d like to avoid using the jump server if I can – how will that work? I have a few ideas, but nothing concrete; keep reading these articles to find out how it’s done (plug plug plug 👀).
As mentioned throughout this article, I will be deploying the full suite of tooling VMware offer – VRNI, VRLI and VROps. Much like vSAN, these are heavily used in most VCF environments and provide some great insight into how your network is functioning.
These services will be connected to the management network, will be accessible via the jump host, and will have IP reachability to the NSX-T managers.
There’s a bunch of VMware products listed here – licensing must be expensive, or time-consuming with 30-day evaluations, right? Luckily, VMware have a programme called VMUG Advantage, whereby if you sign up for the membership, you get 1/2/3-year evaluation licences – well worth it!
| VLAN | Subnet | Purpose |
| --- | --- | --- |
| 151 | 192.168.151.0/24 | Edge Uplink 1 |
| 152 | 192.168.152.0/24 | Edge Uplink 2 |
** I’m aware that these /24 allocations are pretty large, and you’d typically be a bit more frugal in a production environment, but for ease I have kept the third octet matching the VLAN number.