The source for random IT information

I recently set up a new vCenter 5.1 environment.  One of the first things I noticed was that logging into vCenter required a full UPN (User Principal Name) or the domain\username format.  In previous versions you could log in with just a username & password.  It was a little annoying, but I didn't really care at first as I needed to get everything in vCenter configured.

I then decided to research how to set it so I didn't have to type the UPN or domain\ format.

I found out this behavior can be changed.  Here are the steps:

1.  Log in to the vCenter Web Client (this setting can't be changed from the C# client)

2.  Click on "Administration" from the left menu.

3.  Click on "Sign-On and Discovery"

4.  Click on "Configuration"

5.  Locate your LDAP binding configuration under the "Identity Sources" tab.

6.  Click on your LDAP config, then at the top next to the red X there is a button called "Add to default domains".  Click it.

7.  You will get a warning about locking out accounts, but it is safe to proceed.  Once that is done, click the "save" icon at the top and you're good to go.

Logging into vCenter from the Web Client or from the C# client will no longer require the UPN or domain\ format!

I needed to delete a ton of test VMs and I really didn't want to right-click on each VM and delete it.  Here's the quick PowerCLI script I used to delete them, one Remove-VM line per VM:



Remove-VM -VM TEST2K8R2-05 -DeletePermanently -Confirm:$false

Remove-VM -VM TEST2K8R2-06 -DeletePermanently -Confirm:$false

Remove-VM -VM TEST2K8R2-07 -DeletePermanently -Confirm:$false

Remove-VM -VM TEST2K8R2-08 -DeletePermanently -Confirm:$false

Remove-VM -VM TEST2K8R2-09 -DeletePermanently -Confirm:$false
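Since the names only differ by a number, the same deletions can also be written as a loop.  This is just a sketch; it assumes PowerCLI is loaded and you're already connected to vCenter with Connect-VIServer:

```powershell
# Delete TEST2K8R2-05 through TEST2K8R2-09, same as the one-line-per-VM version above
5..9 | ForEach-Object {
    $name = "TEST2K8R2-{0:D2}" -f $_   # builds names like TEST2K8R2-05
    Remove-VM -VM $name -DeletePermanently -Confirm:$false
}
```

The {0:D2} format keeps the two-digit, zero-padded suffix the VM names use.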


I set up VMware SRM 4.1.1 for the first time the other day and I wanted to document the basic installation and configuration steps.  This is not a full-blown step-by-step guide; it’s more of an outline of the basic steps needed.


Pre-reqs:  You have SAN or NAS storage that is using array-based replication over to your disaster recovery site.  You’ll need 2 Windows servers for the SRM installation.  They can be virtual or physical; I chose VMs running Windows Server 2008 R2 (1 vCPU & 4 GB of RAM).  You’ll also need a SQL database server on both sides, as the SRM installation requires a SQL database.


Note:  Although installing SRM on your vCenter server is supported, it is not the VMware best practice; best practice is to put SRM on its own server.

1. Download VMware SRM & the SRA (Storage Replication Adapter).  The SRA is like a plugin that SRM uses to communicate with your SAN/NAS devices.

2. On your Windows server at the primary location (from now on noted as the Protected Site), launch the VMware SRM install exe.  It’s basically a next, next, finish type of install.  The only catch is having an ODBC connection (using the SQL Native Client) pre-created that connects to your empty SRM database.  If the ODBC connection is pre-created, you can just select it from the drop-down menu.  You’ll also need an administrative username/password that SRM will use to communicate with vCenter.  This can be a local admin account on the vCenter server or an Active Directory account that has been made a local admin on your vCenter servers.

3. When the installation is finished, you’ll need to install the SRA you downloaded.  In my case it was the EMC Celerra SRA.  This is also a next, next, finish installation.

4. Now launch the vSphere Client and click on Manage Plug-ins.  There will be a plug-in for VMware Site Recovery Manager.  Install the plug-in; it’s a next, next, finish type of install.

5.Once installed, close and re-open vCenter.

6. From the Home section of vCenter, at the bottom, there will be a new Site Recovery icon.

7. Basically, steps 1-6 now need to be repeated at the DR site (from now on noted as the Recovery Site).


Now on to basic configuration


1. On the Protected Site’s vCenter, launch the Site Recovery icon from the Home section.

2. First, the Recovery and Protected Sites need to be “paired”.  To do so, click Configure under Connection.  To complete the pairing you’ll need the IP address of the Recovery Site’s SRM server, along with the username/password that SRM uses to connect to vCenter (set during installation).

3. Once pairing is complete, the array managers need to be configured.  Still on the Protected Site, click Configure under Array Managers.  You’ll need the management IP addresses of your NAS/SAN along with an admin account.  (You’ll need this information for your storage on both sides, the Protected and Recovery Sites.)

4. Still on the Protected Site, click Configure under Inventory Mappings.  Here you will set mappings from the Protected Site to the Recovery Site.  In other words, dvSwitch port group X on the Protected Site will map to dvSwitch port group X on the Recovery Site.

5. Now a Protection Group needs to be created.  Click Create under Protection Groups.  Protection Groups are a logical grouping of the VMs that need to be recoverable at the DR site.

6. Finally, launch the Site Recovery icon on the Recovery Site’s vCenter and create a Recovery Plan.

A few weeks ago I wrote about the new vSphere 5 licensing.  Well, it appears it has changed for the better.  I think this will help some people swallow the pill of upgrading to vSphere 5.

If you're not familiar with the new vSphere 5 licensing system you can read about it Here.

VMware has decided to raise the memory count for each license in the vRAM entitlement.

Here was the original vRAM entitlement for each type of VMware license:

  • Standard - 24GB
  • Enterprise - 32GB
  • Enterprise Plus - 48GB

Now they changed the vRAM counts to:

  • Standard - 32GB
  • Enterprise - 64GB
  • Enterprise Plus - 96GB
More information can be found on the vmware blog Here.

VMware has released special licensing for the vSphere 5 product if you're going to use the hypervisor for VDI environments.  The licensing is a lot easier to follow: a simple $6,500 per 100 concurrent desktops.  This product is called "vSphere Desktop".  The hypervisor itself will have all the standard functionality of vSphere 5, but you're only supposed to be allowed to run desktop OSes.

The only problem I have is that, I suppose, you'll still have to buy regular vSphere 5 licensing for your infrastructure servers.  For example, if you're running XenDesktop, you'll need to run your Citrix Provisioning Servers, XenDesktop brokers, & Web Interfaces on normal vSphere 5.

I'm guessing this whole thing is going to be an honor system, but I think VMware is making things more complicated than they need to be.  For more information on this "vSphere Desktop" product & licensing, click Here.

vSphere 5 was announced yesterday (7/12/11), and naturally I started reading about what has changed.  Really, there aren’t many new features; nothing that would make you want to upgrade if you have an existing environment.  The full list of new features can be found HERE.  Here’s a short list of a few of them:

  • ESXi only for thinner footprint (does nothing for customers already running on ESXi)
  • New Virtual Machine hardware version 8
    • VM hardware version 8 supports 3D graphics for Windows Aero & USB 3.0 devices
  • Support for Apple Xserve servers running OS X Server 10.6 (Snow Leopard) as a guest operating system
  • Larger Guest VMs with up to 32 vCPUs and 1TB of RAM.
  • VMware vCenter Server can now run as a Linux based appliance
  • vMotion over higher latency networks is now supported (What does this really mean, I have no idea.  It’s a very vague statement)
  • vSphere Auto Deploy - I believe this is a replacement product for Update Manager?  I thought Update Manager worked well, not sure this was really needed.
I’d like to mention that I see no real compelling reason to upgrade to vSphere 5 right now with no big new feature being added.

Now that we’ve gotten some of the newer features out of the way, let’s talk about the biggest change: licensing.  I am extremely unhappy with the way VMware decided to alter licensing.  If you’re not really familiar with how licensing in vSphere 4 worked, here’s a simple breakdown: it was based on the number of physical CPUs in the ESX/ESXi host, and each per-CPU license had a limit on the number of cores per CPU.

vSphere 5 has taken away the limit on the number of CPU cores, but it restricts the amount of RAM per CPU license.  The RAM count differs for each type of license; here is the breakdown:

  • Standard - 24 GB of RAM
  • Enterprise - 32 GB of RAM
  • Enterprise Plus - 48 GB of RAM
In my current production environment our ESXi hosts have 4 physical CPUs with 512 GB of RAM each, so I only needed 4 licenses per host.  With the new vRAM entitlement I will need 11 Enterprise Plus licenses for each ESXi host.  And with VMware increasing the maximum guest VM memory to 1 TB, it would be terribly expensive to build a guest VM with that much memory!

I will be discussing this with my VMware sales rep in greater detail, but if everything I’ve read is indeed correct, VMware is going down a very slippery slope.  Citrix XenServer, anyone?
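For the curious, that 11-license figure is just the host's RAM divided by the per-license vRAM entitlement, rounded up (and since licenses are sold per CPU, a host always needs at least one license per physical CPU).  A quick sketch using the numbers from this post:

```shell
# ceil(512 GB host RAM / 48 GB Enterprise Plus vRAM entitlement)
ram_gb=512
entitlement_gb=48
licenses=$(( (ram_gb + entitlement_gb - 1) / entitlement_gb ))
echo "$licenses"    # prints 11
```

Under the old per-CPU scheme the same 4-CPU host needed only 4 licenses.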

You can read in great detail about the licensing HERE

      I tend to see a lot of system administrators who never really understand networking in general.  They are more interested in configuring their applications like Microsoft Exchange or Citrix XenApp; the list can go on.  I’m speaking from experience when I say that.  It’s not really a problem till they decide they want to learn VMware vSphere.  I never really understood networking either until I started learning VMware.

     I recently found myself explaining virtual switching to one of my co-workers who was just learning VMware, so I figured I’d write a little article on the concept to help others who are just getting into VMware understand it a little better as well.  One of the first concepts to understand about virtual switching is that a virtual switch in VMware is just like a dumb physical switch, except that it supports VLAN tagging by way of vSwitch port groups.  Let me explain what I mean in greater detail.  A VLAN is a way network engineers can create multiple virtual/logical networks on one physical network.  For example, say I have a 24-port switch and assign 12 ports to VLAN 7 and the other 12 ports to VLAN 9.  If I plugged a computer into one of the first 12 ports and another computer into one of the second 12, they wouldn’t be able to talk to each other unless there was a router to route the traffic between the two networks (but that’s a discussion for another time).  So, now that you somewhat understand the concept of a VLAN, let’s get back to virtual switching.

     There are different types of virtual switches, but the ones I mainly wanted to cover are the ones used for guest OS VMs; these are called “Virtual Machine” switches.  In vSphere ESX/ESXi 4.1 Update 1 (the current newest version), a standard virtual switch can have up to 4,088 switch ports.  That’s obviously quite a bit larger than the 24-port physical switch I used as an example.  So the next logical question is “Why are there so many ports?!”  Well, I’m glad you asked.  Just like every server or desktop PC needs a network switch port to get access to the network, so does a guest VM.

     Let’s say your vSphere server physically has 4 network interface cards that we intend to assign for guest VM network traffic (stay with me here as we make a mental leap!).  What needs to happen is that those 4 physical NICs are assigned as uplink ports to a virtual switch.  In other words, the virtual switch has 4,088 virtual ports for your guest VMs, but when any guest VM connected to one of those ports needs to reach something on your physical network, the traffic goes over one of those 4 physical uplinks.  I hope that makes sense.  So, where do VLANs come into play?  Like I said earlier, VLAN tagging in vSphere is done by what VMware calls port groups.  A port group is exactly what its name says: a logical grouping (collection) of ports on the virtual switch.  We know that a vSwitch has 4,088 ports; if I created a port group, by default it would take 128 ports into its own little group.  So, if I assigned VLAN 5 to that group of 128 ports, any guest VM assigned to the port group would be able to send network traffic to any other guest VM assigned to the same port group.  The configuration explained in this article is extremely basic; there’s much more that can be done with virtual switching.  I just wanted to help newcomers understand the basic concept of how virtual switching is used.  I hope this was able to help someone out there.  Till next time.
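To make the concept concrete, here is roughly what creating a vSwitch, attaching an uplink, and adding a VLAN-tagged port group looks like from the ESX 4.x command line using the esxcfg-vswitch tool.  This is only a sketch; vSwitch1, vmnic1, and the "VLAN5" port group name are made-up examples, and in practice most people do all of this from the vSphere Client:

```shell
esxcfg-vswitch -a vSwitch1             # create a new standard virtual switch
esxcfg-vswitch -L vmnic1 vSwitch1      # attach a physical NIC as an uplink
esxcfg-vswitch -A VLAN5 vSwitch1       # add a port group named VLAN5
esxcfg-vswitch -v 5 -p VLAN5 vSwitch1  # tag that port group with VLAN ID 5
```

Any guest VM whose network adapter is assigned to the VLAN5 port group then sends its traffic tagged with VLAN 5 out the vmnic1 uplink.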

Recently I was working on my ESXi lab environment and I wanted to get ESXi to detect a Realtek onboard NIC.  There's nothing special about the servers; they were machines I put together from desktop components.  However, I did have a Broadcom PCI-E dual-port NIC in each server.  After ESXi 4.1 U1 was installed, ESXi only picked up the 2 NICs on the dual-port card.  I wanted to get the onboard NIC working also.  Officially the Realtek NICs are not supported on the VMware HCL, but obviously for a lab/test environment you may want to get them working.  Here's what I did to get the NIC detected by ESXi.

1.  Download a new oem.tgz file from Here (this file has the Realtek 8111/8168B driver in it)

2.  Browse your local datastore from your vSphere Client.

3.  Upload the oem.tgz from above to the datastore.

4.  Enable SSH on your ESXi server.

5.  SSH into your ESXi server and change directory to /vmfs/volumes/Hypervisor1.

6.  In that directory you'll see an oem.tgz.  Rename the file to oldoem.tgz. (e.g. mv oem.tgz oldoem.tgz)

7.  Change directory to where you uploaded the new oem.tgz, then copy it to the Hypervisor1 location. (e.g. cp oem.tgz /vmfs/volumes/Hypervisor1)

8.  Reboot your ESXi server and when it comes back up, the Realtek RTL 8111/8168B PCI Express Gigabit Ethernet Controller should appear in your list of network adapters.  Attached is a picture of it installed and working.
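The file swap in steps 6-7 can be sketched as a small shell function.  The two paths are passed in as arguments because on the host they depend on your setup: the bootbank is /vmfs/volumes/Hypervisor1, and the upload location is whichever datastore folder you copied the new oem.tgz to:

```shell
# Sketch of steps 6-7: back up the stock oem.tgz, then drop in the new one.
# bootbank = /vmfs/volumes/Hypervisor1 on the host; upload = your datastore folder.
swap_oem() {
    bootbank="$1"
    upload="$2"
    mv "$bootbank/oem.tgz" "$bootbank/oldoem.tgz"   # step 6: keep the original
    cp "$upload/oem.tgz" "$bootbank/"               # step 7: install the new file
}
```

After running the equivalent commands over SSH, reboot the host as in step 8.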


About this Blog

This blog's birthday is 7/1/11.  Here you will find IT technical documentation and also views on IT from an enterprise business perspective.

The blog is mostly for myself, as a way of archiving cool tips and tricks I pick up along the way.  However, I hope anything I post can benefit other IT professionals in their own projects.  Eventually you will find things here related to many different infrastructures/products including (but not limited to) VMware, Citrix, EMC, Microsoft Windows, SQL, & PowerShell.

This blog is also Mobile Friendly!

About Me

My name is Jody Wong.  I'm an experienced IT professional currently residing in Houston, TX.  I work for Gunvor USA; Gunvor is a financial (commodities trading) company.  I've been working in the IT field for about 15 years now and try to keep a broad IT skillset.  You can contact me on my LinkedIn profile below if needed.  I'm open to new ventures, expertise requests, getting in touch, & new opportunities.

Linked In Profile: Click Here 

I hold the following professional IT Certifications:

ITIL - IT Infrastructure Library V3 Foundation for Service Management

VCP - VMware Certified Professional VMware Infrastructure 3

VCP - VMware Certified Professional vSphere 4

VCP - VMware Certified Professional vSphere 5

VCP - VMware Certified Professional 6 Data Center Virtualization

VCP - VMware Certified Professional 5 on VMware View

CCA - Citrix Certified Administrator PS4, XenApp 5 on 2K8, & XenApp 6.5

CCA - Citrix Certified Administrator Provisioning Server 5

CCA - Citrix Certified Administrator XenDesktop 5

MCITP Windows Server 2008 Administrator

MCTS Windows Server 2008 Active Directory

MCTS Windows Server 2008 Network Infrastructure

MCP - Microsoft Certified Professional WindowsXP & Server 2003

CCNA - Cisco Certified Network Associate

CCENT - Cisco Certified Entry Network Tech

NET+ - CompTIA Network+
