P2V a SQL cluster with VMDK – step by step

In a previous post I explained how to P2V an existing MSCS environment into VMware using RDMs. In this post I would like to explain how to convert an existing MSCS environment to VMDKs. You may ask, what is the difference? Well, it depends on the I/O, the back-end storage setup, or the policy from your storage team. Maybe your storage team has a policy that all clustered disks must be managed by SAN software in the guest OS. Maybe your back-end storage does not support RDMs (only Fibre Channel is supported today in 5.x).

Maybe the I/O is above a certain threshold and should not be on a VMDK. What is that threshold? To investigate your current I/O, look into your array management software or ask your storage team. Maybe the storage in VMware is already taxed and cannot handle the additional load. To find out what the current I/O is, you can use tools like VMware vCenter Operations, Dell's Foglight for Virtualization, or a free PowerCLI script. There are a lot of underlying metrics you will need to look into before you decide to even virtualize the cluster, and storage is the platform that will affect you the most. How much is too much I/O for a VMDK? It all depends on your storage array. Maybe you have a Pure Storage array with all SSD drives; they will have guidelines for you to follow. Perhaps you have a VNX array with FAST Cache; they will have guidelines for you to follow. For me, anything over 1,000 IOPS in the physical world would need to be thoroughly investigated before conversion. That being said, capacity and the current virtual infrastructure need to be evaluated any time new machines are deployed.
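
If you don't have vCenter Operations or Foglight handy, a quick PowerCLI sketch along these lines can pull read/write IOPS for the VMs already running in the environment. The vCenter name, VM name, and time window below are placeholders, and the historical counters depend on your vCenter statistics level:

```powershell
# Rough PowerCLI sketch: average and peak read/write IOPS for an existing VM over
# the last week. Assumes PowerCLI is installed and the vCenter statistics level
# retains the virtual disk counters used below.
Connect-VIServer -Server vcenter.example.local    # placeholder vCenter name

$vm      = Get-VM -Name "SQLNODE1"                # placeholder VM name
$metrics = "virtualdisk.numberReadAveraged.average",
           "virtualdisk.numberWriteAveraged.average"
$stats   = Get-Stat -Entity $vm -Stat $metrics -Start (Get-Date).AddDays(-7)

$stats | Group-Object MetricId | ForEach-Object {
    [pscustomobject]@{
        Metric  = $_.Name
        AvgIOPS = [math]::Round(($_.Group | Measure-Object Value -Average).Average, 1)
        MaxIOPS = ($_.Group | Measure-Object Value -Maximum).Maximum
    }
}
```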

Here are the steps I use when doing a P2V of an MSCS cluster into VMDKs:

Step 1. Check the cluster owner. The shared disks must stay visible during the conversion process; do not take the cluster offline.
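
On the nodes themselves, something like the following shows which node currently owns the disk groups (this assumes Windows Server 2008 R2 or later with the FailoverClusters PowerShell module; on older builds, "cluster group" from cluster.exe gives the same information):

```powershell
# List the cluster groups and the physical disk resources with their current owners.
Import-Module FailoverClusters
Get-ClusterGroup | Select-Object Name, OwnerNode, State
Get-ClusterResource |
    Where-Object { $_.ResourceType -eq "Physical Disk" } |
    Select-Object Name, OwnerGroup, OwnerNode, State
```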

Step 2. Note which disk is assigned to which drive letter, and note the size of each drive to help identify it after the conversion. If needed, also note the contents of each drive.
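
A quick way to capture this on each node before the conversion (WMI is used here so it also works on older guest OS versions):

```powershell
# Record drive letter, volume label, and size for every local disk on this node.
Get-WmiObject Win32_LogicalDisk -Filter "DriveType = 3" |
    Select-Object DeviceID, VolumeName,
                  @{Name = "SizeGB"; Expression = { [math]::Round($_.Size / 1GB, 1) }} |
    Format-Table -AutoSize
```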

Step 3. Note the IP configuration of each node.
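
For example, a short snippet run on each node can dump the addressing to a file for reference after the cutover:

```powershell
# Save the full IP configuration of this node before it is converted.
ipconfig /all | Out-File "$env:TEMP\ipconfig-before-p2v.txt"
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
    Select-Object Description, IPAddress, IPSubnet, DefaultIPGateway, DNSServerSearchOrder
```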

Step 4. Convert both physical nodes. The node that has the active drives will be sVmotion'ed to a shared volume after the conversion. The conversion process creates lazy zeroed thick disks; when the drives are sVmotion'ed in step 5, the shared disks are converted to eager zeroed thick.
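
Once Converter finishes, a quick PowerCLI check along these lines confirms the format the disks landed in (the VM name is a placeholder):

```powershell
# Converter typically leaves the disks as lazy zeroed thick; the Storage vMotion
# in step 5 is where they get rewritten as eager zeroed thick.
Get-VM -Name "SQLNODE1" | Get-HardDisk |
    Select-Object Name, CapacityGB, StorageFormat, Filename
```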

Step 5. At this point, one node has the shared disks and the other does not. sVmotion the disks to a shared VMFS volume as eager zeroed thick.
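
A rough PowerCLI sketch of that move, assuming the shared data disks are everything except the OS disk (the names and the "Hard disk 1" assumption are placeholders to adjust):

```powershell
# Move the shared data disks to the shared VMFS volume, converting them to
# eager zeroed thick during the Storage vMotion.
$vm        = Get-VM -Name "SQLNODE1"
$datastore = Get-Datastore -Name "Shared-VMFS-01"

Get-HardDisk -VM $vm |
    Where-Object { $_.Name -ne "Hard disk 1" } |    # assume disk 1 is the OS disk
    Move-HardDisk -Datastore $datastore -StorageFormat EagerZeroedThick -Confirm:$false
```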

Step 6. Configure the primary VM with a second SCSI controller. Assign each shared drive to SCSI 1:0, 1:1, 1:2, etc. (see the sketch after step 7).

Step 7. Configure the disks for virtual compatibility mode (with VMDKs, this is the bus sharing setting on the new SCSI controller).
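
Steps 6 and 7 can be scripted together in PowerCLI, since the bus sharing mode is set on the same controller the shared disks are placed on. A sketch, with the VM name, controller type, and OS-disk assumption as placeholders:

```powershell
# Put the shared disks on a new SCSI controller (SCSI 1) with the sharing mode the
# clustered disks need. PowerCLI attaches the disks to the new controller in order
# (1:0, 1:1, 1:2, ...); verify the unit numbers in the vSphere Client afterwards.
$vm          = Get-VM -Name "SQLNODE1"
$sharedDisks = Get-HardDisk -VM $vm | Where-Object { $_.Name -ne "Hard disk 1" }

New-ScsiController -HardDisk $sharedDisks -Type VirtualLsiLogicSAS -BusSharingMode Virtual
```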

Step 8. Configure the second VM node. Place this VM on a separate VMware host and create HA and DRS rules so the two nodes remain separate. Add a second SCSI controller by attaching the existing shared disks from step 5, using the same SCSI assignments (1:0, 1:1, 1:2, etc.).
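
A sketch of the vCenter side of this step: an anti-affinity DRS rule plus attaching one of the existing shared VMDKs to the second node. The cluster, VM, and datastore path names are all placeholders; repeat the disk attachment for each shared VMDK:

```powershell
# Keep the two cluster nodes on separate hosts.
$node1   = Get-VM -Name "SQLNODE1"
$node2   = Get-VM -Name "SQLNODE2"
$cluster = Get-Cluster -Name "Prod-Cluster"
New-DrsRule -Cluster $cluster -Name "MSCS-SeparateNodes" -KeepTogether $false -VM $node1, $node2

# Attach an existing shared disk from step 5 to the second node, then place it on a
# new controller with the same bus sharing mode used on node 1.
$disk = New-HardDisk -VM $node2 -DiskPath "[Shared-VMFS-01] SQLNODE1/SQLNODE1_1.vmdk"
New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Virtual
```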

Step 9. Power on both VMs and complete the usual post-P2V cleanup (remove leftover physical hardware devices, drivers, and vendor agents).
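
The in-guest side of the cleanup is standard post-P2V housekeeping; for example, exposing the non-present (ghost) hardware left over from the physical server so it can be uninstalled:

```powershell
# Show non-present devices in Device Manager, then uninstall the ghost hardware and
# any physical-vendor agents (array agents, NIC teaming software, etc.).
$env:devmgr_show_nonpresent_devices = 1
devmgmt.msc    # View > Show hidden devices, then uninstall the grayed-out entries
```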

Step 10. Add the NICs for the cluster (private and public networks).
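
For example, from PowerCLI (the port group names are placeholders; VMXNET3 assumes VMware Tools is installed, which it will be after the cleanup in step 9):

```powershell
# Add the public and the private heartbeat NICs; repeat for the second node.
$vm = Get-VM -Name "SQLNODE1"
New-NetworkAdapter -VM $vm -NetworkName "Prod-Public"    -Type Vmxnet3 -StartConnected
New-NetworkAdapter -VM $vm -NetworkName "MSCS-Heartbeat" -Type Vmxnet3 -StartConnected
```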

Step 11. Once the cluster is online, log in as a domain user and correct the cluster drive letters noted in step 2 from the cluster manager console.
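
To double-check the mapping before touching anything, a quick look at the physical disk resources helps (the sizes noted in step 2 identify each disk); reassigning a letter is then done from the cluster or disk management consoles, or with diskpart on the owning node:

```powershell
# Confirm which clustered disks are online and who owns them before fixing letters.
Get-ClusterResource |
    Where-Object { $_.ResourceType -eq "Physical Disk" } |
    Select-Object Name, OwnerGroup, OwnerNode, State

# Example diskpart session to reassign a letter (volume number and letter are
# placeholders); run it on the node that currently owns the disk:
#   list volume
#   select volume 3
#   assign letter=Q
```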

 

P2V to VMDKs
