VMware Certification Exams 75% off During VMworld 2013!!

So I was at the local VMUG meeting here in Houston today and everyone was notified about the discount. Can you believe it? 75% off of exams? Never in the history of exams has there been such a discount! I only hope I can take the DCA and DCD exams in one week! Has anyone attempted such a thing?

http://blogs.vmware.com/education/2013/06/vmware-certification-exams-75-off-during-vmworld.html

Update: I have had no luck getting the web page to work. I had to call Pearson to schedule my exam: 800-676-2797.


New to the vExpert family

This week I had an explosion of Twitter followers. It took me a little while to find out that I had landed on the 2013 list of VMware vExperts. I had to go back a few times (refresh/refresh) to make sure my name was really there. I am truly honored to be on the list this year next to so many well-known names. I had applied for the past three years without being selected, and I have been fairly active in the virtualization community since the VMware 2.0 days. It is great to be involved and spread the word about virtualization.

If you want to see some interesting statistics on the vExpert group, they can be viewed here: http://www.thinkmeta.net/2013/06/having-fun-with-vexpert-to-employee-ratios/ and http://blog.vmforsp.com/2013/05/2013-vexpert-group-by-the-numbers/

I do not work for any of the giant vendors; I'm at a global financial services company. Congratulations to everyone who made it!

I’m finding that there are all kinds of perks that come with the title:

TrainSignal is giving a year’s worth of free training to all vExpert recipients. http://www.trainsignal.com/blog/2013-vexperts-trainsignal

Nutanix is giving out vExpert pint glasses at VMworld 2013: http://basraayman.com/2013/06/05/nutanix-2013-vexpert-gift/

Tintri is giving away free vExpert shirts with your Twitter handle: http://www.tintri.com/congratulations-2013-vexperts


P2V a SQL cluster with RDM – step by step

The P2V process for servers has always been pretty straightforward. You find a physical server you can consolidate into VMware, evaluate its load using capacity tools, assess the virtual infrastructure for the expected load (cluster capacity, recovery model, CPU, RAM, and disk size/IO), and convert it into a virtual machine. After the machine is converted, you shut down the physical server, clean up the new VM, and everything is off and running.

But what about MSCS servers? Clustering introduces a new dynamic to the conversion process. Do you P2V into a one-node cluster? Do you use RDMs or VMDK files? How do you move the clustered services around? Where do you use the paravirtualized controller? What about disk signatures in the cluster for RDMs? What happens to the heartbeat network? And if you are going to build an exact mirror of the two-node cluster, what is the process?

There are many different methods you can use to complete this process; you could, for example, create new VMs and restore onto them. This is the method I will demonstrate: P2V a two-node SQL MSCS cluster into VMware, using exactly the same cluster setup as the physical model. During the conversion we will use RDMs in physical compatibility mode, which is required when clustering with a physical server according to the VMware SQL cluster best practices guide.
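Before cutting over, it is worth verifying that the converted nodes really present their shared LUNs as physical-mode RDMs on a physically shared SCSI bus. Here is a minimal read-only sketch of such a check using pyVmomi (my own illustration, not part of the conversion procedure itself); the vCenter address, credentials, and the sqlnode1/sqlnode2 names are placeholders:

```python
# Read-only check: confirm each cluster node's shared disks are physical-mode
# RDMs and that their SCSI controllers use physical bus sharing.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; trust certs in production
si = SmartConnect(host='vcenter.example.com',          # placeholder vCenter
                  user='administrator@vsphere.local',  # placeholder credentials
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.name not in ('sqlnode1', 'sqlnode2'):  # placeholder node names
        continue
    print(vm.name)
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualSCSIController):
            # For cluster-across-boxes, the shared-disk controller should
            # report physicalSharing.
            print('  %s busSharing=%s' % (dev.deviceInfo.label, dev.sharedBus))
        elif isinstance(dev, vim.vm.device.VirtualDisk):
            backing = dev.backing
            if isinstance(backing,
                          vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
                # compatibilityMode should be 'physicalMode' when clustering
                # with a physical node; lunUuid identifies the array LUN.
                print('  %s RDM mode=%s lun=%s'
                      % (dev.deviceInfo.label, backing.compatibilityMode,
                         backing.lunUuid))
            else:
                print('  %s is a plain VMDK (not an RDM)' % dev.deviceInfo.label)

view.Destroy()
Disconnect(si)
```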


SQL P2V blog

Why should I virtualize SQL?

For the future.

I have run across countless admins who say, “SQL should not be virtualized; hypervisors just cannot handle the load.” That is one of the most common reasons I hear not to virtualize SQL. Is it true? It depends. A poorly planned virtualized SQL environment can suffer from performance issues, just like a physical environment. There is no cookie-cutter plan that will tell you how your environment should be laid out, only guidelines to help you get the most out of your virtualized SQL environment. If you find instructions on how to virtualize SQL, take them as a guide and not a plan set in stone.

Some companies may want to use only iSCSI storage versus Fibre Channel; others may opt for SQL clusters versus single-instance servers. There are countless other choices: the paravirtualized SCSI controller, physical versus virtual RDMs, running SQL in a cluster (up to five nodes in vSphere 5.1) versus single-instance SQL servers, proper zoning of the shared disks in a cluster, and vNIC types. And most important of all: do the DBAs, network admins, and VM admins all have a clear understanding of how the environment should be laid out for maximum performance?
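To make a couple of those guidelines concrete, here is a hedged pyVmomi sketch of my own (not from any best-practices guide) that reports whether each SQL VM has a paravirtual SCSI controller and which vNIC adapter types it uses. Filtering on “sql” in the VM name is just an assumed naming convention:

```python
# Audit SQL VMs for PVSCSI controllers and vNIC adapter types.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com',          # placeholder vCenter
                  user='administrator@vsphere.local',  # placeholder credentials
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if 'sql' not in vm.name.lower():  # assumed naming convention
        continue
    devices = vm.config.hardware.device
    has_pvscsi = any(isinstance(d, vim.vm.device.ParaVirtualSCSIController)
                     for d in devices)
    nics = [type(d).__name__ for d in devices
            if isinstance(d, vim.vm.device.VirtualEthernetCard)]
    print('%s  PVSCSI=%s  vNICs=%s' % (vm.name, has_pvscsi, nics))

view.Destroy()
Disconnect(si)
```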

Consolidation has been the biggest driver of virtualization, and big tier 1 applications should not be immune from it. As hardware reaches EOL we always have to plan for migrations, and it takes less overhead to move a virtual server from one hardware platform to another than to spend man-hours working on the application server itself. For the future, I try to always put applications on a virtualized platform because of these benefits.

I have heard and seen many of the negatives in forums and from other admins. Here are some of the horror stories:

1. One admin was running a virtualized SQL environment in an MSCS cluster. The shared quorum and data disks were zoned and properly set up in the cluster in physical compatibility mode, and things had been running smoothly for months. Then a VM admin noticed some unfamiliar disks in vCenter and reported them as storage that could be reclaimed by the storage team. As soon as the storage team removed the disks, the virtualized SQL environment failed. My question is this: how do you prevent this in a physical environment? This scenario can just as easily happen in a physical cluster outside of VMware or any other hypervisor. It sounds like an undocumented configuration that the VM admin was unaware of; the VM admin should have investigated further, and the storage admins should have had documentation on what was provisioned for each server. Was the hypervisor responsible for this outage? You decide. If there is not good communication between the DBAs, VM admins, and storage admins, it may not be a good idea to virtualize SQL (or any server).

2. “SQL just doesn’t perform well.” I have heard this from countless admins at local VMUGs, but none can point to a direct cause of the performance issue. If you are overprovisioning the hypervisor, starving SQL for resources, not setting resource reservations, not properly configuring storage, not taking advantage of NUMA, running improperly configured physical NICs, or just running SQL on VMware Workstation (silly), I promise you will suffer performance issues. There is no single answer for a poorly performing virtualized SQL environment; any one of the items listed could be the culprit. Having a good monitoring system in place like vFoglight can give you insight into these issues, and even a simple reservation audit helps (see the first sketch after this list). In a perfect world, a virtualized SQL environment would sit on an auto-tiering storage array, have all 10Gb physical NICs, ride on Fibre Channel SSD, and be housed in a dedicated hypervisor cluster with like application servers. Since we don’t all have this, it is up to you to determine how best to run a virtualized SQL environment in your infrastructure, if you can at all.

3. Virtualizing SQL creates too much complexity, and DBAs have a hard time managing performance because they don’t understand VMware. Are there more steps in setting up an MSCS SQL environment in VMware? Yes. For a VM admin this should be a routine process, and the end user or DBA should be oblivious as to whether the server is virtual or physical. At least in a virtual server you can hot add resources (see the second sketch after this list). Troubleshooting does add one more person to the role: the VM admin.
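On the resource-starvation point in story 2, a simple audit at least flags the obvious gaps. A minimal pyVmomi sketch of my own, where the “sql” name filter is again just an assumed convention, listing SQL VMs that run without CPU or memory reservations:

```python
# Flag SQL VMs that run without CPU or memory reservations.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com',          # placeholder vCenter
                  user='administrator@vsphere.local',  # placeholder credentials
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if 'sql' not in vm.name.lower():  # assumed naming convention
        continue
    cpu_mhz = vm.config.cpuAllocation.reservation    # reserved CPU in MHz
    mem_mb = vm.config.memoryAllocation.reservation  # reserved memory in MB
    if not cpu_mhz or not mem_mb:
        print('%s: cpu reservation=%s MHz, mem reservation=%s MB'
              % (vm.name, cpu_mhz, mem_mb))

view.Destroy()
Disconnect(si)
```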
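And on the hot-add point in story 3, the flags have to be enabled ahead of time, while the VM is powered off. A hedged sketch along the same lines, with sqlnode1 a placeholder VM name:

```python
# Enable CPU and memory hot add on a powered-off VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com',          # placeholder vCenter
                  user='administrator@vsphere.local',  # placeholder credentials
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'sqlnode1')  # placeholder name

spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True, memoryHotAddEnabled=True)
WaitForTask(vm.ReconfigVM_Task(spec=spec))  # fails if the VM is powered on

view.Destroy()
Disconnect(si)
```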

It all comes down to this: can your teams support it?

Can a NASCAR vehicle keep going all day? Not without a good pit crew! The same goes for any virtual or physical infrastructure.

To have a successful SQL virtualization project, all teams must have an open mind and an understanding of how the hypervisor should function in your environment. Expectations should be set as well: creating a virtualized SQL environment does not always mean better performance. It depends on what you are coming from and going to. There is a big difference between an old Dell 2850 with 32GB of RAM in a two-node MSCS SQL cluster on direct-attached SCSI storage and a Dell M820 blade with 768GB of RAM in a five-node virtualized MSCS SQL cluster on a Pure Storage (SSD) array with Fusion-io cards. Don’t let the storage admins or the DBAs scare the CIO with horror stories of failed implementations. The Freddy Krueger hypervisor is coming to get you! There are plenty of marketing white papers out there with success stories. I am not published in any VMware white papers for doing SQL clusters, but I can tell you that I have been doing it successfully for years!

In my next post I will walk through the process of P2Ving an existing two-node SQL cluster. I will be virtualizing two SQL blades that use an EMC Fibre Channel array for shared storage.

Please review the Microsoft SQL Server on VMware Best Practices Guide.

XenDesktop 5.6 FP1 and vSphere housekeeping

It is pretty common to find Xen products running on top of VMware as the primary hypervisor, so I would like to spark a contributed list of housekeeping items for Xen products on top of VMware. Of course we would like the two products to coexist peacefully like a Xen garden, but that is not always the case.

I would like to bring to your attention the orphaned VMDK issue with XenDesktop 5.6 FP1 when using personal vDisks. Provisioning and deprovisioning desktops is a breeze when using MCS (Machine Creation Services from the DDC) or PVS (Provisioning Services) over PXE boot, but the deprovisioning process can leave orphaned VMDK files behind in VMware. When you build a desktop with the personal vDisk option, you end up with two separate disks: the identity disk and the pvDisk. When you go through machine deprovisioning (deleting), Citrix knows to delete the virtual machine, but the _pvdisk.vmdk file is left behind. Maybe this is by design, so that you can retrieve a user’s personal vDisk later, but it is still an orphaned VMDK taking up valuable storage. You can either migrate these files to cold storage or delete them; just be aware that they are sitting there. I like to use Quest PowerGUI with the vSphere best practices plugin to search for orphaned VMDK files on a regular basis.
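If you don’t have PowerGUI handy, the same kind of search can be scripted. Here is a hedged pyVmomi sketch (my own approximation, not the plugin’s logic) that lists *_pvdisk.vmdk files that no registered VM references; the connection details are placeholders:

```python
# Find *_pvdisk.vmdk files on every datastore and flag any that no
# registered VM currently references.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com',          # placeholder vCenter
                  user='administrator@vsphere.local',  # placeholder credentials
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

# Collect every VMDK path attached to a registered VM.
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
attached = set()
for vm in vm_view.view:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            attached.add(dev.backing.fileName)  # "[datastore] folder/disk.vmdk"
vm_view.Destroy()

# Search each datastore for pvdisk descriptor files.
ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in ds_view.view:
    spec = vim.host.DatastoreBrowser.SearchSpec(matchPattern=['*_pvdisk.vmdk'])
    task = ds.browser.SearchDatastoreSubFolders_Task('[%s]' % ds.name, spec)
    WaitForTask(task)
    for result in task.info.result:
        for f in result.file:
            path = result.folderPath + f.path
            if path not in attached:
                print('orphan candidate: %s' % path)
ds_view.Destroy()
Disconnect(si)
```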

Let’s keep this rolling with other housekeeping suggestions.


Rage against the cloud

Since the dawn of virtualization, virtual provisioning has been a great cost saver and a means to reduce IT overhead. Building an internal cloud in the company datacenter meant that we could reduce head count in the IT operations department. If I had 1,000 physical servers in my datacenter, I would need at least two staff members in IT operations to take care of hardware failures, maintenance, and provisioning. If I virtualized those same 1,000 servers into two blade enclosures with five blades each, I might only need one IT operations staff member, or none at all; why not just pay for services from the hardware vendor or datacenter to replace that person altogether?

Virtualization has always been an easy business justification. The cost savings just cannot be ignored: staff reductions, ease of provisioning, smaller hardware footprints, power savings; the list goes on and on. The private cloud has existed for many years now, the public cloud for less. Over the past 10 years I have seen rapid adoption of the private cloud, but the public cloud has met much resistance.

Is resistance to the public cloud futile? Many IT departments see the public cloud as a Borg cube assimilating all the tier 1 applications in the infrastructure and taking jobs with it. It seems today it’s either adapt to service the cloud or move to an environment where policy dictates that the IT infrastructure cannot live in the public cloud (for now), perhaps due to HIPAA or some other regulation. You will find that most public cloud providers are addressing these concerns by becoming compliant with many of the regulations that stopped businesses before. Once those sales guys get a hook into the IT manager or CIO, there may be no stopping the public cloud assimilation.

Not all public clouds are bad. I think they can be a great integration point for a merged private and public cloud infrastructure, and a great tool for DR. But if the Exchange administrator sees his job going out the window because the email environment is being shipped out to a third-party hosted email system, you may see that staff member’s résumé out on the job boards pretty quickly. The same goes for VMware administrators: if the company goes all in on the public cloud and the datacenter is now in charge of provisioning, there would not be much use for a VMware administrator once managed services in a hosted datacenter take over.

The point here is that IT has created a great job industry that is still evolving. As the infrastructure evolves, look to the future cost savings for the business and think about how they line up with your career. Don’t assume that CEOs, CIOs, or any management will not consider the flexibility and cost savings of a public cloud provider. You will not be able to get away with running outdated hardware or Windows 2000 forever. Prepare to be assimilated into the IT infrastructure’s future plans, which may include some form of public cloud, or hop in the DeLorean and head back to 1985 for an IT job.