VMworld 2013 wrap up

Another year has gone by along with the 10th annual VMworld. This year's VMworld was held in San Francisco, and I believe the event will be held in the same location for the foreseeable future.
It was great to meet up with so many other experts in the virtualization space. I had a chance to meet up with guys like the vTexan Tommy Trogden and Chad Sakac from EMC, and David Robertson from SimpliVity. It was also nice to finally meet Scott Lowe and let him know how much I appreciate the books he has written.
The new version of vSphere, 5.5, saw some great improvements. I'm still on the fence with the whole NSX appliance. It has some impressive capabilities, but it doesn't quite fit for the company I am with at the moment, and I'm a little shy about buying into 1.0 products anyway. I attended the deep dive on vCenter and noticed some welcome changes to SSO (no more database). I was hoping that VSAN would be GA, but it is just a beta program for now.
The solutions exchange:
Had the chance to meet up with the guys from Zerto. They had a great booth staffed with very knowledgeable people. I hate going up to booths that are 90% booth babes, where a single overwhelmed tech has to field every question. At the Zerto booth everyone was able to answer questions. Wednesday was a fun day wearing my Zerto shark print shirt: "You're not going to need a bigger boat". I heard they were abuzz with my tweet "I don't always fail over. But when I do, I use Zerto. Stay protected my friends". Be sure to check out the new 3.0 release of the product. I believe it has something for everyone in the DR space.
SimpliVity has a great product. The replication and deduplication technology they are using, called OmniStack, is awesome. I'm looking forward to seeing more from these guys.
Gigamon has a great virtualization network visibility product. We just bought into the 2.0 release, and I had a chance to meet up with some of the really smart guys who work on the product day to day.
Nutanix has a great product going, really something I have not seen from other vendors: four VMware hosts in one small-form-factor chassis, with replication technology across hosts. It is a great option for those who don't need to invest in a SAN. Manny Carral did a great job of explaining it all. I also have to say thanks for providing the personalized vExpert glass!
I have worked with Fusion-io in the past using the PCI-based cards. I'm not sure why I missed the ioTurbine product. Derek Clark did a great job walking me through it. This is something I am looking forward to testing in my Cisco UCS environment.
I also had to check out the guys at Violin Memory. They have an impressive all flash array. I had a chance to talk to existing customers at the Violin Memory party on Tuesday and they all had great things to say about the product. It is always nice to get honest opinions from customers.

The general sessions:
I attended several sessions this year. Okay, maybe 3 a day; it's just so hard to stay away from the Solutions Exchange. Each session I attended had great speakers. No one was boring, and it did not feel like a heavy marketing campaign. I loved the VDI smackdown session, but I was hoping there would be panelists from each product (View and XenDesktop). Instead it was Ruben Spruijt, who specializes in VDI deployments, and he did a great job walking through the differences between the products.
The “ask the vExperts” session was awesome. It’s good to see the front line guys get asked questions by the community and give feedback on what they see in the field. After that session I picture Duncan as a mad scientist in a VMware lab somewhere cooking up new stuff.

The labs:
The guys did a great job on the labs this year. Every year they find some way to improve on top of an already great platform. The lab menu in the VMworld program did not seem very well structured, but the menu at the lab console itself was great: everything was organized in a manageable order. I loved that they had hot spot areas where you could sit down and work on some of the labs, a great option for those who did not want to wait in line. I had an inside tip that the labs would be available after VMworld in a beta format. I wonder if it will conflict with the VMware Connect training offering.

The VMworld party:
This was the first year that VMware held the grand party at AT&T Park; the event just got so big that they had to find a new venue. Going into the stadium was a nightmare, and everyone on Twitter complained that the event sucked, with people waiting in line 30 minutes for a hot dog (I did that too). After the crowds dispersed around the stadium it was not so bad, and I think by the end of the night everyone enjoyed the event. I really enjoyed Train, especially the cover songs they did.
I couldn’t win any of those carnival games. I saw guys with bags of stuffed animals. I guess it’s a good thing I didn’t have all that stuff to take back with me on the plane.

I wonder what the breaking point will be for VMware to decide to hold VMworld twice a year on separate coasts. It would be nice to see. Overall I enjoyed my 5th VMworld and I am hoping to return next year.

VMworld 2013 and Zerto

It's that time of year again. Grab your shopping carts and run through the gauntlet of vendors at this year's VMworld. For me, my first stop will be the DR section, where I am looking forward to seeing the demonstrations from Zerto. If you don't know about Zerto, they provide enterprise-class disaster recovery and business continuity solutions for virtualized data centers and cloud environments.

[Zerto logo]
I recently ran a POC with the product and was quite impressed. I have used VMware SRM before, and the failover / failback options are very similar; you can schedule a live or test failover at any time. For me, though, SRM is not an option: with XYZ storage vendor on one end and ZYX storage vendor on the other, storage replication is not possible. VMware Site Recovery Manager has a growing list of supported SAN vendors to provide that SAN replication. Or your storage vendor might just offer to sell you a replacement for that other SAN. Whether that would be cost effective depends on many things: maintenance renewals on the SAN, the cost of the SAN, re-platform costs, training costs, and data tiering options, all versus the number of virtual machines you would need to protect in your virtual infrastructure with Zerto.

The other question you should ask yourself is: "If my environment is highly virtualized (let's say 80% or more), would it be cost effective to replace the SAN so I can use tools like EMC RecoverPoint and SRM just to replicate that leftover 20% of physical assets along with my virtual infrastructure?" Of course, if those 20% of physical assets are business critical you might say yes. But if you can get away with using a clustering technology across datacenters or a server-based replication technology, all you would need is Zerto for your virtual infrastructure.

One thing I love about Zerto is that you can replicate virtual machines from something high end in one site to something low end in another; let's say an IBM DS8000 in production to a Dell MD3000 iSCSI array in DR. You can even change the provisioning formats of the virtual machines on the fly. Of course, careful performance planning is required when choosing these options.
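That cost question is really just arithmetic once you put placeholder numbers on it. Here is a toy break-even sketch; every figure below is a made-up assumption for illustration, not a real quote from Zerto, VMware, or any storage vendor:

```python
# Toy break-even model: replace a SAN to enable array replication + SRM,
# versus a hypervisor-based replication product licensed per protected VM.
# All dollar figures are hypothetical placeholders.

def san_replacement_cost(san_price, replatform, training,
                         annual_maint, years):
    """Total cost of buying a matching SAN for the DR site."""
    return san_price + replatform + training + annual_maint * years

def per_vm_replication_cost(vm_count, price_per_vm,
                            annual_maint_pct, years):
    """Total cost of per-VM replication licensing plus maintenance."""
    licenses = vm_count * price_per_vm
    return licenses + licenses * annual_maint_pct * years

san_total = san_replacement_cost(san_price=250_000, replatform=40_000,
                                 training=10_000, annual_maint=30_000,
                                 years=3)
vm_total = per_vm_replication_cost(vm_count=150, price_per_vm=750,
                                   annual_maint_pct=0.20, years=3)

print(f"SAN replacement over 3 years:    ${san_total:,.0f}")
print(f"Per-VM replication over 3 years: ${vm_total:,.0f}")
```

Plug in your own renewal quotes and VM counts; the point is simply that the break-even moves with how much of the estate is virtualized.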

There is also the free vSphere Replication option. I have not used it myself, but I do plan on scheduling some testing. There are some big differences between vSphere Replication and Zerto, and I'm sure there will be some improvements to vSphere Replication in version 5.5.

I don't want to get into a step-by-step installation in this post. I had a similar experience to Justin Paul when using the product: it is very easy to use and has a lot of great options. Make sure to stop by the booth at VMworld and check them out.

VMware vCenter Server 5.1 Update 1b released

In case you missed it, VMware released vCenter Server 5.1 Update 1b on August 1st. There is a small list of bug fixes. I would not expect any major changes in future releases of 5.1; most should only contain bug fixes. Later this month I expect to see a new release of vSphere at VMworld, as this is typically the trend with the VMworld conference.

P2V a SQL cluster with VMDK – step by step

In a previous post I explained how to P2V an existing MSCS environment into VMware using RDMs. In this post I would like to explain how to convert an existing MSCS environment with VMDKs. You may ask, what is the difference? Well, it depends on I/O, back-end storage setup, or policy from your storage team. Maybe your storage team has a policy that all clustered disks must be managed by SAN software on the guest OS. Maybe your back-end storage does not support RDMs (only Fibre Channel is supported today in 5.x).

Maybe the I/O is above a certain threshold and should not be on a VMDK. What is that threshold? To investigate your current I/O, look into your array management software or ask your storage team. Maybe the storage in VMware is already taxed and cannot handle the additional load. To find out what the current I/O is, you can use tools like VMware vCenter Operations, Dell's Foglight for Virtualization, or a free PowerCLI script. There are a lot of underlying metrics you will need to look into before you decide to even virtualize the cluster, and storage is the platform that will affect you the most. How much is too much I/O for a VMDK? It all depends on your storage array. Maybe you have a Pure Storage array with all SSD drives; they will have guidelines for you to follow. Perhaps you have a VNX array with FAST Cache; they will have guidelines for you to follow. For me, anything over 1,000 IOPS in the physical world would need to be thoroughly investigated before conversion. That being said, the capacity of the current virtual infrastructure needs to be evaluated any time new machines are deployed.
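Once you have per-disk IOPS samples exported (from your array tools, vCenter Operations, or a PowerCLI report), that rule of thumb is easy to automate. A minimal sketch, where the 1,000 IOPS threshold is the one from the text, and the disk names, sample values, and choice of the 95th percentile are my own assumptions:

```python
# Flag physical disks whose sustained IOPS suggest they need deeper
# investigation before being converted to a VMDK. Using the 95th
# percentile ignores short bursts; tune both knobs for your array.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def p2v_iops_check(disk_samples, threshold=1000, pct=95):
    """Return {disk: (p95_iops, needs_review)} for each disk."""
    results = {}
    for disk, samples in disk_samples.items():
        p95 = percentile(samples, pct)
        results[disk] = (p95, p95 > threshold)
    return results

# Hypothetical sample data collected over a business cycle.
samples = {
    "sql01-data": [850, 920, 1400, 1650, 900, 1100],
    "sql01-logs": [200, 260, 310, 240, 280, 220],
}
for disk, (p95, review) in p2v_iops_check(samples).items():
    print(f"{disk}: p95={p95} IOPS, investigate={review}")
```

The important part is sampling over a full business cycle (month-end, backups, reindexing jobs), not a quiet afternoon.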

Here are the steps I use when doing a P2V on a MSCS into VMDKs:

Step 1. Check the cluster owner. The shared disks must stay visible during the conversion process; do not take the cluster offline.

Step 2. Note which drive is assigned to what drive letter. Note the size of the drive to help identify the drive letter after the conversion. If needed, note the contents of the drive and the drive letter.

Step 3. Note the IP configuration of each node.

Step 4. Convert both physical nodes. The node that has the active drives will be sVMotioned to a shared volume after the conversion. The conversion process creates lazy-zeroed thick disks; when the drives are sVMotioned, create the shared disks as eager-zeroed thick.

Step 5. At this point, one node has the shared disks and the other does not. sVMotion the disks to a VMFS volume as eager-zeroed thick.

Step 6. Configure the primary VM with a second SCSI controller. Assign each shared drive to SCSI 1:0, 1:1, 1:2, etc.

Step 7. Configure disks for virtual compatibility mode.

Step 8. Configure the second VM node. Place this VM on a separate VMware host, and create HA and DRS rules to keep both nodes separate. Add the second SCSI controller by attaching the existing shared disks from step 5, and assign the same SCSI addresses (1:0, 1:1, 1:2, etc.).

Step 9. Power on both VMs. Complete the VM cleanup process.

Step 10. Add NICs for the cluster (private and public).

Step 11. Once the cluster is online, log in as a domain user and correct the cluster drive letters noted in step 2 from the cluster manager console.
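Steps 6 through 8 hinge on the shared disks getting identical SCSI bus/unit addresses on both nodes. A small sketch of that mapping logic (the disk names are hypothetical; unit 7 is skipped because VMware reserves SCSI x:7 for the controller itself):

```python
# Assign shared cluster disks to a dedicated second SCSI controller
# (bus 1), mirroring the same unit numbers on both cluster nodes.
# Unit 7 is skipped: VMware reserves SCSI x:7 for the controller.

RESERVED_UNIT = 7

def assign_shared_disks(disks, bus=1):
    """Map each shared disk to a 'bus:unit' address, skipping unit 7."""
    assignments = {}
    unit = 0
    for disk in disks:
        if unit == RESERVED_UNIT:
            unit += 1
        assignments[disk] = f"SCSI {bus}:{unit}"
        unit += 1
    return assignments

shared = ["quorum", "sql-data", "sql-logs"]
node1 = assign_shared_disks(shared)
node2 = assign_shared_disks(shared)   # second node must match exactly
assert node1 == node2
print(node1)
```

Whether you script it or track it in a spreadsheet, the check that matters is the `node1 == node2` comparison: a mismatch in unit numbers between nodes is one of the easiest ways to break the cluster after conversion.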


P2V to VMDKs

Trainsignal review – vCloud Director 5.1 Essentials

I will start off by saying that I am no vCloud Director expert, but I am a long-time fan of the TrainSignal training series. I wish I had time to absorb everything in the training catalog in one week. What piqued my interest is the vCloud Director 5.1 Essentials course. Chris Wahl does an excellent job of walking through the basics from start to finish. The course starts off by going over the basic principles you will need to understand what you are deploying, and when you get into lesson 4, you get right to the installation of the product.

As I went through the course, I followed along in my lab. As long as you have some basic resources set up, you will have no problem keeping up. I had two PCs, each with 16GB of RAM, a single quad-core processor, two 7200 RPM drives, and an SSD. You will want to have vCenter already set up, along with a RHEL 6.x template and some Windows templates. Before you get into the deployment, Chris walks you through where to get temporary license keys and where to get a RHEL operating system.

The remaining lessons, 5 through 12, are really step-by-step instructions on how to deploy and configure everything for a basic vCloud. The product can look like a Pandora's box from the outside: terms like connectors, vCloud networking options, organizations, access control, and managing vApps all seem overwhelming when you first look at it. Chris does an excellent job of putting the pieces together and making sense of things. As you follow along you will be saying "ahh, that's what that thing does!".

After you finish this course you will have a better understanding of how things work and what will work best for your environment. I would not use this course as a template to deploy an environment, but as groundwork to understand what will work for you. There are a few VMware books that go over vCloud Director, but nothing that will give you step-by-step deployment guidance like this course does. You can also find deployment guides that say "deploy vCloud Director in 10 minutes", but they don't give you the understanding that this course will. After you run through the deployment once (or a few times), I highly recommend deploying the vCD vApp. This will give you something a little more lightweight to play around with. The vCD vApp is not meant for production; it is more of a test platform for getting used to the concepts in vCD.

I would highly recommend this course for anyone looking into learning more about vCD.

VMware Certification Exams 75% off During VMworld 2013!!

So I was at the local VMUG meeting here in Houston today and everyone was notified about the discount. Can you believe it? 75% off of exams? Never in the history of exams has there been such a discount! I only hope I can take the DCA and DCD exams in one week! Has anyone attempted such a thing?

http://blogs.vmware.com/education/2013/06/vmware-certification-exams-75-off-during-vmworld.html

Update: I have had no luck with getting the web page to work. I had to call Pearson to schedule my exam: 800-676-2797


New to the vExpert family

This week I had an explosion of Twitter followers. It took me a little while to find out that I had landed on the 2013 list of VMware vExperts. I had to go back a few times (refresh/refresh) to make sure my name was really on the list. I am truly honored to be on the list this year next to so many well-known names. I had applied for the past 3 years and was not selected. I have been fairly active in the virtualization community since the VMware 2.0 days, and it is great to be involved and spread the word about virtualization.

If you want to see all of the interesting statistics on the vExperts, they can be viewed here: http://www.thinkmeta.net/2013/06/having-fun-with-vexpert-to-employee-ratios/ and http://blog.vmforsp.com/2013/05/2013-vexpert-group-by-the-numbers/

I do not work for any of the giant companies; I'm from a global financial services company. Congratulations to everyone who made it!

I'm finding that there are all kinds of perks that come with the title:

TrainSignal is giving a year's worth of free training to all vExpert recipients. http://www.trainsignal.com/blog/2013-vexperts-trainsignal

Nutanix is giving out personalized vExpert glasses at VMworld 2013: http://basraayman.com/2013/06/05/nutanix-2013-vexpert-gift/

Tintri is giving away free vExpert shirts with your Twitter handle: http://www.tintri.com/congratulations-2013-vexperts


P2V a SQL cluster with RDM – step by step

The P2V process for servers has always been pretty straightforward. You find a physical server you can consolidate into VMware, evaluate the load using tools, assess the virtual infrastructure for the expected load (cluster capacity, recovery model, CPU, RAM, and disk size / I/O), and properly convert it into a virtual machine. After the machine is converted, you shut down the physical server, clean up the new VM, and everything is off and running.

But what about MSCS servers? This introduces a new dynamic to the conversion process. Do you P2V into a one-node cluster? Do you use RDMs or VMDK files? How do you move the clustered services around? Where do you use the paravirtualized controller? What about disk signatures in the cluster for RDMs? What happens to the heartbeat network? If you are going to use an exact mirror of the two-node cluster, what is the process?

There are many different methods you can use to complete this process; you could, for example, create new VMs and restore. This is the method I will demonstrate: P2V a SQL 2-node MSCS cluster into VMware, using the exact same cluster setup as the physical model. During the conversion process we will use RDMs in physical compatibility mode, which is required when clustering with a physical server according to the VMware SQL cluster best practice guide.


SQL P2V blog

Why should I virtualize SQL?

For the future.

I have run across countless admins who have said "SQL should not be virtualized, hypervisors just cannot handle the load". That is one of the most common reasons I hear not to virtualize SQL. Is it true? It depends. A poorly planned virtualized SQL environment can suffer from performance issues, just like a physical environment. There is no cookie-cutter plan or guide that will tell you how your environment should be laid out, only guidelines to help you gain the most potential from your virtualized SQL environment. If you find instructions on how to virtualize SQL, take them as a guide and not a plan set in stone. Some companies may want to use only iSCSI storage vs. Fibre Channel, while others may opt for SQL clusters vs. single-instance servers. There are countless other options: using the paravirtualized controller, physical or virtual RDMs, running SQL in a cluster (up to 5 nodes in vSphere 5.1), single-instance SQL servers, proper zoning of the shared disks in a cluster, and vNIC types. And most important: do all of the DBAs, network admins, and VM admins have a clear understanding of how the environment should be laid out for maximum performance?

Consolidation has been the biggest driver in virtualizing, and big tier 1 applications should not be immune from this. As hardware becomes EOL, we always have to plan for migrations, and it takes less overhead to move a virtual server from one hardware platform to another than to spend man-hours working with the application server itself. For the future, I try to always put applications on a virtualized platform because of these benefits.

I have heard and seen many of the negatives in forums and from other admins. Here are some of the horror stories:

1. One admin was running a virtualized SQL environment in an MSCS cluster. The shared quorum and data disks were zoned and properly set up in the cluster with physical compatibility mode, and things had been running smoothly for months. Then a VM admin noticed some disks in vCenter and reported them as storage that could be reclaimed by the storage team. As soon as the storage team removed the disks, the virtualized SQL environment failed. My question is this: how do you prevent this in a physical environment? This scenario can easily happen in a physical cluster outside of VMware or any other hypervisor. It sounds like an undocumented process that the VM admin was unaware of; the VM admin should also have investigated further, and the storage admins should have had documentation on what was provisioned for each server. Was the hypervisor responsible for this outage? You decide. If there is not good communication between the DBAs, VM admins, and storage admins, it may not be a good idea to virtualize SQL (or any server).

2. SQL just doesn't perform well. I have heard this from countless admins at local VMUGs, but none can point to a direct cause of the performance issue. If you are over-provisioning the hypervisor, starving SQL for resources, not setting resource reservations, not properly configuring storage, not taking advantage of NUMA, have improperly configured physical NICs, or are just running SQL on VMware Workstation (silly), I promise you will suffer performance issues. There is no single answer for a poorly performing virtualized SQL environment; any one of the items listed could be the culprit. Having a good monitoring system in place like vFoglight can give you insight into these issues. In a perfect world a virtualized SQL environment would sit on an auto-tiering storage array, have all 10Gb physical NICs, run on Fibre Channel SSD, and be housed in a dedicated hypervisor cluster with like application servers. Since we don't all have this, it is up to you to determine how best to run a virtualized SQL environment in your infrastructure, if you can at all.

3. Virtualizing SQL creates too much complexity, and DBAs have a hard time managing performance because they don't understand VMware. Are there more steps in setting up an MSCS SQL environment in VMware? Yes. But for a VM admin this should be a routine process, and the end user or DBA should be oblivious as to whether the server is virtual or physical. At least with a virtual server you can hot-add resources. Troubleshooting does involve one more person in the role: the VM admin.

It all comes down to this: can your teams support it?

Can a NASCAR vehicle keep going all day? Not without a good pit crew! The same goes for any virtual or physical infrastructure.

To have a successful project to virtualize SQL, all teams must have an open mind and an understanding of how the hypervisor should function in your environment. Expectations should be set as well: creating a virtualized SQL environment does not always mean better performance. It depends on what you are coming from and going to. There is a big difference between an old Dell 2850 with 32GB of RAM in a 2-node MSCS SQL cluster with direct-attached SCSI storage, and a Dell M820 blade with 768GB of RAM in a 5-node virtualized MSCS SQL cluster on a Pure Storage array (SSD) with Fusion-io cards. Don't let the storage admins or the DBAs scare the CIO with horror stories of failed implementations. The Freddy Krueger hypervisor is coming to get you! There are plenty of marketing white papers out there with success stories. I am not published in any VMware white papers for doing SQL clusters, but I can tell you that I have been doing it successfully for years!

In my next post I will be walking through the process on how to P2V an existing SQL 2 node cluster. I will be virtualizing two SQL blades that are using an EMC fiber channel array for shared storage.

Please review the Microsoft SQL server on VMware Best Practice Guide.