Skype for Business on vSphere

Is it supported? That is an interesting question to put to a Microsoft consulting company; you might get a mixed bag of answers, since their goal is to push Hyper-V. Lync and Skype for Business are absolutely supported on VMware hypervisors under Microsoft's "Server Virtualization Validation Program". UC products like Skype and Lync do not fall under the same restrictions as Exchange when it comes to the storage platform. Microsoft will not support Exchange on NFS storage (even though the same conditions behind that restriction exist in SMB3). There are no known storage platform restrictions for Lync or Skype.

The design considerations in the "Planning a Lync Server 2013 Deployment on Virtual Servers" guide are all geared towards Hyper-V. VMware took issue with this article (detailed here) and asked why Microsoft never created a validation document for VMware (clearly a market leader). To date, Microsoft still has not published such a document, and I do not expect them to publish a favorable article for a competing product. When designing your environment, do use the guidelines listed by Microsoft, but pay no attention to the restrictions on Hyper-Threading and memory sharing. Microsoft offers no good technical justification for disabling these options when using VMware products.

I work in an environment with a multi-pool global Skype deployment for 5,000 users and a US pool for 5,000 users, all running on vSphere. I have not had any hypervisor-related issues, and I've never had issues with Microsoft support when it comes to running the platform on VMware products.

Don’t be persuaded that vSphere is not the best platform for Skype or Lync. I’ve heard comments like “so you’ve chosen the most expensive and complex product for your environment” or “you are not guaranteed to get support from Microsoft if you have issues in your environment”. That last statement would be somewhat true if the environment was poorly designed. Just make sure your design considerations fall within Microsoft guidelines.





VMware ROBO license usage

The VMware ROBO license model was announced last year. Since the announcement, it has been very difficult to get clear information on how you actually use ROBO licenses. The licenses are sold in 25-packs and the keys are licensed "per site". I called support to get a definition of what "per site" meant. Was it a location, a data center, a cluster or a host? Support really couldn't help; they could only conclude that a site was a physical location.

The most difficult part of testing ROBO licenses is that there is no trial license, not even vExperts get a ROBO key.

I recently had an opportunity to deploy two separate data centers with ROBO keys. When VMware says a key must be licensed per site, that means the key you purchase must be used in one location, just like any other product key. But the ROBO keys can be split up by the number of virtual machines you need to run in each location, just like an Enterprise or Enterprise Plus key can be broken up into however many host sockets you need to license.

Let's say you have two data centers, with a requirement to run 10 virtual machines in one and 15 in the other. You would log in to your VMware license portal and divide or combine your ROBO licenses into the amounts you need for each key. You can do the same with vSphere Enterprise keys based on how many hosts you want to license. These license keys are then applied to each host. The license key keeps track of how many powered-on virtual machines you have, based on the ROBO key applied to each host. So you can have 10 hosts in one data center with a ROBO key for 10 virtual machines (but you must purchase the keys in 25-packs). That is a big cost savings versus purchasing licenses for each socket in each host. Imagine having 10 hosts with quad 16-core processors and only having to license based on the number of virtual machines you are running. I think VMware intended this license model for remote branch offices, but I have not found anything that says it cannot be used in a primary data center.

VMware has ROBO standard and advanced. Both have the same features you would expect to see in the host licenses for Enterprise and Enterprise Plus. After you install the license key in the host, it looks something like this:

ROBO key1

You then get a layout of the license key information:

ROBO key2

If you do have a remote branch that uses 25 virtual machines and you need to license the 26th, you purchase another 25-pack of licenses. You would then combine your two 25-packs into one key of 50 licenses, divide that key into 26 and 24, and the 26 key would replace your existing key.
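The pack arithmetic works out like this; here is a quick sketch in Python (a hypothetical planning helper, not a VMware tool):

```python
import math

PACK_SIZE = 25  # ROBO licenses are sold in packs of 25 VMs


def packs_needed(site_vm_counts):
    """How many 25-VM packs cover the powered-on VMs across all sites."""
    return math.ceil(sum(site_vm_counts) / PACK_SIZE)


def split_summary(site_vm_counts):
    """Summarize a per-site split of the purchased pack capacity."""
    capacity = packs_needed(site_vm_counts) * PACK_SIZE
    assigned = sum(site_vm_counts)
    return {"purchased": capacity, "assigned": assigned, "spare": capacity - assigned}


# 26 VMs in one site and 24 in another forces a second 25-pack:
print(split_summary([26, 24]))  # {'purchased': 50, 'assigned': 50, 'spare': 0}
```

So the 10 + 15 scenario above fits in a single pack, while licensing that 26th VM means buying pack number two and re-splitting the combined key.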


Where is the VCIX6-DCV exam?

With much cheering and confusion, the VCIX exams were announced last February.

Current VCAPs wondered, “what do I need to upgrade?”. The upgrade path for current VCAP-DCA and VCAP-DCD holders seems clear enough in the link above.

Where is the exam? Well, VMware education released the VCIX-NV right away. There is still no definite date when the VCIX6-DCV will be released. This is the word I received from VMware education:

“I would like to inform you that, VMware is in process of releasing VCP6-DCV Exam now, so after this VCIX6-DCV Exam should be release. Please note that, as of now we do not have exact date of the release or the update, most probably it should be release by the end of the year. Please go through our website for the upcoming updates.”

I really had high hopes that the exams would be released in time for VMworld 2015, but it looks more like the end of the year. To anyone who has been thinking of holding off on taking the VCAP-DCD or DCA, I would say go ahead and take the exams now.

VMware has provided a link where you can sign up for notifications about the exam release.

VCAP5-DCD : a pass is a pass

So I sat the exam on 5/6/15, for the 4th time I think. I tried once when the VCAP-DCD 4 came out in 2011, once on version 5.1 in 2012, and once with the latest 5.5 exam at the last VMworld (which ended in a crashed exam). Each time I got within 10 to 20 points of passing, and each attempt was years apart. I had scheduled the exam maybe 3 other times but had to cancel for various reasons. I knew all of the material; for me personally, life events and tremendous workloads just got in the way of my focus to pass the exam. Each time I did not pass, I took to the forums to rant about how oddly the questions were worded. I don't think that has changed. One thing I struggled with was figuring out the vision behind the answers in the exam. From what I gather, a room full of VCDXs came up with the questions. They structured these vague questions in a manner where one (or some) of the answers are correct. I tried to envision what they might be looking for. Almost all of the questions seem debatable, which leaves room for mistakes if you don't put on a VCDX hat.

I can't say the exam is any easier; I just took my time studying over a few months. I am familiar with all of the terms, so it was more like a review. There is no one source you can turn to that will help you pass the exam. The blueprint is a guide that references all types of documents. The VMware design workshop will not give you 100% of what you need to pass the exam either, but it will help if you are just getting started with design. This is not an exam you can pass by just picking up a book. It will take some real-world experience to conquer this exam.

The things that helped me?

– Of course, Jason Grierson's exam engine. It will give you a great understanding of what VMware is looking for in the design questions. The site is still a work in progress, but you get a feel for the design and drag & drop questions. Read through his study guide at the end as well; there are some really good topics for the exam. Especially helpful is the NIOC exercise that helps you structure host limits, adapter shares and share values. If you see this guy at VMworld, buy him a beer!

– I have Scott Lowe's design book, 2nd edition. That does help. It is a design book that gets you thinking about VMware design in general. A lot of the exam topics are covered in the book, but it doesn't give you a translation into how VMware will word the exam questions based on his material. It's really up to you to pick out key terms.

– I listened to the vBrownBag VCAP-DCD podcasts every day on my way in to work. If you have time to kill on your commute, listen to them.

– The Google+ community is a real help. Scroll through some of the material; you may find some golden nuggets in there.

– There is a set of great videos from Scott Lowe on Pluralsight that I have been watching. It is the Designing VMware Infrastructure course. It is from 2012, but it has the core parts of the DCD exam. I just wish there was a refreshed series that covered all of the products in the exam blueprint like VSA, VSAN, SRM, HeartBeat and others.

– I have my own set of links on my site also that I use.

Really, you have to find your favorite study guide that is out there. I have seen some that break down the blueprint into multiple links. I can’t tell you how many times I started to do that, but it gets so long.
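On the NIOC exercise mentioned above: under contention, shares carve up an adapter's bandwidth proportionally, while a host limit caps a traffic type outright. A rough sketch of the math (the traffic types, share values and limit below are made-up numbers, and this simplified model ignores redistribution of unused bandwidth):

```python
def nioc_allocation(adapter_mbps, shares, limits_mbps=None):
    """Split adapter bandwidth by share value; apply any host limits (Mbps)."""
    limits_mbps = limits_mbps or {}
    total_shares = sum(shares.values())
    alloc = {}
    for traffic, share in shares.items():
        mbps = adapter_mbps * share / total_shares
        if traffic in limits_mbps:
            mbps = min(mbps, limits_mbps[traffic])  # host limit always wins
        alloc[traffic] = round(mbps)
    return alloc


# A 10GbE adapter: VM traffic at 100 shares, vMotion and NFS at 50 each
print(nioc_allocation(10000, {"vm": 100, "vmotion": 50, "nfs": 50}))
# {'vm': 5000, 'vmotion': 2500, 'nfs': 2500}
```

Working through a few combinations like this by hand is exactly the kind of drill that exercise covers.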

I took the exam around 11AM. I was lucky and got a testing center where the monitor faced a wall. I hate the exam centers where you have to see someone over your monitor or have distracting things all around you. I did all of the design questions first and marked all of the others for review. That left me with 80 minutes (I think) to go back and finish everything up. At the end I had 10 or 15 minutes left. I went back to review some of the weirdest questions I had ever seen, decided to stick with my answers, and scored a 306. A pass, is a pass, is a pass.

So what now? Well, for starters I need to update all my email signatures. At some point this summer I will shoot for the VCAP-DCA once version 6 comes out. Will the certification mean more money in my current role? Probably not, but who knows what opportunities could be on the horizon.



Discussing Auto Deploy dependencies

In studying for the VCAP-DCD exam (objective 2.2 – Map Service Dependencies in the VDCD550 exam blueprint), I noticed a few comments on the Google+ community about what exactly is the dependency mapping of VMware Auto Deploy. Specifically, Active Directory, DNS and PowerCLI.

Yes, it is true: you can have an environment with no Active Directory by using the VMware built-in accounts like "administrator@vSphere.local". And you could forgo DNS by using host files. Nowhere in the Auto Deploy setup guide does it mention a requirement for Active Directory, only that administrative rights are given to Auto Deploy. DNS is mentioned in the Auto Deploy "proof of concept" setup, and nearly all setup guides include DNS configurations. The configuration calls for you to make a static DNS record (to avoid DNS scavenging, I assume) and a DHCP reservation for the IP address of the host. DHCP is required, but DNS could still be replaced by host files (yuk).

host files YUK

You will be hard pressed to find any VMware document on Auto Deploy that says “this is required” and “this is optional”. There is no product map for dependencies.

Think of it like this: say I gave you the task of getting a car running, and I laid out some parts for you: an engine, windshield wipers, transmission, starter, battery, wheels, ignition and some bucket seats. If you were in a hurry and saw all of these parts, you would put everything together. But all I tasked you with was getting the car running; I didn't say it was going anywhere. What would be required? Most likely the engine, battery, starter and ignition. Windshield wipers, wheels, transmission and bucket seats are not required to start the car. And if I started listing crazy stuff like truck nuts, ejection seats or a flame paint job, you would know to disregard those because they are not relevant. I know it is a timed exam, but slow down and read what is being asked.

Sitting at an exam, who really thinks in terms of non-enterprise environments? You need to think in the realm of any possibility. I guess in any situation there could be the very most complex, administrative overhead process for deploying a product. And there could be the simplest, least management option. You could be designing something for a small dentist office or a global data center for PayPal. Even in those small offices, it is now possible for them to purchase ROBO advanced edition licenses to get host profiles instead of purchasing Enterprise Plus licenses.

I myself do not know of any environments utilizing VMware Auto Deploy, and I would think there are even fewer using host files. But then, I am not a consultant and have not seen everything. I suppose if it is possible for an environment to use host files on every server, it might be possible the environment is full of Wal-Mart Lindows machines and Windows 95.

The only way to get truly familiar with Auto Deploy dependencies is to deploy it. I am by no means the "Auto Deploy Master"; I do not use it. I'm a fan of SD cards (not USB sticks) in production. Below is the setup I will go through in order to map the required dependencies for Auto Deploy in vSphere 5.5. We will see which items are "required" and "not required". This is not meant to be a step-by-step procedure to install Auto Deploy; I am just going over the dependencies. The lab I will be working with is bare bones, limited to what is "required". If you would like to see the setup and configuration procedure, please visit this link.

A few things that are "required" to get started with the Auto Deploy setup are vCenter, a TFTP server and DHCP (options 66 and 67). We will also need to configure the hosts to PXE boot. For this lab, I will be using nested ESXi servers to boot.
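For reference, on a Windows 2012 R2 DHCP server the two options can be set with the DhcpServer cmdlets. The scope and addresses below are hypothetical lab values:

```powershell
# Hypothetical lab scope for the nested ESXi hosts
Add-DhcpServerv4Scope -Name "AutoDeploy" -StartRange 192.168.1.100 `
    -EndRange 192.168.1.150 -SubnetMask 255.255.255.0

# Option 66: TFTP server address; option 67: boot file name
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 66 -Value "192.168.1.20"
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 67 -Value "undionly.kpxe.vmw-hardwired"
```

The same two options can be set in the DHCP MMC console if you prefer clicking through the GUI.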

I have to say, this is the first time I have ever deployed vCenter on a non-domain-joined server. I will be setting up DHCP on a Windows 2012 R2 server and TFTP with the WinAgents TFTP server.

AutoD01 - VC workgroup


You will get a warning during the setup of vCenter that you are not joined to the domain.

AutoD03 - VC workgroup


The "administrator@vsphere.local" account can be used for all logins.

AutoD03 - VC user

In my lab, I will be using host files on 3 servers.

AutoD04 - host files
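The entries themselves are nothing fancy; something like this (hypothetical lab addresses) goes on each of the 3 servers:

```
# C:\Windows\System32\drivers\etc\hosts (same format as /etc/hosts)
192.168.1.10   vcenter.lab.local   vcenter
192.168.1.21   esxi01.lab.local    esxi01
192.168.1.22   esxi02.lab.local    esxi02
```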



On my Windows 2012 TFTP server, I had to unblock all of the files in the TFTP root.

AutoD05 - unblock tftp

So far we know that TFTP depends on DHCP. The host will first depend on DHCP to get an IP address. With options 66 and 67, the host will then pull the "undionly.kpxe.vmw-hardwired" file from the TFTP server. This file contains instructions for the host to get an image profile (not a "host profile" from vCenter) and what VIBs to pull from the image depot/repository on the Auto Deploy server.

Once vCenter is up, you need to download the TFTP boot ZIP from vCenter (after Auto Deploy is set up with vCenter). So we now know that TFTP depends on the Auto Deploy server, because it must get the "undionly.kpxe.vmw-hardwired" configuration file to boot remote hosts from.

PowerCLI is now needed to import VIBs or offline bundles into the image depot/repository, so the Auto Deploy server is dependent on image profiles and the image depot. PowerCLI is used to create the deployment rules (image profile) for each ESXi host (or group of hosts); each rule must then be added to the active rule set so that it will take effect.
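A minimal PowerCLI sequence for this step looks something like the following (the depot file, profile name, rule name and IP range are hypothetical):

```powershell
Connect-VIServer vcenter.lab.local

# Load the offline bundle into the image depot
Add-EsxSoftwareDepot C:\depot\ESXi550-offline-bundle.zip
Get-EsxImageProfile   # list the image profiles the depot provides

# Map an image profile to a set of hosts
New-DeployRule -Name "lab-rule" -Item "ESXi-5.5.0-standard" `
    -Pattern "ipv4=192.168.1.100-192.168.1.110"

# The rule does nothing until it is added to the active rule set
Add-DeployRule lab-rule
```

A rule's `-Item` list can also include a cluster and a host profile, but as discussed below, those are optional extras rather than requirements.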

Here is a question: if there is no DNS, how will the newly provisioned hosts resolve hostnames without a host file? The image is brand new and does not carry a host file. When the host is added to vCenter, it will actually use the IP of vCenter. So we still do not need DNS.

AutoD08 - managed by VC IP

After the host is connected, we then create the host profile from the newly attached host. This host profile will be applied to all the clusters, but it is not a requirement: we were already able to successfully add a host to vCenter. The requirements of the product do not call for you to make custom changes to any hosts; as far as Auto Deploy is concerned, its job is done. It is possible to include the vCenter host profile as part of the deployment rule, but that is not a requirement to get Auto Deploy running.

So what do we know so far? Thou shalt have: vCenter, TFTP, DHCP, a host to boot, an image depot, image profiles (for the active rule set), an Auto Deploy server and PowerCLI. Can the PowerCLI part be argued? Maybe. Perhaps there is some way to manipulate the vCenter database to create an image profile and upload the VIBs to the depot, or some third-party tool to create image profiles. As far as I can tell, creating the image profiles and uploading VIBs to the image depot must be done via PowerCLI. Removing PowerCLI from the equation would seem far outside the normal operation of Auto Deploy; but then again, I thought Active Directory and DNS were part of the normal operation. Image Builder itself would not be considered a requirement in the dependency map either: you have the option to download the offline bundles from VMware and include your vendor's hardware VIBs with the deployment.

To examine each dependency, consider what would happen if each component were not available.

vCenter: With no vCenter, how would you install the Auto Deploy server? It would not be possible. Where would hosts go?

TFTP: Without a TFTP server, how would the host PXE boot receive the undionly.kpxe.vmw-hardwired file and then get configuration information from Auto Deploy?

DHCP: Without DHCP, how would a stateless host get an IP address and know what to do from there? DHCP would be the first link in the chain for the host to boot from PXE and do anything.

Host: Without a host, what good is all that Auto Deploy configuration?

Image depot: Without an image depot, where would the host get an ESXi image or hardware vendor VIBs?

Image profile: Without an image profile, how would the host get deployment rules from the Auto Deploy server?

PowerCLI: Without PowerCLI, how would you create the image profiles and image depot?

Auto Deploy server: Without the Auto Deploy server itself, where would the image profiles and the image depot live? The Auto Deploy server is the traffic cop directing hosts to the vCenter server via the image profiles and image depot.
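The chain described above can be sketched as a simple dependency graph; a topological walk of it then gives a sensible bring-up order (a sketch only, using the component names from this post):

```python
# Each component maps to the components it directly depends on.
DEPENDS_ON = {
    "host (PXE boot)": ["DHCP", "TFTP"],
    "TFTP": ["Auto Deploy server"],  # serves the undionly.kpxe.vmw-hardwired boot file
    "Auto Deploy server": ["vCenter", "image depot", "image profile"],
    "image profile": ["PowerCLI", "image depot"],
    "image depot": ["PowerCLI"],
    "DHCP": [],
    "vCenter": [],
    "PowerCLI": [],
}


def bring_up_order(graph):
    """Depth-first topological sort: dependencies come before dependents."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, []):
            visit(dep)
        order.append(node)

    for node in graph:
        visit(node)
    return order


print(bring_up_order(DEPENDS_ON))  # host (PXE boot) comes out last
```

Running the sort confirms the intuition: DHCP, vCenter and PowerCLI can come up in any order, but the host itself cannot boot until everything else is in place.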

So after all of this, what would a VMware Auto Deploy “requirements” dependency map look like?

Auto deploy dependency


Tintri and XenDesktop: my 411

When I first started looking at using Tintri for my VDI environment, all I could find were white papers and webinars that said “best of the best of the best SIR!”.


I did not know how this mystical unicorn “Tintri Clones” could help me. What is the mechanism that will help get VDI off the ground? How is this so different from what I use today?

In my particular situation, I am using XenDesktop 7.5 MCS with PvD on top of vSphere. I originally used personal vDisks as a way to save disk space: I would redirect the user data to a NAS and use the PvD as user application install space. I found this to be a great provisioning method on a traditional SAN, since I had one central image to maintain and could push updates out to a catalog. But storage migration is a nightmare when using PvD because it is not supported by Citrix; the way around it is to do a backup and restore onto a new pool of desktops. You can continue to use PvD with Tintri, but it really becomes unnecessary. I will show you why.

The plan: build a new pool of desktops on Tintri storage to take advantage of its features. How do I take advantage of the array features and space savings? It was really quite simple. I was expecting to find some complicated configuration setup between Citrix, Tintri and VMware to get things working, but it all really boils down to the Tintri host plugin and cloning your master image on the Tintri array. The host plugin activates the Tintri VAAI support for reduced host IO, space savings and fast deployment of VMs by offloading the process to the array. Offloading processes to the array is not something new; a lot of vendors do this today. Tintri offers a few whitepapers on how to use Citrix with Tintri. There are some important pros, cons and gotchas to go over. The most important gotcha, I think, is that a catalog should not be updated or created from a VM that has snapshots. Doing so negates the space savings and creates full clones of the desktops. It is important to remember that the base image for the catalog should be created from a Tintri clone. This involves logging on to the array and cloning your master image; simply cloning from the vSphere web client or C# client will not do it. This clone will be used to spawn all of your virtual machines. If you are using PvD, you would then re-clone this master image (with no VM snapshots) to push out any updates to your catalog from Citrix with MCS.

From the diagram below, you can see how personal vDisks work. The user/application data is saved out to a separate drive. Each vSphere datastore gets a copy of the base disk for each VM to link back to, and each VM with MCS (Machine Creation Services) also gets a small 16MB identity disk. I can say that I've had Citrix issues with every other release when using personal vDisks; adding this piece to your deployment adds a layer of complexity. I find it much easier to just use clones of the master image as regular desktops. You already get space savings from the Tintri array versus PvD. The only advantage would be the ability to push out updates to a catalog from a master image when using a PvD catalog.




The only negative to a machine catalog that saves data to the local disk instead of a PvD is that you cannot grow the drive on an individual VM from the vSphere console. Each VM is tied to a Citrix-created snapshot in vSphere with the base disk, and a virtual machine with a snapshot cannot change its drive size even when powered off. This is not a negative aspect of Tintri; it is a function of Citrix. So how do you grow the drives? You would need to create dedicated machines from Tintri clones, or clone the VM to an individual VM that is not tied to a master (which would involve creating a new pool).

Citrix lays out disks in different ways for dedicated machines and PvD machines in MCS. PvD machines link each VM's C drive back to a base master disk in each datastore, while each dedicated machine links back to an individual snapshot of the master disk, so you are dealing with many more C drives that could grow, versus having a PvD on each machine that would grow. To review this process, visit the Citrix documentation.

I am not going to cover PVS provisioning, that will be for another post.

So why is it the "best of the best of the best, Sir!"? There are auto-tiering systems, and then there is Tintri's AWS, the Active Working Set, which serves 99% of IOPS from flash. My users get their data from flash when they need it.

Overall, I found the process of creating my VDI environment on Tintri quite easy. It is so easy to investigate who is doing what in the Tintri console! No complicated Java setups or fat clients that require days of training. It is important to review all of the Tintri best practices before you go about re-platforming your environment. You don't want to go spawning desktops in a catalog from a VM you built that has snapshots!


Why I chose Tintri

Let me first say: no, I did not pick Tintri because of all those vExpert shirts. I selected them because they have a fantastic storage platform that is so easy to use!

I spent months going through a lot of the newer storage platforms. The enterprise shop I work in now has been with a couple of the big block vendors for years and has spent a ton in maintenance fees every year, with little in the way of array features we can actually use for virtualization. Yes, we went from Fibre Channel to an IP storage fabric. I was looking for something purpose-built for virtualization, something so simple to set up and manage that I could get a junior admin to run it without sending them to a complex storage course. The array really does take only 10 minutes to set up. Should I be more impressed with an array that takes a team of consultants days to set up? No way! Does it mean the features Tintri provides are limited because the array does not require a team of storage admins to manage? I found VMstore very feature-rich when used for virtualization. I'm not dogging the big block vendors; each one is multi-purpose for different types of workloads in the physical and virtual world. With the introduction of VVOLs, the block vendors will be easier to use when it comes to leveraging storage. We will soon see if any of the big boys on the block will provide code upgrades to existing gen-1 arrays for VVOLs.

As technology progresses, innovation should include ease of use, and Tintri got it right with the VMstore product. Tintri Global Center is a virtual appliance used to manage all of the VMstore arrays. Global Center has a very clean look and feel for troubleshooting performance. My only gripe about Global Center is that it does not have LDAP integration for user logins. Tintri also has a vRealize Operations management pack for the VMware vRealize Suite you can use if you do not want to dig around Global Center.

I see other storage vendors catching up to Tintri, but I can't wait a few years for the big block guys. The amount of storage I get in such a small rack footprint is amazing! Always take a cautious approach when selecting a storage vendor. Keep in mind: What kind of vendor lock-in will I be stuck with based on the features I use? What does it take to upgrade the array firmware? (I already did this one; it's non-disruptive and non-destructive.) And how will I move data if I upgrade or want to get off of the array?

I don’t want to get in to all of the performance gains and use case statistics. This can be different for everyone based on your environment. Doing a POV (proof of value) is a very easy process. Just get in touch with your local Tintri rep or VAR.

I realize this post is not overly technical, but in follow on posts I will do some lab work on building Citrix environments on Tintri.

Why do I participate in the VMware vExpert program?

Yes, getting all the free stuff is pretty cool. But being involved in the community is what it is all about for me. Sharing knowledge helps everyone. It is more than sending a couple of tweets or making a few blog posts throughout the year. Sometimes I may run across an issue that at least one other person out there has seen before. When you blog about it, that may just save the weekend! Experiences with new products can be invaluable when looking at something for your company. I have had more traffic from people looking at my posts on Dell products and VMware than anything else. And no, I do not work for Dell or VMware. I would like to think that a few things I have blogged about have been used in deployments. Getting involved in VMUG and chatting with others is a big part as well.

I do not receive anything from my current employer in the enterprise space for participating in the program. I mention it, but it really does not mean much in the enterprise space, and I really don't expect anything from my employer either. At most, I get to attend VMworld every year. I really love the post from Hans De Leenheer, "So now you're a vExpert". It is good advice on how to use the vExpert title where you work.

Do I consider myself an "expert" in all VMware products? No way; I still need to wrap my head around NSX. But I have worked with VMware since the early days of the product, and I feel I have a good grasp of where the company started and where it is today. Virtualization is such a large part of the IT landscape today. Not getting involved may just leave you in the dust with the Windows 2003 era.

The most valuable part of the vExpert program for me is the Pluralsight training. I use it all the time, and it is the greatest training platform out there. I share my experiences with this training and encourage everyone to look into it.

I am honored to be on the list again this year for the third time. I have applied every year since the program started. I would be interested in seeing how many vExperts nominate themselves. Early in the program I heard others say, "I'm not going to nominate myself, I think someone else should do that". But unless you are a superstar in the consulting space, a book author or a VMUG leader, I think everyone else must be doing a self-nomination; if you are a passionate IT guru in the enterprise space, there may simply be no opportunity for others to nominate you. If you do not take the effort to apply, you may be missing out on taking part in a great program. It is up to the VMware Social Media & Community Team to select the winners. The criteria to be selected: "The VMware vExpert Award is given to individuals who have significantly contributed to the community of VMware users over the past year. vExperts are book authors, bloggers, VMUG leaders, tool builders, and other IT professionals who share their knowledge and passion with others. These vExperts have gone above and beyond their day jobs to share their technical expertise and communicate the value of VMware and virtualization to their colleagues and community."

Why do I participate in the VMware vExpert program? I believe in the technology. How well we understand virtualization and how we use the technology will determine the future of business.

Zerto vs array based snapshots with replication

I want to take a moment to discuss a few feature sets from some of the popular storage vendors on the market today, and where their replication technology may overlap with Zerto. I have been doing some shopping lately for a storage array. I'm not talking about the big boys at EMC, NetApp, IBM or HP; I'm talking about the vendors growing in popularity like SolidFire, Nimble Storage, Tintri and Tegile. Violin and Pure Storage are all-flash arrays, a different animal, but they provide the same type of replication and snaps. When you look at these storage array vendors in the enterprise or cloud space, more often than not you will find features that overlap with tools you may already own. The Zerto documentation states that replication and snapshot management require IT overhead when using the built-in storage array features. How true is this? Nimble Storage is the only one on my list that has a plugin for SRM today, but we are going to talk about the features built into the arrays, not VMware SRM. Remember, all of these require more than one like array if you want to take advantage of replication. Also, replication and snapshots do not always give you an orchestrated failover and failback architecture. Most of these arrays include the features; others may charge to turn a feature on. Let me first say that all of these storage arrays are fantastic. They are very forward-thinking, and each has a great place in the enterprise and cloud space.

Tintri logo

Tintri is built on an application-aware storage architecture; the array is purpose-built for virtualization. Tintri has three main features: clone, snapshot and protect. The cloning feature is pretty nice because it allows you to clone from an existing, past or present snapshot (a Tintri storage snapshot) to another Tintri array or to the same one.

The array can give you a view from the vCenter web client of all the snapshots from Tintri.

Tintri snapshots







You can set up a schedule to "protect" the VM with a snap and make it crash-consistent. You can keep this local to the array or replicate it to another Tintri array.

Tintri protect

If you replicate the snapshot to another array, you can view the bytes remaining to replicate and the throughput.

Tintri replication throughput

Tintri replication status
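Since the array only reports bytes remaining and throughput, a quick back-of-the-envelope estimate of when replication will finish (assuming the throughput holds steady) looks like this:

```python
def replication_eta_seconds(bytes_remaining: int, throughput_bps: float) -> float:
    """Estimate seconds until replication completes at a steady transfer rate."""
    if throughput_bps <= 0:
        raise ValueError("throughput must be positive")
    return bytes_remaining / throughput_bps

# Example: 120 GB left at 250 MB/s is a little over 8 minutes.
remaining = 120 * 1024**3   # bytes remaining, as reported by the array
rate = 250 * 1024**2        # bytes per second
print(round(replication_eta_seconds(remaining, rate) / 60, 1))  # → 8.2
```

Obviously real replication rarely runs at a constant rate, so treat this as a rough estimate, not a commitment.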

All of these features are great, but what are you going to do with that replicated copy once it is on the other side? There is no orchestrated way of bringing it online or enabling reverse protection once you have it up. I’m sure there is a way to script something with the PowerShell cmdlets, but that would require many man-hours; Zerto does this for you. To replicate VMs from one location to another, two separately licensed products must be purchased: Tintri Global Center and ReplicateVM. In my use case, I would use something like this to replicate VMs to another datacenter so that I could import them into vCloud catalogs or work with a production VM offline. The cloning feature would also be great for creating VDI sessions.
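To give a sense of what “bringing the replica online yourself” involves, here is a minimal sketch using VMware’s pyVmomi SDK. The vCenter address, datastore and VM names are hypothetical, and this deliberately skips the error handling, re-IP work and reverse protection that an orchestration product handles for you:

```python
def vmx_path(datastore: str, vm_name: str) -> str:
    """Build the '[datastore] folder/name.vmx' path vCenter expects when registering a VM."""
    return "[{}] {}/{}.vmx".format(datastore, vm_name, vm_name)

if __name__ == "__main__":
    # Hypothetical recovery-site details -- adjust for your environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter-dr.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    dc = content.rootFolder.childEntity[0]         # first datacenter
    cluster = dc.hostFolder.childEntity[0]         # first cluster/compute resource
    host, pool = cluster.host[0], cluster.resourcePool

    # Register each replicated VM, then power it on -- and this still
    # leaves network fix-up and reverse protection as manual steps.
    for name in ["app01", "db01"]:
        task = dc.vmFolder.RegisterVM_Task(
            path=vmx_path("tintri-replica-ds", name),
            name=name, asTemplate=False, pool=pool, host=host)
        while task.info.state not in (vim.TaskInfo.State.success,
                                      vim.TaskInfo.State.error):
            pass  # crude busy-wait; production code should poll with a delay
        task.info.result.PowerOnVM_Task()

    Disconnect(si)
```

Even this optimistic sketch ignores failback entirely, which is exactly the kind of scripting effort the “many man-hours” comment refers to.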


Tegile logo

I had a hard time finding technical documentation on the Tegile website; most of what I found was marketing material. You will find plenty of whitepapers, solution briefs and customer stories, but not much on how replication functions. What I did find was a lot of product demos on YouTube. One walks through dedupe, compression and recovery, and on the recovery piece you can see that it is still a manual process, nothing automated like what Zerto provides. Tegile has also partnered with Voonami to provide offsite replication with its array.

Tegile does offer SMB 3.0 file-level access, which can be used as a Zerto backup target. They have also partnered with Veeam to provide a backup solution, and Veeam does have a great set of tools.

Solidfire logo

SolidFire delivers native snapshot-based backup and restore functionality usable with any object store or device that has an S3- or Swift-compatible API. SolidFire now offers the SF2405 and SF4805, with enhancements to its real-time replication offering, including a storage replication adapter (SRA) for integration with VMware Site Recovery Manager (SRM).

The real-time replication built into the array is not a full DR solution. Investments must be made in SRM, and you must have like arrays at the source and destination. On the VMware side, this would require a 25-pack of SRM licenses or vCloud Suite licensing on all of your hosts.


Nimble logo


I think Nimble has the most comprehensive tools when it comes to replication. Nimble has a post on Nimble OS 2.0 that walks through how to configure replication; it covers only the array-based replication. For backup, Nimble includes a set of tools for backup and recovery, and has also partnered with CommVault Simpana to provide a more comprehensive backup and recovery process. You will need to register on the Nimble website to get all the details in the best practices document. What it comes down to is that the CommVault solution is still a backup process. It is not a real-time replication product like Zerto; it relies on array-based snapshots and does not operate at the kernel level like Zerto’s VRA. The recommended snapshot interval from CommVault is 15 minutes, a lot like vSphere Replication out of the box, although Nimble itself can take snapshots as frequently as every minute. The recovery process is still manual.
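The practical difference between a 15-minute snapshot interval and a 1-minute one is worst-case data loss. A rough worked example (the 5 MB/s change rate is an assumed figure purely for illustration):

```python
def worst_case_data_loss_mb(snapshot_interval_min: float,
                            change_rate_mb_per_s: float) -> float:
    """Data written since the last snapshot is what you lose if the site
    fails just before the next one: interval * change rate."""
    return snapshot_interval_min * 60 * change_rate_mb_per_s

# At an assumed change rate of 5 MB/s:
print(worst_case_data_loss_mb(15, 5))  # 15-min snapshots → 4500.0 MB at risk
print(worst_case_data_loss_mb(1, 5))   # 1-min snapshots  → 300.0 MB at risk
```

Continuous, kernel-level replication shrinks that exposure to seconds of change rather than minutes, which is the gap the snapshot-based approaches leave open.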

Nimble also integrates with VMware SRM, which is the DR method listed for the array in a VMware environment. They have also published a webinar demo of the array with SRM.


To sum it up, I think each of these storage vendors has great potential to replace some virtualization backup solutions, but not to replace an orchestrated BC/DR solution like Zerto. When you look at one of these storage vendors, think about what your current backup solution does and how it compares to what the array provides. Zerto does include an offsite backup solution with the product, but today it does not provide dedupe or compression at the source like Avamar does. However, you do get dedupe when you use a backup target such as Windows Server 2012 R2 with deduplication turned on, or a storage array that offers it. The target must be an SMB share. Or you can just use a TNT drive and back up to the cloud. What I do like about the Zerto offsite backup product is that it runs the backup against the replicated VM at the target site. This reduces resource overhead at the source site, and you would not need dedupe at the source since the backup does not happen there.

Think of it this way: if you get fed up with your current storage vendor and want to move to something else, how would you go about reconfiguring a BC/DR plan? If you just use Zerto, you will not have to worry about losing any features, because the product is storage agnostic! If you made investments in SRM, you might find yourself locked into sticking with the same storage vendor. Storage array vendors treat SRM like a car dealer treats the third-party warranties they sell when you buy a new car: they may not tell you about another warranty company if it is not built into the product line they are selling, and it is up to you to know there are other options out there. It is in the storage vendor’s best interest to get you tied into SRM so that you will either buy more than one array or stick with the same vendor down the road to keep the array replication features. If a storage vendor requires an additional license for a VM protection feature, it may be in your best interest to stick with one solution like Zerto to reduce overhead. Sometimes technology overlap is unavoidable, but look to Zerto as the BC/DR solution for your virtualization environment.

I will be adding more vendors to this list as I have time.

Dell Compellent vSphere Web Client Plugin 2.0 permissions

I was working with the VMware vSphere web client plugin 2.0 for Compellent storage and came across a small roadblock with the service account permissions. Getting the virtual appliance set up is pretty straightforward; just get your head wrapped around what the CITV is and how it interacts with the Enterprise Manager server (“EM Server”), which talks to your storage controllers. The CITV does not talk directly to your Compellent controllers.

Looking at page 3 of the administration guide, you configure the vSphere web client plugin in vCenter with the service account you want to use.

Compellent account

The first thing you need to do is configure credentials for the CITV to launch tasks in vCenter. I’m a fan of using dedicated service accounts for virtual appliances. This is a Windows account that needs to be specified; however, the administration guide does not say what level of permissions this account needs in vCenter. I sent an email over to Jason Boche at Dell, and he confirmed that the account needs top-level “administrator” permissions for now. They are working on changing this in the next release.
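If you script your vCenter setup, granting the service account a propagating top-level Administrator permission looks roughly like this with pyVmomi. The account name and vCenter address are hypothetical; the built-in Administrator role uses roleId -1:

```python
def admin_permission_spec(principal: str) -> dict:
    """Fields for a propagating, top-level Administrator permission.
    roleId -1 is vCenter's built-in Administrator role."""
    return {"principal": principal, "group": False, "roleId": -1, "propagate": True}

if __name__ == "__main__":
    # Hypothetical environment details -- adjust before use.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    perm = vim.AuthorizationManager.Permission(
        **admin_permission_spec("DOMAIN\\svc-citv"))
    # Apply at the root folder so it propagates to the whole inventory,
    # matching the "top level administrator" requirement above.
    content.authorizationManager.SetEntityPermissions(
        entity=content.rootFolder, permission=[perm])
    Disconnect(si)
```

Once Dell relaxes the requirement in a future release, the same call could assign a more restrictive custom role instead of roleId -1.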