Quest 2017 webinars

Here is the schedule of upcoming webinars hosted by Quest product experts. Join each session to learn more about the products and how they can benefit you. Quest subject matter experts have put together some great content for you!




Time (PT) | Session Topic | Presenter
10/26 9:00 AM | ViPR Storage Resource Management: Visualize, Analyze and Optimize Your Datacenter | Rich Colarusso and Roy Laverty
10/26 11:00 AM | Dell EMC Data Center Modernization and Migration Services Deliver 81% ROI | Jon Erickson (Forrester Research) and Ted Streck (Dell EMC)
10/31 9:00 AM | Oracle Database Protection Direct | Diana Yang
11/2 9:00 AM | Transform Enterprise File Services with Dell EMC Elastic Cloud Storage & CTERA | Jim Crook (CTERA); Brian Giracca and Doreen Eatough (Dell EMC)
11/9 9:00 AM | Databases are ready for containers: Learn how and why to do it with a demo | Bala Chandrasekaran

VMworld session calendar export

The option to download a calendar with your sessions is now available on the VMworld 2017 sessions page. I have been checking for this option for weeks. The site had instructions for downloading these, but the option itself was never visible. It looks like they finally got around to reorganizing some of the pages. Now if we can just get the app on our phones.

You also have the option to export your schedule as a CSV. Thanks, VMworld.

Dell vCenter Management plugin 1.7 and server iDRAC updates error

If you run iDRAC updates on your VMware hosts, you might run into this error with the Dell Management plug-in for VMware vCenter: “Fail – Unable to contact iDRAC. Check iDRAC credentials and network connectivity.”







I ran into this when upgrading the iDRAC from 2.30.30 to 2.40.40. I was able to log in to the iDRAC directly and ping the iDRAC from the Dell virtual appliance. The Dell vCenter plug-in was the only thing that could not log on to the host iDRAC. The issue turned out to be in the iDRAC settings under Network > Services: the web server needs to be set to TLS 1.0.







Unfortunately, the Dell OMI plug-in only works with TLS 1.0, but Dell hopes to add support for newer versions in the future.

So your options are to change the TLS setting in the iDRAC or to leave your iDRAC firmware below 2.40.40.
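If you have more than a handful of hosts, the same change can be scripted instead of clicking through each iDRAC web UI. Here is a minimal sketch using remote RACADM; the IP address and credentials are placeholders, and you should verify the attribute name and allowed values against the RACADM reference for your firmware level:

```
rem Check the current web-server TLS setting on a host's iDRAC
racadm -r 192.168.1.120 -u root -p <password> get iDRAC.WebServer.TLSProtocol

rem Allow TLS 1.0 so the vCenter plug-in can connect again
racadm -r 192.168.1.120 -u root -p <password> set iDRAC.WebServer.TLSProtocol "TLS 1.0 and Higher"
```

Looping this over a list of iDRAC addresses gets you through a whole cluster quickly, and the same `get` command is a handy way to audit which hosts are still on the stricter setting.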

Skype for Business on vSphere

Is it supported? That is an interesting question if you ask a Microsoft consulting company. You might just get a mixed bag of answers; the goal of a Microsoft consulting company is to push Hyper-V. Lync and Skype for Business are absolutely supported on VMware hypervisors, under Microsoft's “Server Virtualization Validation Program”. UC products like Skype and Lync do not fall under the same restrictions as Exchange when it comes to the storage platform. Microsoft will not support Exchange on NFS storage (even though the same conditions behind the restrictions exist in SMB3). There are no known storage-platform restrictions for Lync or Skype.

The design considerations in the “Planning a Lync Server 2013 Deployment on Virtual Servers” guide are all geared towards Hyper-V. VMware took issue with this article (detailed here) and asked why Microsoft never created a validation document for VMware (clearly a market leader). To date, Microsoft still has not published such a document, and I do not expect them to publish a favorable article for a competing product. When designing your environment, do use the guidelines listed by Microsoft, but pay no attention to the restrictions on HyperThreading and memory sharing. Microsoft offers no good technical justification for disabling these options when using VMware products.

I work in an environment with a multi-pool global Skype deployment for 5,000 users and a US pool for 5,000 users, all running on vSphere. I have not had any hypervisor-related issues, and I've never had issues with Microsoft support when it comes to having the platform on VMware products.

Don’t be persuaded that vSphere is not the best platform for Skype or Lync. I’ve heard comments like “so you’ve chosen the most expensive and complex product for your environment” or “you are not guaranteed to get support from Microsoft if you have issues in your environment”. That last statement would be somewhat true if the environment was poorly designed. Just make sure your design considerations fall within Microsoft guidelines.





Where is the VCIX6-DCV exam?

With much cheering and confusion, the VCIX exams were announced last February.

Current VCAPs wondered, “what do I need to upgrade?”. The upgrade path for current VCAP-DCA and VCAP-DCD holders seems clear enough in the link above.

Where is the exam? Well, VMware Education released the VCIX-NV right away, but there is still no definite date for when the VCIX6-DCV will be released. This is the word I received from VMware Education:

“I would like to inform you that, VMware is in process of releasing VCP6-DCV Exam now, so after this VCIX6-DCV Exam should be release. Please note that, as of now we do not have exact date of the release or the update, most probably it should be release by the end of the year. Please go through our website for the upcoming updates.”

I really had high hopes that the exams would be released in time for VMworld 2015, but it looks more like the end of the year. I would say to anyone who has been thinking of holding off on taking the VCAP-DCD or DCA: go ahead and take the exams now.

VMware has provided a link where you can sign up for notifications about the exam release.

Discussing Auto Deploy dependencies

In studying for the VCAP-DCD exam (objective 2.2, “Map Service Dependencies,” in the VDCD550 exam blueprint), I noticed a few comments in the Google+ community about what exactly the dependency map of VMware Auto Deploy is: specifically, Active Directory, DNS and PowerCLI.

Yes, it is true: you can have an environment with no Active Directory by using the VMware built-in accounts like “administrator@vsphere.local”. And you could forgo DNS by using host files. Nowhere in the Auto Deploy setup guide does it mention a requirement for Active Directory, only that administrative rights are given to Auto Deploy. DNS is mentioned in the Auto Deploy “proof of concept” setup, and nearly all setup guides include DNS configurations. The configuration calls for a static DNS record (to avoid DNS scavenging, I assume) and a DHCP reservation for the IP address of the host. DHCP is required, but DNS could still be replaced by host files (yuk).

host files YUK
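For completeness, replacing DNS with host files just means every Windows server in the chain (vCenter, the TFTP/DHCP boxes, and so on) carries the same static entries in C:\Windows\System32\drivers\etc\hosts. A sketch of what that file might hold in a lab like this, with made-up addresses and names:

```
# Static name resolution in place of DNS (illustrative lab values only)
192.168.110.10   vcenter01
192.168.110.11   tftp01
192.168.110.12   dhcp01
```

Every server needs an identical copy, and every change means touching them all, which is exactly why "yuk" is the right word for it.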

You will be hard pressed to find any VMware document on Auto Deploy that says “this is required” and “this is optional”. There is no product map for dependencies.

Think of it like this: suppose I gave you the task of getting a car running and laid out some parts for you: an engine, windshield wipers, transmission, starter, battery, wheels, ignition and some bucket seats. If you were in a hurry and saw all of these parts, you would put everything together. But all I tasked you with was getting the car running; I didn't say it was going anywhere. What would be required? Most likely the engine, battery, starter and ignition. Windshield wipers, wheels, transmission and bucket seats are not required to start the car. If I started listing crazy stuff like truck nuts, ejection seats or a flame paint job, you would know to disregard those because they are not relevant. I know it is a timed exam, but slow down and read what is being asked.

Sitting at an exam, who really thinks in terms of non-enterprise environments? You need to think in the realm of any possibility. I guess in any situation there could be the very most complex, administrative overhead process for deploying a product. And there could be the simplest, least management option. You could be designing something for a small dentist office or a global data center for PayPal. Even in those small offices, it is now possible for them to purchase ROBO advanced edition licenses to get host profiles instead of purchasing Enterprise Plus licenses.

I myself do not know of any environments utilizing VMware Auto Deploy, and I would think there are even fewer using host files. But then, I am not a consultant and have not seen everything. I suppose if it is possible for an environment to use host files on every server, it might be possible the environment is full of Wal-Mart Lindows machines and Windows 95.

The only way to get truly familiar with Auto Deploy dependencies is to deploy it. I am by no means the “Auto Deploy Master”; I do not use it. I'm a fan of SD cards (not USB sticks) in production. Below is the setup I will go through in order to map the required dependencies for Auto Deploy in vSphere 5.5. We will see which items are “required” and “not required”. This is not meant to be a step-by-step procedure to install Auto Deploy; I am just going over dependencies. The lab I will be working with is bare bones, covering only what is “required”. If you would like to see the setup and configuration procedure, please visit this link.

A few things that are “required” to get started with the Auto Deploy setup are vCenter, a TFTP server and DHCP (options 66 and 67). We will also need to configure the hosts to PXE boot. For this lab, I will be using nested ESXi servers to boot.
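On a Windows DHCP server, options 66 and 67 can be set per scope from the command line as well as from the MMC. A hypothetical sketch with netsh, run locally on the DHCP server; the scope, TFTP server address and boot file name below are lab values, not defaults:

```
rem Option 66: the boot (TFTP) server the PXE host should contact
netsh dhcp server scope 192.168.110.0 set optionvalue 066 STRING "192.168.110.11"

rem Option 67: the boot file name to request from that server
netsh dhcp server scope 192.168.110.0 set optionvalue 067 STRING "undionly.kpxe.vmw-hardwired"
```

The boot file name has to match the file you extract from the Auto Deploy TFTP boot ZIP later on, so it is worth double-checking the exact spelling.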

I have to say, this is the first time I have ever deployed vCenter on a non-domain-joined server. I will be setting up DHCP on a Windows 2012 R2 server and TFTP with the WinAgents TFTP server.

AutoD01 - VC workgroup


You will get a warning during the setup of vCenter that you are not joined to the domain.

AutoD03 - VC workgroup


The “administrator@vsphere.local” can be used for all logins.

AutoD03 - VC user

In my lab, I will be using host files on 3 servers.

AutoD04 - host files



On my Windows 2012 TFTP server, I had to unblock all of the files in the TFTP root.

AutoD05 - unblock tftp
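Unblocking files one at a time through Explorer's file properties gets old fast. On Windows 2012 (PowerShell 3.0 and later) the whole TFTP root can be unblocked in one line; the path here is just an example, so substitute your actual TFTP root:

```powershell
# Remove the "downloaded from the internet" Zone.Identifier flag
# from every file under the TFTP root
Get-ChildItem -Path C:\TFTP-Root -Recurse -File | Unblock-File
```

This does the same thing as clicking Unblock on each file, just in bulk.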

So far we know that TFTP will depend on DHCP. The host will first depend on DHCP to get an IP address. With options 66 and 67, the host will then pull the “undionly.kpxe.vmw-hardwired” file from the TFTP server. This file contains instructions for the host to get an image profile (not a “host profile” from vCenter) and what VIBs to pull from the image depot/repository on the Auto Deploy server.

Once vCenter is up, you need to download the TFTP boot ZIP from vCenter (after Auto Deploy is set up with vCenter). So we now know that TFTP depends on the Auto Deploy server, because it must get the “undionly.kpxe.vmw-hardwired” configuration file to boot remote hosts from.

PowerCLI is now needed to import VIBs or offline bundles into the image depot/repository, so the Auto Deploy server is dependent on image profiles and the image depot. PowerCLI is used to create the deployment rules (image profiles) for each ESXi host (or group of hosts); each rule must then be added to the active rule set so that it will take effect.
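To make that flow concrete, here is a minimal PowerCLI sketch of going from an offline bundle to an active deploy rule. The file path, profile name and rule name are placeholders, and it assumes you are already connected to vCenter with Connect-VIServer:

```powershell
# Load the ESXi offline bundle into the session's image depot
Add-EsxSoftwareDepot C:\Depot\ESXi-5.5.0-offline-bundle.zip

# List the image profiles the depot provides
Get-EsxImageProfile

# Create a deployment rule mapping hosts to an image profile
New-DeployRule -Name "LabHosts" -Item "ESXi-5.5.0-standard" -AllHosts

# Add the rule to the active rule set so it takes effect
Add-DeployRule -DeployRule "LabHosts"
```

In a real environment you would likely scope the rule with -Pattern (for example, by IP range or vendor string) instead of -AllHosts.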

Here is a question: if there is no DNS, how will the newly provisioned hosts resolve hostnames without a host file? The image is brand new and does not carry a host file. When the host is added to vCenter, it will actually use the IP of vCenter. So we still do not need DNS.

AutoD08 - managed by VC IP

After the host is connected, we then create the host profile from the newly attached host. This host profile will be applied to all the clusters, but this is not a requirement: we were able to successfully add a host to vCenter, and the requirements of the product do not call for you to make custom changes to any hosts. As far as Auto Deploy is concerned, its job is done. It is possible to include the vCenter host profile as part of the deployment rule, but that is not a requirement to get Auto Deploy running.

So what do we know so far? Thou shalt have: vCenter, TFTP, DHCP, a host to boot, an image depot, image profiles (for the active rule set), an Auto Deploy server and PowerCLI. Can the PowerCLI part be argued? Maybe. Perhaps there is some way to manipulate the vCenter database to create an image profile and upload the VIBs to the depot, or someone has a third-party tool to create image profiles. As far as I can tell, though, creating image profiles and uploading VIBs to the image depot must be done via PowerCLI. Removing PowerCLI from the equation would seem far outside the normal operations of Auto Deploy. But then again, I thought Active Directory and DNS were part of the normal operation. The Image Builder itself would not be considered a requirement in the dependency map either: you have the option to download the offline bundles from VMware and include your vendor's hardware VIBs with the deployment.

To examine each dependency, consider what would happen if each component were not available.

vCenter: With no vCenter, how would you install the Auto Deploy server? It would not be possible. Where would hosts go?

TFTP: Without a TFTP server, how would the host PXE boot receive the undionly.kpxe.vmw-hardwired file and then get configuration information from Auto Deploy?

DHCP: Without DHCP, how would a stateless host get an IP address and know what to do from there? DHCP would be the first link in the chain for the host to boot from PXE and do anything.

Host: Without a host, what good is all that Auto Deploy configuration?

Image depot: Without an image depot, where would the host get an ESXi image or hardware vendor VIBs?

Image profile: Without an image profile, how would the host get deployment rules from the Auto Deploy server?

PowerCLI: Without PowerCLI, how would you create the image profiles and image depot?

Auto Deploy server: Without the Auto Deploy server itself, where would the image profiles and the image depot live? The Auto Deploy server is the traffic cop directing hosts to the vCenter server via the image profiles and image depot.

So after all of this, what would a VMware Auto Deploy “requirements” dependency map look like?

Auto deploy dependency


Tintri and XenDesktop: my 411

When I first started looking at using Tintri for my VDI environment, all I could find were white papers and webinars that said “best of the best of the best SIR!”.


I did not know how this mystical unicorn “Tintri Clones” could help me. What is the mechanism that will help get VDI off the ground? How is this so different from what I use today?

In my particular situation, I am using XenDesktop 7.5 MCS with PvD on top of vSphere. I originally used Personal vDisks as a way to save disk space: I would redirect the user data to a NAS and use the PvD as user application install space. I found this to be a great provisioning method on a traditional SAN, since I would have a central image to maintain and could push updates out to a catalog. But storage migration is a nightmare when using PvD because it is not supported by Citrix; the way around this is to do a backup and restore onto a new pool of desktops. You can continue to use PvD with Tintri, but it really becomes unnecessary. I will show you why.

The plan: build a new pool of desktops on Tintri storage to take advantage of its features. How do I take advantage of the array features and space savings? It was really quite simple. I was expecting to find some complicated configuration setup between Citrix, Tintri and VMware to get things working, but it all really boils down to the Tintri host plugin and cloning your master image on the Tintri array. The host plugin activates the Tintri VAAI support for reduced host IO, space savings and fast deployment of VMs by offloading the process to the array. Offloading processes to the array is not something new; a lot of vendors do this today. Tintri offers a few whitepapers on how to use Citrix with Tintri.

There are some important pros, cons and gotchas to go over. The most important gotcha, I think, is that a catalog should not be updated or created from a VM that has snapshots. Doing so negates the space savings and creates full clones of the desktops. It is important to remember that the base image for the catalog should be created from a Tintri clone. This involves logging on to the array and cloning your master image; simply cloning from the vSphere web client or C# client will not do it. This clone will be used to spawn all of your virtual machines. If you are using PvD, you would then re-clone this master image (with no VM snapshots) to push any updates out to your catalog from Citrix with MCS.

From the diagram below, you can see how Personal vDisks work. The user/application data is saved out to a separate drive. Each vSphere datastore gets a copy of the base disk for each VM to link back to, and each VM with MCS (Machine Creation Services) also gets a small 16MB identity disk. I can say that I've had Citrix issues with every other release using Personal vDisks; adding this piece to your deployment adds a layer of complexity. I find it much easier to just use clones of the master image as regular desktops. You already get space savings from the Tintri array versus PvD. The only advantage would be the ability to push updates out to a catalog from a master image when using a PvD catalog.




The only negative of using a machine catalog that saves data to the local disk instead of a PvD is that you cannot grow the drive on an individual VM from the vSphere console. Each VM is tied to a Citrix-created snapshot in vSphere with the base disk, and a virtual machine with a snapshot cannot change its drive size even when powered off. This is not a negative aspect of Tintri; it is a function of Citrix. How do you grow the drives? You would need to create dedicated machines from Tintri clones or clone the VM to an individual VM that is not tied to a master (this would involve creating a new pool).

Citrix lays out disks in different ways for dedicated machines and PvD machines in MCS. PvD machines link each VM's C drive back to a base master disk in each datastore, while each dedicated machine links back to an individual snapshot of the master disk, so you are dealing with many more C drives that could grow, versus a single growing PvD on each machine. To review this process, visit the Citrix documentation.

I am not going to cover PVS provisioning, that will be for another post.

So why is it the “best of the best of the best, Sir!”? There are auto-tiering systems, and then there is Tintri's AWS, the Active Working Set, which serves 99% of IOPS from flash. My users get data in flash when they need it.

Overall, I found the process of creating my VDI environment on Tintri quite easy. It is so easy to investigate who is doing what in the Tintri console! No complicated Java setups or fat clients that require days of training. It is important to review all of the Tintri best practices before you go about re-platforming your environment. You don't want to go spawning desktops in a catalog from a VM you built that has snapshots!


Why I chose Tintri

Let me first say: no, I did not pick Tintri because of all those vExpert shirts. I selected them because they have a fantastic storage platform that is so easy to use!

I spent months going through a lot of the newer storage platforms. The enterprise I work in now has been with a couple of the big block vendors for years and has spent a ton in maintenance fees every year, with little in the way of array features we can use when it comes to virtualization. Yes, we went from Fibre Channel to an IP storage fabric. I was looking for something purpose-built for virtualization: something so simple to set up and manage that I could get a junior admin to run it without having them attend a complex storage course. The array really does take only 10 minutes to set up. Should I be more impressed with an array if it takes a team of consultants days to set up? No way! Does it mean the features Tintri provides are limited because the array does not require a team of storage admins to manage? I found VMstore very feature-rich when used for virtualization. I'm not dogging the big block vendors; each one is multi-purpose for different types of workloads in the physical and virtual world. With the introduction of VVOLs, the block vendors will be easier to use when it comes to leveraging storage. We will soon see if any of the big boys on the block will provide code upgrades to existing gen 1 arrays for VVOLs.

As technology progresses, innovation should include ease of use, and Tintri got it right with the VMstore product. Tintri Global Center is a virtual appliance used to manage all of the VMstore arrays, and it has a very clean look and feel for troubleshooting performance. My only gripe about Global Center is that it does not have LDAP integration for user logins. Tintri also has a vRealize Operations management pack for the VMware vRealize Suite you can use if you do not want to dig around in Global Center.

I see other storage vendors catching up to Tintri, but I can't wait a few years for the big block guys. The amount of storage I get in such a small footprint in my racks is amazing! Always take a cautious approach when selecting a storage vendor. Keep in mind: What kind of vendor lock-in will I be stuck with based on the features used? What does it take to upgrade the array firmware? (I already did this one; it's non-disruptive and non-destructive.) And how will I move data if I upgrade or want to get off of the array?

I don’t want to get into all of the performance gains and use-case statistics; these can be different for everyone based on your environment. Doing a POV (proof of value) is a very easy process. Just get in touch with your local Tintri rep or VAR.

I realize this post is not overly technical, but in follow on posts I will do some lab work on building Citrix environments on Tintri.

Why do I participate in the VMware vExpert program?

Yes, getting all the free stuff is pretty cool. But being involved in the community is what it is all about for me. Sharing knowledge helps everyone. It is more than sending a couple of tweets or making a few blog posts throughout the year. Sometimes I may run across an issue that at least one other person out there has seen before. When you blog about it, that may just save the weekend! Experiences with new products can be invaluable when looking at something for your company. I have had more traffic from people looking at my posts on Dell products and VMware than anything else. And no, I do not work for Dell or VMware. I would like to think that a few things I have blogged about have been used in deployments. Getting involved in VMUG and chatting with others is a big part as well.

I do not receive anything from my current employer in the enterprise space for participating in the program. I mention it, but it really does not mean much in the enterprise space, and I really don’t expect anything from my current employer either. At most, I get to attend VMworld every year. I really love the post from Hans De Leenheer, “So now your a vExpert”. It is good advice on how to use the vExpert title where you work.

Do I consider myself an “expert” in all VMware products? No way; I still need to wrap my head around NSX. But I have worked with VMware since the early days of the product, and I feel I have a good grasp of where the company started and where they are today. Virtualization is such a large part of the IT landscape today that not getting involved may just leave you in the dust with the Windows 2003 era.

The most valuable part of the vExpert program for me is the Pluralsight training. I use it all the time, and it is the greatest training platform out there. I share my experiences with this training and encourage everyone to look into it.

I am honored to be on the list again this year for the third time. I have applied every year since the program started, and I would be interested in seeing how many vExperts nominate themselves. Early in the program I heard others say, “I’m not going to nominate myself, I think someone else should do that”. Unless you are a superstar in the consulting space, a book author or a VMUG leader, I do not see many opportunities for a passionate IT guru in the enterprise space to have others nominate them; I think everyone else must be doing a self-nomination. If you do not take the effort to apply, you may be missing out on taking part in a great program. It is up to the VMware Social Media & Community Team to select the winners. The criteria: “The VMware vExpert Award is given to individuals who have significantly contributed to the community of VMware users over the past year. vExperts are book authors, bloggers, VMUG leaders, tool builders, and other IT professionals who share their knowledge and passion with others. These vExperts have gone above and beyond their day jobs to share their technical expertise and communicate the value of VMware and virtualization to their colleagues and community.”

Why do I participate in the VMware vExpert program? I believe in the technology. How well we understand virtualization, and how we use it, will determine the future of business.