Tintri and XenDesktop: my 411

When I first started looking at using Tintri for my VDI environment, all I could find were white papers and webinars that said “best of the best of the best, SIR!”.

nestofthebest1

I did not know how this mystical unicorn “Tintri Clones” could help me. What is the mechanism that will help get VDI off the ground? How is this so different from what I use today?

In my particular situation, I am using XenDesktop 7.5 MCS with PvD on top of vSphere. I originally used personal vDisks as a way to save disk space. I would redirect the user data to a NAS and use the PvD as user application install space. I found this to be a great provisioning method on a traditional SAN. I would have a central image to maintain and push out the updates to a catalog. But storage migration is a nightmare when using PvD because it is not supported by Citrix. The way to get around this is to do a backup and restore onto a new pool of desktops. You can continue to use PvD with Tintri, but it really becomes unnecessary. I will show you why.

The plan: build a new pool of desktops on Tintri storage to take advantage of its features. How do I take advantage of the array features and space savings? It was really quite simple. I was expecting to find some complicated configuration setup between Citrix, Tintri and VMware to get things working. It all really boils down to the Tintri host plugin and cloning your master image on the Tintri array. The host plugin activates the Tintri VAAI support for reduced host IO, space savings and fast deployment of VMs by offloading the process to the array. Offloading processes to the array is not something new; a lot of vendors do this today. Tintri offers a few whitepapers on how to use Citrix with Tintri: http://www.tintri.com/blog/2014/05/tintri-citrix-xendesktop-citrix-ready .

There are some important pros, cons and gotchas to go over. The most important gotcha is that a catalog should not be updated or created from a VM that has snapshots. Doing so negates the space savings and creates full clones of the desktops. It is important to remember that the base image for the catalog should be created from a Tintri clone. This involves logging on to the array and cloning your master image; simply cloning from the vSphere web client or C# client will not do it. This clone will be used to spawn all of your virtual machines. If you are using PvD, you would then re-clone this master image (with no VM snapshots) to push out any updates to your catalog from Citrix with MCS.
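If you want to script that sanity check, a quick PowerCLI snippet can confirm the master image is snapshot-free before you create or update a catalog. This is a minimal sketch; the vCenter address and VM name are examples from my lab, so adjust for your environment.

# Minimal pre-flight check with VMware PowerCLI. Server and VM names are examples.
Connect-VIServer -Server vcenter.lab.local

$master = Get-VM -Name 'Win7-Master'
$snaps = Get-Snapshot -VM $master

if ($snaps) {
    # MCS would fall back to full clones, losing the Tintri space savings.
    Write-Warning "$($master.Name) has $($snaps.Count) snapshot(s) - remove them before building the catalog."
    $snaps | Select-Object Name, Created
}
else {
    Write-Host "$($master.Name) is snapshot-free - safe to use as the catalog base."
}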

From the diagram below, you can see how personal vDisks work. The user/application data is saved out to a separate drive. Each vSphere datastore gets a copy of the base disk for the VMs to link back to. Each VM with MCS (Machine Creation Services) also gets a small 16MB identity disk. I can say that I’ve had Citrix issues with every other release when using personal vDisks. Adding this piece to your deployment adds a layer of complexity. I find it much easier to just use clones of the master image as regular desktops. You already get space savings from the Tintri array vs PvD. The only advantage would be the ability to push out updates to a catalog from a master image when using a PvD catalog.


pvd_overview
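If you want to see this layout for yourself, a short PowerCLI sketch like the one below lists each desktop’s disks, and the ~16MB identity disk and the PvD stand out right away. The 'VDI-*' name pattern is an example.

# List every disk attached to the MCS desktops. The 'VDI-*' pattern is an example.
Get-VM -Name 'VDI-*' | ForEach-Object {
    $vmName = $_.Name
    Get-HardDisk -VM $_ |
        Select-Object @{N='VM'; E={$vmName}},
                      Name,
                      @{N='SizeGB'; E={[math]::Round($_.CapacityGB, 2)}},
                      Filename
}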


The only negative with using a machine catalog that saves data to the local disk instead of a PvD is that you cannot grow the drive on an individual VM from the vSphere console. Each VM is tied to a Citrix-created snapshot in vSphere with the base disk, and a virtual machine with a snapshot cannot change the drive size even if it is powered off. This is not a negative aspect of Tintri; it is a function of Citrix. How do you grow the drives? You would need to create dedicated machines from Tintri clones, or clone the VM to an individual VM that is not tied to a master (this would involve creating a new pool). A quick way to see why the resize fails is sketched below.
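Here is a rough PowerCLI illustration of the behavior, assuming an example desktop name; the resize only proceeds when no snapshot chain exists.

# 'VDI-User01' and the new size are examples.
$vm = Get-VM -Name 'VDI-User01'

if (Get-Snapshot -VM $vm) {
    # Every MCS-linked desktop sits on a Citrix-created snapshot, so this path is the norm.
    Write-Warning "$($vm.Name) is tied to a snapshot; vSphere will refuse to grow the disk."
}
else {
    Get-HardDisk -VM $vm -Name 'Hard disk 1' | Set-HardDisk -CapacityGB 60 -Confirm:$false
}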

Citrix lays out disks in different ways for dedicated machines and PvD machines in MCS. PvD machines link each VM’s C: drive back to a base master disk in each datastore, while each dedicated machine links back to an individual snapshot of the master disk. With dedicated machines you are therefore dealing with many more C: drives that can grow, versus a single growing PvD on each machine. To review this process, visit the Citrix documentation.

I am not going to cover PVS provisioning; that will be for another post.

So why is it the “best of the best of the best, Sir!”? There are auto-tiering systems, and then there is Tintri’s AWS, the Active Working Set, which serves 99% of IOPS from flash. My users get data from flash when they need it.

Overall, I found the process of creating my VDI environment on Tintri quite easy. It is so easy to investigate who is doing what in the Tintri console! No complicated Java setups or fat clients that require days of training. It is important to review all of the Tintri best practices before you go about re-platforming your environment. You don’t want to go spawning desktops in a catalog from a VM you built that has snapshots!


Why I chose Tintri

Let me first say: no, I did not pick Tintri because of all those vExpert shirts. I selected them because they have a fantastic storage platform. It is so easy to use!

I spent months going through a lot of the newer storage platforms. The enterprise I work in now has been with a couple of the big block vendors for years and has spent a ton in maintenance fees every year, with little in the way of array features we can use when it comes to virtualization. Yes, we went from fiber to an IP storage fabric. I was looking for something purpose-built for virtualization. Something so simple to set up and manage that I could get a junior admin to run it without having them attend a complex storage course. The array really does only take 10 minutes to set up. Should I be more impressed with an array if it takes a team of consultants days to set up? No way! Does it mean that the features Tintri provides are limited because it does not require a team of storage admins to manage the array? I found VMstore very feature-rich when used for virtualization. I’m not dogging the big block vendors; each one is multi-purpose for different types of workloads in the physical and virtual world. With the introduction of VVOLs, the block vendors will be easier to use when it comes to leveraging storage. We will soon see if any of the big boys on the block will provide code upgrades for VVOLs to existing gen 1 arrays.

As technology progresses, innovation should include ease of use. Tintri got it right with the VMstore product. Tintri Global Center is a virtual appliance used to manage all of your VMstore arrays. Global Center has a very clean look and feel for troubleshooting performance. My only gripe about Global Center is that it does not have LDAP integration for user logins. Tintri also has a vRealize Operations management pack for the VMware vRealize Suite you can use if you do not want to dig around Global Center.

I see other storage vendors catching up to Tintri, but I can’t wait a few years for the big block guys. The amount of storage I get in such a small footprint in my racks is amazing! Always take a cautious approach when selecting a storage vendor. Keep in mind: what kind of vendor lock-in will I be stuck with based on the features used? What does it take to upgrade the array firmware? (I already did this one; it’s non-disruptive and non-destructive.) And how will I move data if I upgrade or want to get off of the array?

I don’t want to get into all of the performance gains and use case statistics. These can be different for everyone based on your environment. Doing a POV (proof of value) is a very easy process. Just get in touch with your local Tintri rep or VAR.

I realize this post is not overly technical, but in follow-on posts I will do some lab work on building Citrix environments on Tintri.

Why do I participate in the VMware vExpert program?

Yes, getting all the free stuff is pretty cool. But being involved in the community is what it is all about for me. Sharing knowledge helps everyone. It is more than sending a couple of tweets or making a few blog posts throughout the year. Sometimes I may run across an issue that at least one other person out there has seen before. When you blog about it, that may just save the weekend! Experiences with new products can be invaluable when looking at something for your company. I have had more traffic from people looking at my posts on Dell products and VMware than anything else. And no, I do not work for Dell or VMware. I would like to think that a few things I have blogged about have been used in deployments. Getting involved in VMUG and chatting with others is a big part as well.

I do not receive anything from my current employer in the enterprise space for participating in the program, and I really don’t expect anything either. I mention it, but it really does not mean much in the enterprise space. At most, I get to attend VMworld every year. I really love the post from Hans De Leenheer, “So now your a vExpert“. It is good advice on how to use the vExpert title where you work.

Do I consider myself an “expert” in all VMware products? No way, I still need to wrap my head around NSX. But I have worked with VMware since the early days of the product. I feel that I have a good grasp on where the company started and where it is today. Virtualization is such a large part of the IT landscape today. Not getting involved may just leave you in the dust with the Windows 2003 era.

The most valuable part of the vExpert program for me is the Pluralsight training. I use this all the time and it is the greatest training platform out there. I share my experiences with this training with everyone and encourage everyone to look into it.

I am honored to be on the list again this year for the third time. I have applied every year since the program started. I would be interested in seeing how many vExperts nominate themselves. Early in the program I heard others say “I’m not going to nominate myself, I think someone else should do that”. If you are a passionate IT guru in the enterprise space, I do not see that there would be many opportunities for others to nominate you. Unless you are a superstar in the consulting space, a book author or a VMUG leader, I think everyone else must be doing a self-nomination. If you do not take the effort to apply, you may be missing out on taking part in a great program. It is up to the VMware Social Media & Community Team to select the winners. The criteria to be selected: “The VMware vExpert Award is given to individuals who have significantly contributed to the community of VMware users over the past year. vExperts are book authors, bloggers, VMUG leaders, tool builders, and other IT professionals who share their knowledge and passion with others. These vExperts have gone above and beyond their day jobs to share their technical expertise and communicate the value of VMware and virtualization to their colleagues and community.”

Why do I participate in the VMware vExpert program? I believe in the technology. How well we understand virtualization, and how we use it, will determine the future of business.

Zerto vs array based snapshots with replication

I want to take a moment and discuss a few feature sets from some of the popular storage vendors on the market today and where their replication technology may overlap with Zerto. I have been doing some shopping lately for a storage array. I’m not talking about the big boys at EMC, NetApp, IBM or HP. I’m talking about the vendors growing in popularity like SolidFire, Nimble Storage, Tintri and Tegile. Violin and Pure Storage are all-flash arrays; these are a different animal but provide the same type of replication and snaps. When you are looking at these storage array vendors in the enterprise or cloud space, more often than not you will find overlapping features with tools you may already own. The Zerto documentation states that replication and snapshot management requires IT overhead when using the built-in storage array features. How true is this? Nimble Storage is the only one on my list that has a plugin for SRM today, but we are going to talk about the features built in to the array and not VMware SRM. Remember, all of these require more than one like array if you want to take advantage of replication. Also, replication and snapshots do not always give you an orchestrated failover and failback architecture. Most of these features come included with the array, and others may require paying to turn on a feature. Let me first say that all of these storage arrays are fantastic. They are very forward-thinking and each has a great place in the enterprise and cloud space.

Tintri logo

Tintri is built on an application-aware storage architecture. The array is purpose-built for virtualization. Tintri has three main features: clone, snapshot and protect. The cloning feature is pretty nice because it allows you to clone from any past or present snapshot (a Tintri storage snapshot) to another Tintri array or to the same one.

The array gives you a view of all the Tintri snapshots from the vCenter web client.

Tintri snapshots


You can set up a schedule to “protect” the VM with a snap and make it crash-consistent. You can keep this local to the array or replicate it to another Tintri array.

Tintri protect


If you replicate the snap to another array, you can view the bytes remaining to replicate and the throughput.

Tintri replication throughput

Tintri replication status


All of these features are great, but what are you going to do with that replicated copy once it is on the other side? There is no orchestrated way of bringing it online or doing reverse protection once you have it up. I’m sure there is a way to work with the PowerShell cmdlets to get something working, but that would require many man hours. Zerto does this for you. To replicate VMs from one location to another, two separately licensed products must be purchased: Tintri Global Center and ReplicateVM. In my use case, I would use something like this to replicate VMs to another datacenter so that I could import them into vCloud catalogs or work with a production VM offline. The cloning feature would be great for creating VDI sessions as well.
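To give you an idea of the man hours involved, here is a rough PowerCLI sketch of the manual recovery that Zerto would otherwise orchestrate: register each replicated .vmx at the DR site and power it on. The host, datastore and path names are examples, and reverse protection and re-IP are still entirely on you.

# Run against the DR-site vCenter. All names are examples.
Connect-VIServer -Server dr-vcenter.lab.local

$targetHost = Get-VMHost -Name 'dresx01.lab.local'
$replicaDs = Get-Datastore -Name 'tintri-replica-ds'

# Browse the replica datastore and register every .vmx found.
New-PSDrive -Name ds -PSProvider VimDatastore -Root '\' -Location $replicaDs | Out-Null
Get-ChildItem -Path ds:\ -Recurse |
    Where-Object { $_.Name -like '*.vmx' } |
    ForEach-Object {
        New-VM -VMFilePath $_.DatastoreFullPath -VMHost $targetHost | Start-VM -Confirm:$false
    }
# Expect the "I moved it / I copied it" question on first power-on.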


Tegile logo

I had a hard time finding any technical documentation on the Tegile web site. Most of what I found was marketing material. You will find plenty of whitepapers, solution briefs and customer stories, but there is not much on how replication functions. What I did find was a lot of product demos on YouTube. One walks through dedupe, compression and recovery. On the recovery piece, you can see that it is still a manual process; nothing automated like Zerto provides. Tegile did partner with Voonami to provide offsite replication with its array.

Tegile does have a file-level SMB 3.0 protocol, which can be used as a Zerto backup target. They have partnered with Veeam to provide the backup solution. Veeam does have a great set of tools.


Solidfire logo

SolidFire claims to be the only vendor delivering native snapshot-based backup and restore functionality usable with any object store or device that has an S3 or Swift compatible API. SolidFire now offers the SF2405 and the SF4805, with enhancements to its real-time replication offering and a storage replication adapter (SRA) for integration with Site Recovery Manager (SRM).

The real-time replication built in to the array is not a full DR solution. Investments must be made into SRM, and you must have like arrays at the source and destination. On the VMware side, this would require a 25-pack of licenses for SRM or vCloud Suite licensing on all of your hosts.


Nimble logo


I think Nimble has the most comprehensive tools when it comes to replication. Nimble has a post on Nimble OS 2.0 that walks through how to configure replication. This covers only the array-based replication. For backup, Nimble has included a set of tools for backup and recovery. They have also partnered with CommVault Simpana to provide a more comprehensive backup and recovery process. You will need to register on the Nimble website to get all the details in a best practice document. What it comes down to is that the CommVault solution is still a backup process. It is not a real-time replication product like Zerto, since it relies on array-based snapshots and does not sit at the kernel level like Zerto’s VRA. The recommended snapshot interval from CommVault is 15 minutes, a lot like vSphere Replication out of the box, although Nimble can take snapshots as frequently as every minute. The recovery process is still manual.

Nimble also integrates with VMware SRM. This is the DR method listed for the array in a VMware environment. They have also included a webinar demo of the array with SRM.


To sum it up, I think each one of these storage vendors has great potential to replace some virtualization backup solutions, but not to replace an orchestrated BC/DR solution like Zerto. When you look at one of these storage vendors, think about what your current backup solution does and how it compares to what the array provides. Zerto does provide an offsite backup solution with the product, but it does not provide dedupe or compression at the source today like Avamar. However, you do get dedupe when you use a backup target like Windows 2012 R2 with those features turned on, or a storage array that offers this. The target must be an SMB share. Or you can just use a TNT drive and back up to the cloud. What I do like about the Zerto offsite backup product is that it does the backup against the replicated VM at the target site. This reduces resource overhead at the source site. You would not need dedupe at the source since the backup does not happen there.
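Turning on those Windows Server 2012 R2 features on the SMB target is only a couple of cmdlets. A minimal sketch, assuming the D: volume hosts the Zerto backup share:

# Run on the Windows Server 2012 R2 box hosting the SMB backup target.
Install-WindowsFeature -Name FS-Data-Deduplication

Enable-DedupVolume -Volume 'D:' -UsageType Default

# Backup files rarely change, so let dedupe pick them up right away
# instead of waiting for the 3-day default file age.
Set-DedupVolume -Volume 'D:' -MinimumFileAgeDays 0

# Review the savings after the optimization jobs have run.
Get-DedupStatus -Volume 'D:'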

Think of it this way: if you get fed up with your current storage vendor and you want to move to something else, how would you go about reconfiguring a BC/DR plan? If you just use Zerto, you will not have to worry about losing any features because the product is storage agnostic! If you made investments into SRM, you might find yourself locked in to sticking with the same storage vendor. The storage array vendors treat SRM like a car dealer treats the third-party warranties they sell when you buy a new car: they may not tell you about another warranty company if it is not built in to the product line they are selling, so it is up to you to know there are other options out there. It is in the storage vendor’s best interest to get you tied in with SRM so that you will either buy more than one array or stick with the same vendor down the road and depend on the array replication features. If a storage vendor requires you to pay an additional license for a VM protection feature, it may be in your best interest to just stick with one solution like Zerto to reduce overhead. Sometimes technology overlap is unavoidable, but look to Zerto as the BC/DR solution for your virtualization environment.

I will be adding more vendors to this list as I have time.

Dell Compellent vSphere Web Client Plugin 2.0 permissions

I was working with the VMware vSphere web client plugin 2.0 for Compellent storage and I came across a small roadblock with the service account permissions. Getting the virtual appliance set up is pretty straightforward. Just get your head wrapped around what the CITV (Compellent Integration Tools for VMware) appliance is and how it interacts with the Enterprise Manager server “EM Server” (which talks to your storage controllers). CITV does not talk directly to your Compellent controllers.

Looking at the administration guide on page 3, you configure the vSphere web client plugin in vCenter with the service account you want to use.

Compellent account

The first thing you need to do is configure credentials for the CITV appliance to launch tasks in vCenter. I’m a fan of using dedicated service accounts for virtual appliances. This is a Windows account that needs to be specified. However, the administrator’s guide does not specify what level of permissions in vCenter this account needs. I sent an email over to Jason Boche at Dell and he did confirm that the account needs top-level “administrator” permissions for now. They are working on changing this in the next release.
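If you manage permissions from PowerCLI, granting that top-level Administrator role to the service account looks something like this (the account and vCenter names are examples):

# Grant the CITV service account Administrator at the vCenter root.
Connect-VIServer -Server vcenter.lab.local

$root = Get-Folder -NoRecursion          # top-level root folder
$adminRole = Get-VIRole -Name 'Admin'
New-VIPermission -Entity $root -Principal 'DOMAIN\svc-citv' -Role $adminRole -Propagate:$true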

Fiber status view in the web client

Now that I’m getting used to the vCenter web client, one status feature in particular caught my eye: the Fibre Channel status. In the web client, you actually get the status of the link, whether it is up or down. Wouldn’t it be fantastic to see more about the link status, like congestion or fabric speed? That info may have to come from 3rd party tools.

Typically you would check the path policy on a datastore to see if one side of the link is down (or both). This is something you could create an alarm for in vCenter, vCOPS or even 3rd party monitoring tools.
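A PowerCLI sketch of that manual check, which you could drop into a scheduled task until a proper alarm is in place; it flags any dead paths across all hosts:

# Report any dead storage paths on every host in the connected vCenter.
Get-VMHost | ForEach-Object {
    Get-ScsiLun -VmHost $_ -LunType disk |
        Get-ScsiLunPath |
        Where-Object { $_.State -eq 'Dead' } |
        Select-Object @{N='Host'; E={$_.ScsiLun.VMHost.Name}}, Name, State
}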

When checking the C# client you can view the HBA, but it does not give a status on the device.

HBA unknown - client

If you take a look in the web client at the same level view, you get a “status” column for the HBA.

HBA unknown - webv2

In my particular case I found the fiber cards were set to the default speed of “auto”. In this fiber fabric, the blade module was 8Gb, the Brocade directors were 16Gb and the storage controllers were 8Gb. It is best to set the HBA to the lowest speed in the fabric.

For a Dell M620 blade, this is how you set the fiber card speed. These options may vary slightly based on your version of firmware.

Enter the BIOS of the server (not the “Ctrl+Q” utility during the boot process).

enter bios

Go to the device settings.

configure device settings

You will then get a list of devices. Make sure to update both of the fiber channel cards.

device list

Select the port configuration page.

configure device port settings

The last step is to set the port speed for the HBA.

configure device port speed settings

After you have saved these updates, the vCenter web client should show the status as “online”. Different conditions may exist as to why your HBA is in an unknown state: it could be a bad fiber cable, it could be unplugged, or there could be a link configuration mismatch.

VMworld 2014 wrap up

Wow, another VMworld just went by? This year was great. So much is changing. These are the highlights from VMworld this year:

1. VMware introduced remote branch office (ROBO) server licensing. You can compare the two editions here.

2. VMware Workspace Suite for Horizon and AirWatch.

3. With 5.5 U2 you can now modify version 10 virtual machines with the vSphere client. The update also includes a few feature updates.

4. Bummer, vSphere 6 is still beta. You can sign up for the beta and test it out! Nice stuff like 4 vCPU FT, an improved web client, and vMotion across vCenters and virtual datacenters (just to name a few things).

5. Virtual Volumes (vVols).

6. VMware Integrated OpenStack (VIO).

7. New network certifications: VCP-NV, VCDX-NV and VCIX-NV. More certifications to add to the list!

8. vCloud Air. There will be a government services platform coming in September. In 2015 there will be an on-demand version, with a beta available soon. EMC ViPR has a tech preview of object storage.

9. EVO:RACK and EVO:RAIL. 6 vendors so far will be selling these setups.

10. VMware vSphere Data Protection 5.8 and vSphere Replication 5.8. SRM is also at version 5.8.

11. VMware vCAC 6.1 will arrive in September.

12. VSAN 2.0 is in beta.

13. The vRealize Operations suite.

There were also a lot more announcements around Horizon DaaS, vGPU from NVIDIA and Project Fargo.

This was a great VMworld. I had a chance to meet up with some really smart guys. It was a great pleasure to see what all the storage vendors are doing in the marketplace. It will take some time to digest all of these great announcements coming from VMware.


VMworld 2014 goals

Whether this is your first time or your twelfth, it is important to have a goal when going to VMworld. It is not just about the swag and attending the parties. Remember, this is a huge learning opportunity. At the solutions exchange, ask questions and put the vendors on the spot. The specialists are there to answer questions. See if you can get away with pulling the drive on some SAN arrays.

For me this year it is all about DR and storage. The market for host-based storage has exploded over the last few years, from PernixData to VSAN. All-flash and hybrid arrays have really taken off as well. There are some really awesome storage arrays like Tintri that make life really simple. There are new players on the market and old dogs with new tricks.

Take things one day at a time. Take notes on technologies you need to follow up on. You may forget them by the end of the day due to the information overload at the conference.

When good vCenters go bad

The idea of virtualizing the vCenter server is not new. I believe it was version 4.x that really started to push the virtual vCenter hard (the eat-your-own-dog-food approach). 5.x gave us the Linux-based vCenter virtual appliance. Even with the virtual appliance, there are special considerations to keep in mind when running a virtual vCenter. Although resource requirements have changed since 4.x, best practices around creating and placing the virtual vCenter have not really changed. Typically it comes down to understanding your vSwitch configurations when it comes to getting out of a jam with vCenter. In the past some have relied on vCenter Server Heartbeat, but that is EoA (End of Availability) as of June 2nd, 2014.

Mandvis has a couple of good posts on recovering a vCenter during an outage, and also on special considerations around using hardware version 10 on your vCenter server.

I would also like to point out a couple of other scenarios to keep in mind when placing your virtual vCenter server on a host and when it comes to recovery during an outage.

Scenario 1:

You have a blade chassis with different fabrics: fiber, 10Gb, and 1Gb management. Virtual machines are connected to the 10Gb fabric and host management is connected to the 1Gb fabric. Fiber channel storage is the primary storage for virtual machines, which traverses the fiber fabric. NFS volumes are mounted to house ISO files and templates, which traverses the 1Gb network. I had a situation where the network admin used 4 uplinks from each 1Gb fabric and properly split them between upstream switches. This would be a proper design (see diagram below). But instead of bonding the four 1Gb cables from each switch, only 1 cable out of 8 was active to the upstream switch. From a blade perspective, all NICs look active. So when we lost the network on the upstream switch, we lost management to the entire enclosure hosting VMware blades.

blade connections

This also affected the vCenter server, which had a CD-ROM-attached ISO file. The NFS mount was over the 1Gb network. This caused the VM to “pause” with a warning message…

Message on vCenter01: Operation on CD-ROM image file
/vmfs/volumes/16b2bd7c-1d7757ef/VMware/VMware-VIMSetup-all-
5.5.0-1991310-20140201-update01.iso has failed. Subsequent
operations on this file are also likely to fail unless the image file
connection is corrected. Try disconnecting the image file, then
reconnecting it to the virtual machine’s CD-ROM drive. Select
Continue to continue forwarding the error to the guest operating
system. Select Disconnect to disconnect the image file.

As you can see, the VM would not resume until action was taken on the CD-ROM from the host console. This required knowing which host the vCenter VM lived on. It is still best practice to create a DRS rule to keep the vCenter VM on a known host (sometimes the first host in the cluster is best); a sketch of that rule is below. We could not acknowledge this prompt from vCenter because the VM was in a paused state. Once the message was acknowledged from the VM, vCenter came out of its pause state.
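Creating that rule is quick from PowerCLI. A sketch using the DRS VM/host group cmdlets (these arrived in later PowerCLI releases, so older installs have to build the rule spec through the API); the cluster, VM and host names are examples:

# Pin the vCenter VM to a known host with a "should run" rule.
$cluster = Get-Cluster -Name 'Prod-Cluster'

$vmGroup = New-DrsClusterGroup -Name 'dg-vcenter' -Cluster $cluster -VM (Get-VM -Name 'vCenter01')
$hostGroup = New-DrsClusterGroup -Name 'dg-first-host' -Cluster $cluster -VMHost (Get-VMHost -Name 'esx01.lab.local')

New-DrsVMHostRule -Name 'vCenter-on-esx01' -Cluster $cluster `
    -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldRunOn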

Scenario 2:

The host freeze. Not a PSOD, but the hypervisor going into a hung state. I have only seen this happen once. Even from the DCUI you are unable to restart the management services, but the virtual machines continue to run. You are unable to log in to the host console to take action on any virtual machines. It is a “zombie host” state. I’m not sure if the host isolation elections even kicked in.

We accepted that the only course of action was to pull the power cord on the host server to force a failover. With this, HA should kick in and fail over the virtual machines. But even after powering off the host, the virtual machines stayed registered to it. Even a manual “unregister” was not accepted while the host was powered off; the host would not release the locks on the VMDK files. We had to remove the host from vCenter and then re-register the virtual machines to new hosts in the cluster. So it may have been a combination of the vCenter DB and the host isolation response. This was the first time I have seen HA not work properly. Even VMware support could not pinpoint the issue.

So what do you do when you are in a scenario like this, and vCenter is on the host that is in a hung state and will not release the locks on the VMDK files even after the host is powered off? I would imagine you would need to do something nasty to the storage volume to release those connections, or possibly restore vCenter from backups to another host server.

Of course, there are other recovery scenarios you have to keep in mind with vCenter: the DB becomes full, OS corruption, misconfiguration by other admins (like deleting the wrong SQL tables), no DB backups, or issues with any of the other components (like SSO) installed on your vCenter server.


VCAP-DCD 5.1 vs 5.5

– Edit 8/26/2014 –

So I had a chance to sit the VDCD550 exam on 8/24. Unfortunately, the exam crashed on me twice with only 30 minutes to go. I decided to continue on with the exam and receive a modified grade in a week, omitting the question that crashed the exam (I cannot say which one). I have now taken both the 5.1 and the 5.5 exams. The biggest question that has come up is “is it easier?”. Not really. Just because there are not as many questions as on the 5.1 exam does not make it easier.

I cannot go into detail on the type of questions I had. But I will say: read the exam blueprint. Pay attention to section 1.2, “a mixture of drag-and-drop items and design items using an in-exam design tool”. Do not expect questions that give you a single radio button and let you just move on. The blueprint tells you the style of questions you will have.

I found the design questions were not as hard on the 5.5 as they were on the 5.1 exam. They did a good job of cutting out the fluff and getting to the point. It was easy to read the scenario while you did the design. The important details will pop out at you.

I don’t think I am allowed to say what the master design question is. Again, read the blueprint. You have 5 “design” questions and one “master design” question.

As far as the content goes, the blueprint is what you need to focus on. Just reading about the topics in each section will not help you. You need to install and configure each component like VSAN, the vSphere Storage Appliance, Auto Deploy, the vCenter virtual appliance, vCenter Server Heartbeat, Update Manager and any other core product that ties in with vSphere. It helps to read the “what’s new” guide and go from there. Anything that can tie into 5.5 is fair game. Also brush up on the ITIL v3 documents listed in the blueprint. Be aware of storage architectures and how each one is different. Again, the blueprint tells you what you need to focus on. Get the theme? Study the blueprint.

I prepared by reading the vSphere design books, doing the Pluralsight videos, reading blogs, watching the vBrownbag sessions and making myself flash cards. I’m telling you though, you need to go through the design process yourself. Use your home lab and come up with a scenario to create conceptual, logical and physical designs. As crazy as it may sound, come up with a project to deploy vSphere and Oracle on a laptop and go through the design process. Yes, it will not work, but document the process. If you have a good home lab with multiple whitebox hosts and central storage, that will help.

If for some reason I do receive a failing grade, I will be creating a comprehensive study guide made from the exam blue print for VDCD550. I think the exam was a tough but good one.

Who do I think this exam is good for? Anyone who wants to achieve it. I have been doing architecture for the past 5 years at the same company. I have been in IT for the past 18 years. I have had projects from time to time that require some form of project documentation, but nothing as intense as a vendor coming in to deploy a new technology.

Who do I think would have no problem passing this exam? Consultants who work with many different customers to deploy solutions. It is their bread and butter to come up with a design to win business. Someone who does consulting for a living would have no problems with this exam.

Who do I think would have a tough time passing the exam? Admins who are not involved in the design process; usually a design is handed over and the admins are tasked to build it. Also architects who have been at the same place for a long time and know the infrastructure well. It is not a challenge to think about the storage or network architecture you work with every day unless the workloads for a project require something else. It would also be tough for admins or architects who do not have VMware as their only focus. Someone who must work on Microsoft or other platforms 50% of the time outside of the VMware infrastructure may have a hard time. If you have a project to deploy a large SharePoint environment leading up to the exam, passing may be a little tough.

But hey, I could be wrong. There could be some super smart guys out there on the helpdesk pursuing a CCIE or VCDX. It is up to you to know what you feel confident with. Rise to the challenge and defeat the exam!


_______

VCAP-DCD 5.1
– 225 minutes
– 100 questions
– 6 design questions

VCAP-DCD 5.5
– 195 Minutes
– 46 questions
– 5 design questions
– 1 Master design question

One big change I also see: there is no mention of blocking you from going back to review flagged questions. This is a big change, although time management may not allow a whole lot of time for review. I am guessing the design questions will still take 15 to 20 minutes apiece. The master design question needs 30 minutes.

The “Master design question” still remains a mystery.

If you are sitting the exam, you still have the option to cancel your current exam and reschedule for the 5.5 version. I had no problems at all with scheduling the new exam. Just make sure you do it before your cancellation window.

VDCD550: https://mylearn.vmware.com/lcms/web/portals/certification/VCAP_Blueprints/VCAP-DCD-VDCD550-Exam-Blueprint-v3_2.pdf

VDCD510: https://mylearn.vmware.com/lcms/web/portals/certification/VCAP_Blueprints/VCAP5-DCD-Exam-Blueprint-v3_0.pdf


Exam discounts: http://www.vmworld.com/community/conference/us/learn/training

You will still need to request authorization for the exam even if you were approved for 5.1. https://mylearn.vmware.com/