I still hear a lot of talk and debate about using traditional PC management products (e.g. Microsoft System Center Configuration Manager, BigFix, LANdesk) in virtualized environments.
Some admins/architects cannot understand why anyone would move away from these existing tools just because their desktops and apps are moving into a virtual world. Others have the opposite reaction, and can’t believe there are shops “still using those old traditional tools” in their new virtual environments.
The reality is that there are reasons to keep agent-based management systems like SCCM in virtualized Citrix XenApp/RDSH and VDI environments, and reasons to dump them in favor of new technologies such as app layering that are designed for virtualized datacenters and the cloud. As you’ll see here, it really all depends (sorry, I was a consultant for too many years… it ALWAYS depends).
Why Companies Manage Their Virtual Environments with ‘Traditional’ Management Tools
So, why do some organizations keep using SCCM (or the like) when they move to virtual?
The best reason is simply that it works. If your org has image management, app packaging, and patch management down to a science, if it’s efficient, if it doesn’t cost the company too much, and if it works almost every time… what would motivate you to change? You’re able to manage all Windows machines the same way (whether physical or virtual) and continue to use existing tools, reporting mechanisms and processes no matter what platform Windows is running on.
Great, right? Sure — if all of the above about being efficient, working every time, not costing much, etc. are true. But this is rarely the case. That’s why it’s more common for organizations to change their management tools, or at least split them so they have a new set of tools for the virtual environment and the agent-based tools for physical. Which leads me to…
Why Companies Use New Tools in Their Virtual Environments
There are a number of reasons why a company that already owns an existing agent-based tool would spend the money, time, and effort to implement new tools just for its virtual environment. Some are simply political, but I won’t address those here. I mean, I’m not going to tell you how to solve for the fact that the desktop admin doesn’t like the application packaging/SCCM team and therefore didn’t include them in the project. Those issues definitely exist, but we’re here to talk about the technical arguments for changing management tools.
In my experience, there are generally three reasons why VDI and session-based computing environments end up being managed differently from physical environments.
- Application Packaging is Hard
Calm down… don’t get worked up… because if your company has an actual, dedicated, application packaging team you are probably going to scoff at this section. But the reality is that the average IT person, Windows Admin, or Virtualization Pro has about as much of a chance of successfully packaging and distributing a complex application on their first try as I do of making a complicated Rube Goldberg machine work flawlessly the first time.
Getting the system set up, agents deployed, and applications packaged correctly for silent install and flawless execution are not easy things to accomplish. If they were, then why do large companies need dedicated application packaging teams?
Even if you’re large enough to have a dedicated team, your org is probably still looking for a simpler, easier, and faster way to package and manage your applications. This is how you become more agile and competitive as a business, not to mention the costs you can save by reducing the number of people and the level of IT expertise needed for app packaging/delivery.
Technologies like application layering, which deliver apps either by attaching virtual disks at logon or by merging them into layered images distributed by image sharing tools like Citrix PVS or MCS, make app management extremely simple for IT staff. They also minimize the need for consultants or dedicated application packaging resources. Why? Because layering just requires an ordinary install that almost anyone in IT can do. There’s no longer any special “packaging” needed. And the same technology can be applied to the operating system and to the patching and updates Windows requires.
- Storage is Still Expensive
An advantage of block-based image sharing tools like PVS, in combination with app layering, is a dramatically reduced storage footprint. Together they typically allow a 10:1 (or greater) reduction in the amount of writable storage an organization needs to support a virtualized environment.
This storage reduction is not available natively when using a typical agent-based management model. Instead, each VM is a “full” persistent VM with its own copy of the OS, apps, etc. So, instead of using only a couple of GB per machine, a traditionally managed VM may need 30, 40 or more GB of writable, high performance storage.
App layering adds to these savings, as the app layer virtual disks are typically mounted read-only and shared by hundreds, if not thousands, of VMs. When you compare attaching a single app layer virtual disk to 1,000 VMs with installing the same 1 GB application 1,000 times on 1,000 VMs using SCCM, you can easily see the storage savings.
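To put rough numbers to that comparison, here’s a back-of-the-envelope sketch. The per-VM sizes are illustrative assumptions, not measurements from any particular deployment:

```python
# Back-of-the-envelope storage comparison (illustrative numbers only).
VMS = 1000
FULL_VM_GB = 40        # assumed persistent VM: full copy of OS + apps + data
DELTA_GB = 2           # assumed per-VM write cache with PVS-style image sharing
APP_LAYER_GB = 1       # assumed size of one shared, read-only app layer disk

persistent_total = VMS * FULL_VM_GB               # every VM carries a full copy
layered_total = VMS * DELTA_GB + APP_LAYER_GB     # one shared layer + small deltas

print(f"Persistent VMs: {persistent_total:,} GB")
print(f"Shared image + layers: {layered_total:,} GB")
print(f"Reduction: {persistent_total / layered_total:.0f}:1")
```

Even with a conservative write-cache size, the shared-image model lands in the same 10:1-or-better range mentioned above.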
At this point, some may ask, “Why not use block-based image sharing WITH SCCM to deliver apps and do inventory? After all, SCCM can be deployed IN the image.” This is true. SCCM and other traditional agents can be deployed in the image and even have their unique ID reset for deployment.
The issue isn’t with the agent. It’s that the image is updated and/or the desktops are set as non-persistent and refreshed at each logoff. In both of these cases, the “desktop” can essentially be destroyed and rebuilt constantly. This may happen after each logoff, or whenever the image is updated (patched). This creates an environment where the app delivery system is constantly trying to install apps to these “new” desktops. Imagine at each login the agent kicking in and having to deliver additional apps… every… single… time… Ignore the failure rate potential here and just think of network, CPU and disk load (not to mention the user experience at login) as packages are installed over and over again.
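The cost of that repeated delivery is easy to estimate. This sketch uses hypothetical figures for user counts, logons per day, and payload size:

```python
# Cost of re-delivering apps to non-persistent desktops that are rebuilt at
# logoff. All figures are illustrative assumptions, not measurements.
USERS = 1000
LOGONS_PER_DAY = 2        # assumed: morning logon plus one afternoon reconnect
APP_PAYLOAD_GB = 1        # assumed size of the packages the agent reinstalls

# A persistent desktop pays this cost once per update; a refreshed desktop
# pays it at every single logon.
one_time_gb = USERS * APP_PAYLOAD_GB
daily_gb = USERS * LOGONS_PER_DAY * APP_PAYLOAD_GB

print(f"One-time install: {one_time_gb:,} GB total")
print(f"Reinstall at logon: {daily_gb:,} GB every day")
```

That daily traffic hits the network, the storage, and the user’s logon time all at once, which is exactly the load described above.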
Basically, the agent-based install models do not play well with non-persistent desktop and image sharing tools. For IT shops looking to get storage savings to keep CapEx down, using your existing agent-based management tool is going to increase the upfront capital costs for the entire project.
- CPU, Network, and Disk Need to be Over-Provisioned
One of the most discussed, and sometimes most overstated, issues is the impact of agent-based management tools on resource consumption in virtual environments. Don’t get me wrong: if you distribute a patch and a bunch of new updates to thousands of machines simultaneously in your virtual environment, this CAN cause major issues. BUT this is also something that COULD be solved with extra capacity designed in (more $) and smart scheduling of the updates (more complexity).
The real problem, though, is that this issue often isn’t thought about until it’s too late. Generally, that’s after the pilot is done and tons of VMs have been rolled out, when a big update brings the storage to its knees or machines start failing as the host is starved of resources.
The basics are this: A virtualization host and its external infrastructure are built on resource sharing. No VM has full access to the host’s CPU, memory, network or disk, because rarely do VMs (even heavily used desktops) use 100% of any given resource all the time. And when they do “peak,” it tends to be at different times. This allows for spikes in individual VMs but pretty level overall host utilization.
Agent-based app distribution models assume that a Windows machine is essentially a standalone machine with full access to all the resources it needs. So, if an application is pushed and installed by an agent, let’s assume it uses 20% of CPU and averages 150 IOPS for 10 minutes. If your design assumes 100 VMs per host and all of them receive the update at the same time, all attempting to use the resource numbers above, you are probably going to swamp the server CPU, swamp the disk (or the connection to the disk), and possibly cause some of the VMs to fail or bluescreen when the storage is not responsive.
Can this happen? Sure thing. Does it? Yup. You can avoid it, but it means building in extra capacity to handle that load ($$$) or being very careful to schedule your updates so only a percentage of the machines are updated at any given time. And this, of course, increases the maintenance window size and level of expertise needed since all the machines cannot be done at once.
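Using the hypothetical figures above (100 VMs per host, 20% CPU and 150 IOPS per 10-minute install), the trade-off between an all-at-once push and a staggered rollout looks roughly like this:

```python
# Aggregate load when an agent pushes the same update to every VM on a host,
# versus staggering it in batches. All figures are illustrative assumptions.
VMS_PER_HOST = 100
CPU_PER_INSTALL = 0.20       # assumed: 20% of one vCPU during the install
IOPS_PER_INSTALL = 150       # assumed average IOPS during the install
INSTALL_MINUTES = 10

# Everything at once: the host absorbs the full aggregate spike.
peak_iops = VMS_PER_HOST * IOPS_PER_INSTALL        # aggregate IOPS demand
peak_cpu = VMS_PER_HOST * CPU_PER_INSTALL          # vCPUs' worth of CPU work

# Staggered: cap concurrency at 10% of the host, stretching the window.
BATCH_FRACTION = 0.10
batches = round(1 / BATCH_FRACTION)
window_minutes = batches * INSTALL_MINUTES         # total maintenance window
staggered_iops = peak_iops * BATCH_FRACTION        # peak IOPS per batch

print(f"All at once: {peak_iops:,} IOPS, ~{peak_cpu:.0f} vCPUs busy")
print(f"10% batches: {staggered_iops:,.0f} IOPS, {window_minutes}-minute window")
```

The spike shrinks by 10x, but the maintenance window grows by the same factor, which is exactly the cost/complexity trade-off described above.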
For a tool set that is already complicated and hard to use, this is usually the last straw for most admins, which is why they jump to new tools like Citrix App Layering, in combination with the tools provided by Citrix or VMware, to avoid these issues altogether.
Just Not Designed for the Virtual World
The reality is that most agent-based application and image management systems were not designed for the virtual world, and certainly not for the cloud. They were designed for an era when a PC sat under your desk and had complete access to its own isolated resources.
In the virtual environment (on-prem or cloud) resources are at a premium. And as companies strive to get higher and higher densities from machines in order to reduce costs, the margins available for performance hits become smaller and smaller.
Couple that with the new dynamic world of mobile Windows workspaces, where machines are constantly created and destroyed, apps are delivered and removed, and users can log in and out of any machine and get their “stuff”, and you can see why so many IT architects are concluding that agent-based management can’t always make the leap to the cloud. In my case, I look to other technologies like App Layering. If you are looking for an alternative for your application and image management, you can download and try Citrix App Layering today from your Citrix Cloud portal.