This is the moment many have been waiting for: deciding the delivery methods and the automation!

Let's call our example user Jim; he works for the sales department. The illustration below is from a Forward Path report, which is a virtually fully customizable decision-making engine where you can define decision trees, set up workflows, and execute scripts attached to those workflows. It becomes incredibly powerful when you combine its grouping functionality with its calculation capability.

In this example the applications are grouped by organizational unit, including Sales, where Jim works. One big difference from other AppDNA reports is the RAG status. Here it doesn't mean the same as in other reports; instead it reflects whatever you have defined for your own deployment scenarios. For example, it can indicate your order of preference.

Additionally, there are custom columns, meaning you can set values that define platforms, delivery methods, packaging types, or anything else you can imagine. A good way to use the custom columns would be something like the example below, but the point is to use them in a way that serves your purpose.
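
As a rough illustration (plain Python rather than anything AppDNA-specific, and with column names and values that are purely my own assumptions), a set of custom columns could carry information like this:

```python
# Example values for a handful of custom columns; the column names and the
# allowed values are assumptions for this example, not anything AppDNA mandates.
CUSTOM_COLUMNS = ["Target platform", "Delivery method", "Packaging type"]

applications = [
    {"Application": "LegacyCRM",  "Target platform": "Windows 7 x64",
     "Delivery method": "Local install", "Packaging type": "MSI"},
    {"Application": "ReportTool", "Target platform": "Windows 7 x64",
     "Delivery method": "App-V",         "Packaging type": "Sequence"},
    {"Application": "TimeSheet",  "Target platform": "XenApp 6.5",
     "Delivery method": "Published app", "Packaging type": "MSI"},
]

for row in applications:
    print(row["Application"], "->", ", ".join(row[c] for c in CUSTOM_COLUMNS))
```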

How does all this relate to deciding the delivery methods? It is the Outcome column. Let's say we have defined Jim's scenario. Remember, he needed old-school methods (except for that one app), and this is where the expected user experience comes into play. For example, the first application on the list may very well work on top of 2003 R2 and could be published from a XenApp farm, but Jim needs this app while he is flying from New York to Melbourne, so that is not going to happen. If it works on 2003 R2 it would probably work on Windows XP, but once again Jim wouldn't be happy carrying two laptops, and both options would in any case be something that shouldn't be used. Yes, I know, there is XenClient, and there is always the option of carrying the datacenter in a backpack, but the point here is that applications should not be delivered by any means possible. If an application doesn't technically fit the user scenario, we need to re-evaluate things instead of just delivering it somehow. As a result this application drops into the "Re-evaluate" workflow. As you can see, the rest of the applications are assigned to different workflows, and the RAG status follows the workflow: green is the most preferred and red means we need a plan B.
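
To make the decision logic concrete, here is a toy sketch of the kind of rule that could sit behind the Outcome column. The attribute names, the workflow names and the RAG mapping are assumptions made up for this example; in AppDNA the real logic lives inside the Forward Path scenario itself.

```python
# Toy version of a decision tree feeding the Outcome column and the RAG status.
from dataclasses import dataclass

@dataclass
class AppRow:
    name: str
    needs_offline: bool             # Jim uses it on the flight from New York to Melbourne
    publishable_from_xenapp: bool   # e.g. it runs fine on 2003 R2
    appv_candidate: bool            # App-V sequencing for Windows 7 looks feasible
    clean_msi: bool                 # imported as MSI and analysed as good to go

def decide(app: AppRow) -> tuple[str, str]:
    """Return (outcome, RAG); green = most preferred, red = time for plan B."""
    if app.needs_offline and app.publishable_from_xenapp and not app.appv_candidate:
        # Technically deliverable from a XenApp farm, but that breaks Jim's
        # offline scenario, so the app drops into re-evaluation instead.
        return "Re-evaluate", "Red"
    if app.appv_candidate:
        return "Workflow 1.0", "Green"   # automate App-V sequencing for Windows 7
    if app.clean_msi:
        return "Workflow 1.1", "Amber"   # publish the MSI as is
    return "Re-evaluate", "Red"

print(decide(AppRow("LegacyCRM", needs_offline=True,
                    publishable_from_xenapp=True,
                    appv_candidate=False, clean_msi=False)))
# -> ('Re-evaluate', 'Red')
```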


It wouldn't be cool without automation, right? Once again the Outcome column is the key, as each of these outcomes can represent a workflow, and a workflow means we need to do something.

One of the key features of AppDNA is automation, which relies on scripts that can be associated with these outcomes. In this example the scripts are described below. In short, it would work as follows (rough, hypothetical sketches are included after each description):


Re-evaluate:

The application did not fit the scenario, so we stop working on it and notify the application owner, explaining why. We also save a report to a file share where the application owner can see the reasons why we are not delivering. The report could even be the application remediation report if the workflow notification were sent to the packaging team to start manual repackaging.
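
As a rough sketch of what such a script could do (plain Python with a hypothetical file share, mail server and addresses, not AppDNA's own scripting), it might look like this:

```python
# "Re-evaluate" outcome sketch: park the application, archive its report and
# tell the owner why. The share, mail server and addresses are hypothetical.
import shutil
import smtplib
from email.message import EmailMessage

REPORT_SHARE = r"\\fileserver\appdna-reports"   # hypothetical file share
SMTP_HOST = "smtp.example.local"                # hypothetical mail server

def handle_reevaluate(app_name: str, owner_email: str, reasons: list[str],
                      report_path: str) -> None:
    """Notify the application owner and drop the exported report on the share."""
    shutil.copy2(report_path, REPORT_SHARE)     # copy keeps the original file name

    msg = EmailMessage()
    msg["Subject"] = f"{app_name}: delivery on hold, re-evaluation needed"
    msg["From"] = "appdna-automation@example.local"
    msg["To"] = owner_email
    msg.set_content(
        "The application does not fit the agreed user scenario:\n- "
        + "\n- ".join(reasons)
        + f"\n\nFull report: {REPORT_SHARE}"
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
```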

Workflow 1.0:

The application fits the scenario and we want to automate App-V sequencing for Windows 7. Each application falling into this workflow would be processed by calling a virtual machine where we launch the sequencer and push the installation commands, controlling the sequencer and all other tasks from the command line, automatically. Eventually, the application could even be published automatically.
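
To make that a bit more tangible, here is a minimal sketch of the sequencing step, assuming it runs on the dedicated sequencing VM. The sequencer path and the command-line switches are placeholders; take the real ones from the documented command-line interface of the sequencer version you actually use.

```python
# Unattended sequencing sketch, intended to run on the sequencing VM.
# SEQUENCER_EXE and the switch names below are assumptions for illustration.
import subprocess
from pathlib import Path

SEQUENCER_EXE = r"C:\Program Files\AppV Sequencer\SFTSequencer.exe"  # assumed install path
OUTPUT_SHARE = Path(r"\\fileserver\appv-packages")                   # hypothetical share

def sequence(app_name: str, installer: str) -> Path:
    """Drive one sequencing pass from the command line and return the package path."""
    output = OUTPUT_SHARE / app_name / f"{app_name}.sprj"
    output.parent.mkdir(parents=True, exist_ok=True)
    cmd = [
        SEQUENCER_EXE,
        f"/INSTALLPACKAGE:{installer}",   # illustrative switch names only
        f"/OUTPUTFILE:{output}",
    ]
    subprocess.run(cmd, check=True)       # fail loudly so the workflow can flag the app
    return output

# e.g. sequence("ReportTool", r"\\fileserver\installers\ReportTool\setup.msi")
```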

Workflow 1.1:

The application was not a good fit for sequencing, but as the import format was MSI and the analysis shows it is good to go as is, all we need to do is publish the application. We just send a notification to the person who will continue the workflow. Once again, it would be possible to script the publishing if we wanted to.
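
Tying it together, the sketch below maps each outcome to a handler and pushes every report row through it. The handlers are stubs standing in for the scripts sketched above (for Workflow 1.1 the stub simply represents the notification to whoever continues with publishing); the outcome names match this example, everything else is assumed.

```python
# Sketch of tying Forward Path outcomes to scripts: each outcome maps to a
# handler, and every application row is pushed through the matching handler.
from typing import Callable

def reevaluate(app: dict) -> None:
    print(f"{app['name']}: park the app, notify {app['owner']} with the report")

def workflow_1_0(app: dict) -> None:
    print(f"{app['name']}: hand off to the automated App-V sequencing VM")

def workflow_1_1(app: dict) -> None:
    print(f"{app['name']}: MSI is good as is, notify the publisher to go ahead")

HANDLERS: dict[str, Callable[[dict], None]] = {
    "Re-evaluate": reevaluate,
    "Workflow 1.0": workflow_1_0,
    "Workflow 1.1": workflow_1_1,
}

def process(report_rows: list[dict]) -> None:
    for app in report_rows:
        HANDLERS[app["outcome"]](app)

process([
    {"name": "LegacyCRM",  "owner": "jim@example.local", "outcome": "Re-evaluate"},
    {"name": "ReportTool", "owner": "jim@example.local", "outcome": "Workflow 1.0"},
    {"name": "TimeSheet",  "owner": "jim@example.local", "outcome": "Workflow 1.1"},
])
```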


As a result, we have processed all of the applications as far as we could. If our scripts were state of the art, we would already have half of the applications published to the new environment, and the customer would be very happy with the fast progress. Remember, in a manual process a packager can deliver on average one application per day, and that's slow. We have also created a clear path for the applications that need some action; they are already moving forward in the process with detailed remediation reports, so packagers are not required to reinvent the wheel. Instead they just follow the reports, fix the apps and send them to publishing. Finally, the ones we can't deliver sit in a specific workflow for re-evaluation.

This also means that we have achieved a dramatic cost and effort reduction for the project, as we immediately cut off half of the applications; they were just unnecessary workload for the packagers.

So, in the end we processed all the applications without manually touching any of them, and we didn't worry about technical issues at all. Of course, not all of the applications are available to users before remediation, but you get the point…

As all of this is based on scripts, you can use your imagination as to what else you could do. I might think about automating XenDesktop site creation, virtual machine creation, machine catalog creation, desktop delivery groups, Active Directory group creation, adding users to groups, and application publishing. But what if the customer is moving to the cloud and there is no on-premises infrastructure to push these things to? Go ahead and include that here too!

I would be happy to share how it could be done, but unfortunately I can't, because it would be a custom script, and it is your task to create your own scripts for your own environment. All I will say is that it is possible.

We are not done yet; there are still the applications we could not process. Let's say that significant redevelopment is needed to make them work. Would you like to know how much money you need to spend on remediating those apps? Of course! Would you like to know whether it would be cheaper to replace an application than to fix it? Of course!

As we know a whole bunch of things about the applications, their issues, the workforce, costs and timeframes, we can of course calculate the remediation cost per application. We can even create rules based on different factors, such as the image version the application is going to be implemented on, who is going to use the application, or how many users the application has. It is just a matter of setting up the calculations correctly. I am not going to go deeper into that area here because it is huge and may be worth another series of posts.
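
Purely as an illustration of the kind of rule you could set up (every figure and factor below is made up for this example, not a real benchmark):

```python
# Back-of-the-envelope remediation cost rule with assumed figures: effort per
# issue, hourly rate, and multipliers for the target image and the user count.
HOURLY_RATE = 80.0       # packager cost per hour (assumed)
HOURS_PER_ISSUE = 1.5    # average remediation effort per reported issue (assumed)

IMAGE_FACTOR = {"Windows 7 x86": 1.0, "Windows 7 x64": 1.2, "Server 2008 R2": 1.3}

def remediation_cost(issues: int, image: str, users: int) -> float:
    """Cost of fixing one application under this example's made-up rules."""
    base = issues * HOURS_PER_ISSUE * HOURLY_RATE
    weight = IMAGE_FACTOR.get(image, 1.0)
    priority = 1.1 if users > 500 else 1.0   # widely used apps get extra testing time
    return round(base * weight * priority, 2)

# Compare the result against the price of simply replacing the application:
cost = remediation_cost(issues=12, image="Windows 7 x64", users=800)
print(cost)   # 1900.8, to be weighed against a known replacement licence cost
```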

Just look at the illustration below, think about where we ended up, and ask yourself what kind of decisions you would make with this kind of report.