A perplexing question: If you recently shipped the product and you now know something is broken, do you tell anyone about it?
I was in the lab yesterday working with two nice folks on the Citrix test team, developing some App Streaming scripts for show and tell at the upcoming Citrix Sales Summit. We encountered some things in scripts that did not work the way they should. Technically, they did not work the way that I expected them to work, and conveniently, I'm the architect of this gig, so I get to call that busted. This post describes the situation so you won't be surprised when you encounter the problem. It also provides workarounds and then debates the merit of "full and public disclosure".
Background on scripts
Scripts come in four forms, the result of a 2 by 2 matrix of these capabilities:
- Pre-Launch vs Post-Exit and
- Inside isolation vs Outside isolation
Pre-launch scripts run before the FIRST application of an execution image (sandbox) is run, and post-exit scripts run AFTER the final application of a sandbox terminates. Inside and outside of isolation are pretty self-explanatory. Scripts are further elaborated: there can be PROFILE level scripts and there can be TARGET level scripts, targets can choose whether they want to use the profile scripts or the target scripts at execution, and there can be an unlimited number of scripts of any of these types.
The configuration data in the Target tells the isolation system whether to run the profile level scripts or the target level scripts. Both profile and target level scripts can exist, but even if the target has scripts, that doesn’t mean that they will be used. The profile/target setting tells the client which set of scripts to run.
Scripts are BINARY. They might be .BAT files or they might be .EXEs; the Streaming Client treats them all as binary. There is no concept of editing scripts because they are assumed to be executable content, or perhaps DLLs. To edit scripts, you are forced to delete them from the profile, edit them someplace else, and then add them back to the profile. That's a hassle, but it is not what this post is about.
What’s the bug?
"Stale scripts" – If, as an administrator, you update the scripts, you expect the streaming client to use the updated scripts when the user later runs the applications from that profile. This is broken for profile level scripts. Read on…
Target level scripts are global to the target, or more precisely, to the particular revision of the target. Let's assume you are using Target level scripts – all is good. The scripts themselves are stored in the .CAB file that provides the execution content for the target, and this is hard to mess up because the CAB filename is versioned and the expanded contents of the CAB in the RadeCache are also versioned via the naming of the RadeCache subdirectory. Target level scripts are not the bug here. Notice that you cannot update the scripts for a Target without bumping the revision on the Target, so all is good. Part of this is a positive fallout of all of the target contents being stored in a CAB file for easy transport.
Profile level scripts are SUPPOSED to be global to the profile. Well, they are global to the profile. In theory, profile level scripts are global and they have effect over all execution Targets. This provides a SINGLE POINT OF MAINTENANCE for scripts no matter how many execution targets your profile may have. For example, if you need to NET USE a network drive before running an application and if you have 4 execution targets, you will want to use profile level scripts because this is a single set of files to maintain. Then, if the server location changes for the “net use”, you have to update only the profile level scripts and all is good for ALL of the Targets. This is how it is supposed to work.
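To make the "net use" scenario concrete, here is a minimal sketch of what such a profile level pre-launch script might look like. The server and share names are made up for illustration, and I haven't run this against a live streaming client, so treat it as a sketch rather than a tested recipe:

```bat
@echo off
rem Hypothetical profile level pre-launch script (prelaunch.bat).
rem Maps a drive letter before any streamed application in the
rem profile starts. \\fileserver01\appdata is an invented name.
if not exist Z:\ net use Z: \\fileserver01\appdata /persistent:no
```

Because this lives at the profile level, the mapping logic exists in exactly one place: if the server location changes, only this one file needs updating, no matter how many execution targets the profile has.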
My experience yesterday says that this is not how it is working (Streaming Client 1.2 == XenApp 5.0). Step 1: Ask the testers to write a CPR. Step 2: Get the development team to fix it. Step 3: Wait 6 months to a year to get it out the door. Hmm. That last step sucks.
If I keep quiet, maybe nobody will notice! Hey, scripts are kinda rare anyway, and this will only come up if the admin is using profile level scripts, and will only show itself if they UPDATE their profile level scripts without also revving the Targets, so let it be. My scruples can't do it. If I were getting hit by some bizarre bug, I'd like to hit Google and find out that someone else has seen the issue before and knows how to work around it. This situation is different only because the person hitting the strange behavior is the one who has self-motivation to make things appear as bug free and polished as possible. They are polished! Trust me – but I'm still telling you about this odd behavior.
Details on the bug
When the Streaming Client decides to run Target level scripts, it extracts ALL of the scripts from the execution CAB on the network server and places them into the streaming execution cache on the execution machine. These land in RadeCache\GUID_version\Scripts. They all land in a single directory, and we don't guarantee where that directory will be, but we do insist that all of the scripts be PRESENT in the single directory before ANY of the scripts are run. This makes it possible to have DLLs or data files as "scripts" by defining them in the Streaming Profiler but marking them disabled. I digress – back to the bug.
When the Streaming Client decides to run Profile level scripts, in concept it simply runs the scripts from the network server and calls it good. The scripts are already all in the same directory and the network server is accessible, so a local copy isn't needed. Run them from the server, done. I expect where things "changed" in the 1.2 client is that HTTP is now a delivery method for scripts, which means that the scripts must be copied to the local machine before they can be executed.
In this case, there is no extraction because the scripts are merely files located on the network server in the "Scripts" subdirectory beneath the profile's top directory. Side note – just placing files in this directory will not convince the streaming client to treat the files as scripts. It will only execute scripts that were defined in the Streaming Profiler and will only copy for execution scripts that it believes are scripts. This was done on purpose, for complicated reasons primarily centered around digital signatures. If it ain't signed, don't run it. That's good stuff – if only people had crypto infrastructures in place. More digression.
The Streaming Client is SUPPOSED to copy the contents of the server side Scripts directory to the client, somewhere, and execute the scripts from there. Where on the client machine? I don’t care! But don’t put it into the execution Target!
The Streaming Client is copying the contents of the profile level scripts into the scripts directory for the Target inside the RadeCache. This is bad.
Consider this sequence:
- Admin creates a Profile with single Target and a single pre-launch script defined at the profile level.
- User runs the application.
- Streaming Client copies profile level script to the Target RadeCache\guid_v\Scripts directory.
- Script runs. Client machine goes about its merry business and eventually the user closes the application.
- Admin UPDATES the profile level script. Admin does not update the execution target.
- User runs the application.
- Streaming client says "WOW! I already have the script; there is no need to copy it from the server" (bug).
- Streaming Client has now run a “stale” version of the pre-launch script and nobody is aware.
- Next step – possibly: Admin updates the execution Target.
- User runs application and now gets the PROPER profile level script, must be magic!
Where the code broke:
Joe's rules of the Streaming Client – NOTHING goes into the RadeCache\guid_v directory that did not come from the CAB file on the network server. Scripts at the profile level need to be copied local for execution – true. They do not need to go into the RadeCache directory on the client.

Is another directory needed for the profile level scripts? Yup! Where should it go? I don't care, but it should be someplace writable by the Streaming Service and not writable by ordinary users. It should also "hang around" between executions so that the streaming client can diff the local scripts with the network server to see if anything was updated.

This may require putting file date/time stamps into the XML data and accompanying files that describe the profile, so that a network side diff isn't needed. Conveniently, we already have this date/time data in other files with the profile, so the delivery method of network vs. HTTP will not be an issue. What it will mean long term is that merely updating files on the network server is "not good enough" if you expect the updated profile level scripts to get executed on clients. The Streaming Profiler needs a chance to store file information about the scripts so that it will know when profile level scripts are updated – to trigger clients to bring down the new versions of the files.
The write-up above is what I normally put into the Citrix defect tracking system. This is what I'd call a good bug report – one which developers can actually use to see the bug and know how to fix it, and from which testers can then write additional test cases so this doesn't get out the door busted a second time.
Workarounds
1. Use only Target level scripts. This is fine if you have only one target. If you want profile level scripts, this is no good.
2. Update the Targets when updating the profile level scripts. Works, but causes target upgrades unnecessarily.
3. Write a script to erase all files in the script directory and run this script as the final pre-launch script and the final post-exit script. This will force the streaming client to pull down the script content from the network server on each run. I haven't technically tried it, but this should be the preferred workaround. It's a neat script too! Batch file with: "echo Y|del .". This is probably better written as "if exist ..\scripts echo Y|del ..\scripts\.". To be clear, this would not be needed for Target level scripts.
4. Erase the RadeCache\GUID_v\Scripts directory via a software management system on client workstations when you update the profile level scripts. This is much like "3", but prevents copying of script files on each launch, so it is better.
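The self-erasing workaround can be written as a complete batch file. This is a sketch built from the commands quoted above; as noted, I haven't actually tried it, and it assumes the script executes with the cached Scripts directory reachable as "..\scripts" relative to the working directory:

```bat
@echo off
rem cleanup.bat - hypothetical sketch of the self-erasing workaround.
rem Run as the FINAL pre-launch script and FINAL post-exit script.
rem Deleting the cached scripts forces the streaming client to copy
rem fresh script content down from the network server on each run.
rem The "echo Y|" answers the Are you sure (Y/N)? prompt that "del"
rem raises when asked to delete a directory's entire contents.
if exist ..\scripts echo Y|del ..\scripts\.
```

Note that this erases the cleanup script itself along with the rest, which is fine: the client copies the whole script set down again before the next launch.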
Thoughts on full disclosure
Why did I paint my horse ugly? If I were an admin encountering something like this, I would want to know the cause and I would want to know the workarounds. This post should help. Let me know if it was useful.
Citrix Systems – Product Architect, Application Streaming