I previously wrote two blog articles on properly configuring Citrix Profile Management and Folder Redirection and architecting it so that it scales for large environments.  If you have not yet read the previous two articles, then I would suggest you read them first.  You can find them at the links below:



In this third and final installment of this series, I will provide some guidelines on IOPS and network bandwidth requirements for the file servers or NAS devices hosting Citrix Profile Management and Redirected Folders, which should help you determine how many users you can place on a single file server or NAS device.

First, I need to put out the standard disclaimer that your mileage will vary!  Trying to determine how many users you can get on a file server is like trying to determine how many users you can get on a XenApp server, or how many virtual desktops you can get on a single server.  Depending on what your users are doing, the applications they are running and the hardware specifications of the server, the number of XenApp or XenDesktop users per server can vary greatly, and the number of users per file server can vary just as much for the same reasons. With that being said, I will provide some real world numbers from what I have personally seen at customers in the field, along with data from testing I have performed and benchmarks of my own personal IOPS consumption on my desktop.

File Server Performance and Scalability

Before we get into the requirements for Profile Management and Folder Redirection, let’s first go over some basic details about file server performance and scalability.  There are a lot of things that will affect how well a file server or NAS appliance will perform and scale.  Some of the key elements that will determine the scalability include:

  • How many IOPS does the physical storage support? IOPS capability is driven by the following:
    • How many disks/spindles are in the RAID set?
    • What kind of RAID is used (RAID 1, 5, 6, 10, etc.)?
    • How large is the cache on the RAID controller and how is it configured?
    • Are writes optimized or first sent to tiered SSD storage?
  • How much RAM does the server have for CIFS read caching?
  • How fast are the Network Adapters?
  • How many CPU cores are there?
  • What version of CIFS/SMB is being used: 1.0, 2.0, or 2.1?
  • Has the CIFS protocol and TCP been properly tuned?
  • Check out my previous blog on CIFS tuning /blogs/2010/10/21/smb-tuning-for-xenapp-and-file-servers-on-windows-server-2008/

If you are using Microsoft Windows 2008 R2 file servers or clusters (physical or virtual), then please make sure that you do the following at a minimum:

  1. Give the file server at least 32 GB RAM (preferably 64 GB).
  2. Give the file server at least 2+ cores/vCPUs (preferably 4+ if it will host a lot of users).
  3. Implement all of the SMB tuning recommendations from my CIFS tuning blog mentioned above.
  4. If the file server is physical make sure you are teaming/bonding multiple NICs.
  5. If you are using local storage (hopefully not!), make sure you have as many 15k SAS disks as possible in the server and that you have a RAID card with at least 1 GB of battery backed cache.  For a Windows file server I would recommend the cache be split between 25% read and 75% write.

If you are using an enterprise class NAS device from NetApp or EMC, hopefully it is safe to assume that the RAM, CPUs, and RAID configuration have been optimized.  However, if you are using a NAS device, it is still critical that you verify the version of CIFS being used (make sure it supports SMB 2) and make sure that CIFS and, if necessary, TCP have been properly tuned.

As you can see, there are a lot of variables that will determine how well file services will perform.  I don’t want to turn this blog into an article on designing and tuning storage subsystems and protocol stacks for file servers, so I will simply focus on the two key metrics that you need to determine.

  1. How many IOPS do you require per user and what is the read/write ratio?
  2. How much bandwidth do you require per user?

For the rest of this article we will assume that the team managing and providing file services has tuned their infrastructure properly and all you need to provide them is the amount of IOPS your users will generate as well as the amount of network bandwidth they will consume.

How Many IOPS Do We Need?

So, the biggest question that everyone wants answered is: how many IOPS do you really need for Folder Redirection and Profiles?  I decided to tackle this question by examining data from three sources:

  1. How many IOPS do I use on my own desktop?
  2. How many IOPS are used in an automated test using the LoginVSI medium workload?
  3. How many IOPS and how much network bandwidth are actually used on a live production system by a real customer?

It is important to remember that for my tests I am only tracking the IOPS generated against the disks of the file server hosting the redirected profile and folder data.  I am not tracking local IOPS generated against the actual Windows 7 desktops’ C: drives.

My Own Desktop

For my first piece of analysis, I decided to examine the IOPS usage of my own laptop.  For this test, I redirected all of my folders, profile and my home directory to a dedicated disk hosting nothing else so that I could track total IOPS usage against the disk to see how much I generate.

I ran my test for 64 minutes and during that time opened many of the applications that I typically use, making sure that I performed as many actions as possible.  For the entire 64 minutes I took no breaks; I was extremely active and actually ran more apps and did more tasks in that short time frame than I typically would, because I wanted the workload to be the heaviest, worst-case scenario I would generate with my typical applications.   Before the start of my test, I rebooted everything so that none of my data would be held in a memory cache anywhere.

Here is a summary of my actions:

  • I logged on and immediately opened Windows Media Player and began playing my AC/DC playlist from my home directory.  The MP3s played for the entire test.
  • I opened Internet Explorer and Firefox to some of the regular pages I typically check and left both browsers and multiple tabs open the entire time.  I frequently went back to both browsers to view various web sites for the entire duration of the test.
  • I opened Microsoft Communicator and connected to my corporate server.
  • I opened Outlook to my corporate Citrix Exchange server.  I use Outlook in cached mode with a 1.3 GB OST file, and I also have a 1.7 GB PST file.
  • I sent/received mail.
  • I opened items from my inbox and did my normal email and calendar tasks.
  • I opened email from my PST file.
  • I cleaned up items from my inbox and moved items from my inbox to my PST.
  • I emptied my Deleted items, which had over 3000 emails in it.
  • I closed my corporate Outlook and ran Outlook two more times using the MAPI profiles for my personal email accounts both of which have 1 GB+ PST files.
  • I printed several emails to PDF files in my home directory.
  • After checking my personal email, I reopened Outlook to my Corporate Exchange MAPI Profile.
  • Outlook remained open for the remainder of the test and I frequently used it.
  • I opened and edited several Excel Spreadsheets from my home directory.
  • I opened and edited several Word documents from my home directory.
  • I downloaded a 108 MB file from ShareFile to my home directory.
  • I opened Quicken and updated my accounts and ran several reports.  My Quicken file is in my home directory.
  • I saved a 6 MB PowerPoint file from my Outlook to my home directory and viewed it.
  • I logged off.

The table below shows my total IOPS against the disk hosting my profile, redirected folders and home directory:

Avg. Total IOPS Avg. Read IOPS Avg. Write IOPS Max Read IOPS Max Write IOPS
5.7 3.1 2.6 189 36

The disk backing my profile, home directory and redirected folders was a single 7200 RPM SATA disk hosting nothing else.  My workload was the only thing running at the time, so all of the IOPS were generated by my usage. I averaged only 5.7 total IOPS with a read/write ratio of roughly 55/45%.  While there were periodic spikes, for the most part none of the spikes were sustained, and I rarely went beyond 30 IOPS for more than a few seconds.

After conducting some more detailed analysis of my workload, I was ultimately able to determine that the majority of my IOPS were generated by Outlook as it was reading and writing to my offline cache file (Outlook.ost) and to the many large PST files that I have.

LoginVSI Medium Workload

For my next test, I decided to use the LoginVSI Medium workload.  If you are not familiar with LoginVSI, then you should really check it out.  It was developed by a company called Login Consultants.  They are a group of highly skilled virtualization consultants that have written some excellent tools and white papers.  Check them out here:


I reconfigured the Medium workload so that all of the I/O was redirected to a single file server and ran out of each user’s home directory.  I placed the shared directory on the same file server as well so that all I/O generated by the workload would be tracked, including the IE content and media files. On this single file server I also placed all of the test users’ profiles, redirected folders and their home directories.

The Medium workload is an automated script that logs on and performs the following actions over the course of approximately 13 minutes:

  1. Logs on
  2. Opens Outlook using PST files
  3. Opens and creates/edits Word documents
  4. Opens and creates/edits Excel documents
  5. Opens and creates/edits PowerPoint documents
  6. Opens IE
  7. Opens flash and media players
  8. Logs off

My test ran for a total of 15 minutes and included 3 test users, each running the complete Medium workload one time.  The table below details the total IOPS generated by all 3 users combined.

Avg. Total IOPS Avg. Read IOPS Avg. Write IOPS Max Read IOPS Max Write IOPS
7.2 3.5 3.7 127 57

The next table details the total network utilization generated against the file server, which had a single NIC running at 1 Gig.

Avg. RX Mbps Avg. TX Mbps Max RX Mbps Max TX Mbps
0.48 Mbps 1.14 Mbps 20.75 Mbps 26.99 Mbps

When looking at network utilization it is important to remember that the network operates in full duplex.  For a 1 Gb NIC, this means that you can both send and receive 1 Gb at the same time.  However, since we are limited to a maximum speed of 1 Gig in any single direction, we will take the highest number from any single direction and use that to determine our average bandwidth per user. Based on the data from this test, the average IOPS and bandwidth per user is shown in the following table.

Avg. Total IOPS Read/Write Ratio Avg. Bandwidth
2.4 49/51% 0.36 Mbps
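To make the per-user math explicit, here is a quick sketch (plain Python; the numbers are taken from the two tables above) of how the per-user figures are derived: divide the combined IOPS by the number of test users, and take the busier direction of the full-duplex link before dividing the bandwidth:

```python
# Per-user averages from the 3-user LoginVSI test (numbers from the tables above).
users = 3

avg_total_iops = 7.2   # combined Avg. Total IOPS
avg_read_iops = 3.5
avg_write_iops = 3.7

avg_rx_mbps = 0.48     # combined Avg. RX Mbps
avg_tx_mbps = 1.14     # combined Avg. TX Mbps

iops_per_user = avg_total_iops / users

# Each direction of a full-duplex link is limited independently,
# so plan against whichever direction is busier.
bandwidth_per_user = max(avg_rx_mbps, avg_tx_mbps) / users

read_pct = round(100 * avg_read_iops / (avg_read_iops + avg_write_iops))

print(f"{iops_per_user:.1f} IOPS/user, {read_pct}/{100 - read_pct} read/write, "
      f"{bandwidth_per_user:.2f} Mbps/user")
```

Note that the bandwidth works out to roughly 0.38 Mbps per user from these rounded table values; the 0.36 Mbps figure was presumably computed from the raw, unrounded counters.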

Real World Customer

While my first two tests provided valuable data, they were based on more limited and controlled scenarios, and they also included home directory usage rather than just the usage for Citrix Profile Management and Redirected Folders.  My final piece of data involves real world numbers generated by a XenDesktop customer whose production environment has been running for well over a year and a half.  Here are some of the details about the environment:

  • XenDesktop is being used with a peak average of approximately 450 concurrent users per day.
  • All desktops are Windows 7 non-persistent delivered via Provisioning Services.
  • Citrix Profile Management and Folder Redirection have been implemented according to the best practices defined in my first blog in this series.
  • A dedicated Windows 2008 R2 virtual file server is being used to host the Profile Management and Folder Redirection shares.  Home Directories are on another server.
  • Most of the users are standard office productivity workers.  There are lots of different applications being used, but the most common applications would be Office 2010 with Lync, Internet Explorer, Firefox, Media Players, Adobe Acrobat, etc…

I tracked performance usage over the course of several days between 06:00 – 18:00 to make sure that each day provided a similar usage pattern.  Every day was almost exactly the same from a usage perspective, which gives me confidence that the data is representative.  Also, I collected the data over a 12 hour period to make sure that I captured all the highs and lows, so that I could zero in on the periods of highest usage.  For this customer, there were two particular time periods that provided the best data:

  1. 07:00 – 09:30 (This is the period when most users log on)
  2. 10:30 – 16:00 (This is the time period with the greatest number of users and highest usage)

Now let’s take a look at the data for each of these time periods.

07:00 – 09:30 – The logon period

During the above logon period, we started with 103 desktops already logged on at 07:00, and over the course of the next 150 minutes an additional 243 users logged on at a steady and consistent rate of about one user every 37 seconds.  We ended with 346 users logged on by 09:30.  The average number of users connected during this time frame was 225.

Avg. Total IOPS Avg. Read IOPS Avg. Write IOPS Max Read IOPS Max Write IOPS
127 50 77 962 663
Avg. RX Mbps Avg. TX Mbps Max RX Mbps Max TX Mbps
4.58 Mbps 15.92 Mbps 230 Mbps 360 Mbps

Based on the logon data time frame, the average IOPS and bandwidth per user is shown in the following table.

Avg. Total IOPS Read/Write Ratio Avg. Bandwidth
0.5 40/60% 0.07 Mbps

10:30 – 16:00 – The sustained usage period

During the sustained usage period we started with 364 logged on desktops and peaked at 444 logged on desktops. By 11:00 AM we hit peak usage and stayed at the peak until a few users began to log off around 15:30.  During the entire period, our average number of connected users was 420.

Avg. Total IOPS Avg. Read IOPS Avg. Write IOPS Max Read IOPS Max Write IOPS
472 78 394 1058 3925
Avg. RX Mbps Avg. TX Mbps Max RX Mbps Max TX Mbps
5.95 Mbps 6.43 Mbps 159 Mbps 348 Mbps

Based on this sustained real world data, the average IOPS and bandwidth per user is shown in the following table.

Avg. Total IOPS Read/Write Ratio Avg. Bandwidth
1.1 17/83% 0.02 Mbps
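The same arithmetic applied to the customer data produces the per-user figures above (a small sketch in Python; the minor differences from the published 40/60, 0.5 and 0.07 figures are just rounding):

```python
# Per-user figures derived from the customer's two measurement windows,
# using the combined averages from the tables above.
def per_user(read_iops, write_iops, rx_mbps, tx_mbps, users):
    total = read_iops + write_iops
    read_pct = round(100 * read_iops / total)
    return (total / users,                   # avg IOPS per user
            f"{read_pct}/{100 - read_pct}",  # read/write ratio
            max(rx_mbps, tx_mbps) / users)   # Mbps per user (busier direction)

logon = per_user(50, 77, 4.58, 15.92, 225)      # 07:00-09:30, avg 225 users
sustained = per_user(78, 394, 5.95, 6.43, 420)  # 10:30-16:00, avg 420 users

print(logon)      # roughly 0.56 IOPS/user, 39/61 read/write, 0.07 Mbps/user
print(sustained)  # roughly 1.12 IOPS/user, 17/83 read/write, 0.015 Mbps/user
```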

Making Sense of the IOPS Numbers

Now that we have a real customer example, let’s try to make some sense of the IOPS numbers.  During the logon part of the day, when the most logons are occurring, the file server averages a 40/60 split on read vs. write IOPS.  This value is actually in line with the results from the LoginVSI test and the results from the 64 minute heavy workload on my own desktop: my desktop was 55/45, LoginVSI was 49/51 and the real world customer was 40/60 read vs. write.

However, as we look at the file server during the 5+ hours of sustained activity in the middle of the day, we see a major shift in the read/write ratio: a 17/83% split on reads vs. writes.  So why does the workload shift to more write operations as the day progresses?  The answer lies in the read caching capabilities of the file server and of each desktop connecting to it.  Once a file is read from disk, Windows loads it into System Cache RAM.  The file is actually cached in two places: on the file server and on the Windows client that read it from the file server.  So, as the day progresses, more and more of the read operations are handled by the System Cache RAM of the virtual desktop that originally read the file or by the System Cache RAM of the file server.  This is why it is important to give your file server as much memory as possible. This is the same principle that allows Provisioning Server to operate effectively as it caches vDisks. You can get more details about Windows caching from the following two documents that I have previously written.



This native read caching capability of Windows is why we recommend that the cache on a RAID controller be configured to a 25/75 ratio of read to write caching.  Since Windows has a built in caching mechanism for reads, we want to dedicate more of the RAID controller’s resources to write operations.
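To illustrate why the read share collapses as the caches warm up, here is a toy model (the hit rates are hypothetical, chosen only to show the trend; they were not measured at this customer):

```python
# Toy model: back-end read IOPS seen by the file server's disks as the
# combined (client + server) read cache warms up over the day.
client_read_requests = 400   # logical read requests per second (illustrative)
write_requests = 394         # writes are not absorbed by the read cache

for hit_rate in (0.0, 0.5, 0.8, 0.9):
    disk_reads = client_read_requests * (1 - hit_rate)
    read_pct = round(100 * disk_reads / (disk_reads + write_requests))
    print(f"cache hit {hit_rate:.0%}: {disk_reads:.0f} disk reads/s "
          f"-> {read_pct}/{100 - read_pct} read/write at the disks")
```

At an 80% combined hit rate this model happens to land on the 17/83 split observed during the sustained period.  That is purely illustrative, but it shows how even a modest cache hit rate flips the ratio toward writes.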

Also, when you look at the IOPS numbers for the logon period vs. the sustained usage period, you will notice that the IOPS load is much lighter during the logon period; we are not really seeing the massive file server IOPS hit at logon commonly known as the logon storm.  I am sure that many of you are wondering why, or thinking that the numbers must be wrong.  However, I can explain it very easily: we have effectively eliminated the logon storm impact because we have properly configured Profile Management and Folder Redirection per the recommendations in my first blog of this series.  Most importantly, we have redirected all folders, including AppData, and thus the initial load of reading the profile from the file server is quite minimal.  In fact, after 18+ months of usage, here are the statistics for the profile and redirected folders on the file server.

Folder Type Number of Users Avg. Size per User
Profile Folder 5617 14.6 MB
Redirected Folders 5617 89 MB

The key thing to take note of here is that the average size of our Citrix Profile Management profile is less than 15 MB.  For those who have been working with profiles for any length of time, you know that is an incredibly small number.  We achieved this by properly configuring Profile Management with all the necessary exclusions and by redirecting, via Group Policy, all folders available by default in Windows 7.  You will notice that the average size of our redirected folders is 89 MB.  As mentioned earlier, in this customer environment the Documents, Videos and Music folders are not redirected to this file server; those folders are redirected to the users’ home directories hosted on other file clusters.

When we look more deeply into it, we find that approximately 80%+ of all the data in the Redirected Folders is from AppData.  If we did not redirect AppData, then each user on average would download 70 MB or more every time they logged on.  That would indeed place a much greater load on the file server and would create a logon storm scenario from an IOPS perspective.  I do not want to get into the pros/cons of redirecting AppData because you can read my detailed reasons why it should always be redirected for non-persistent desktops in part one of this blog series. We currently have zero compatibility issues and zero performance issues with redirected AppData at this customer.  I have heard some people claim it slows applications down, but I can guarantee you that if you design your file services infrastructure properly, it absolutely will not have any negative impact.  If you really want to slow your users down, try letting AppData roam or stream and watch what happens to your performance and your file servers’ IOPS load!

The reality is that 80%+ of all the files in AppData never actually get read throughout the day, so reading or downloading them needlessly not only adds network overhead but, more importantly, pollutes the System Cache memory on your file server and on your Windows 7 virtual desktop with AppData files that do not need to be read and cached.  This will decrease the Cache/Copy Read Hit % on your file servers and needlessly increase your read IOPS!

Making Sense of the Network Numbers

So how much network bandwidth do we need to support properly configured Citrix Profiles and Folder Redirection?  Well, during the LoginVSI test we averaged 0.36 Mbps per user.  My real world customer consumed 0.07 Mbps per user during the peak logon time.  The main difference is that for the LoginVSI workload, all users were highly active during the sample period, whereas in a real world scenario you will always have users that are idle or performing tasks that do not require a network transfer to the file server, so the average comes down quite a bit.  These numbers are actually in line with a great file server scalability article that Microsoft wrote for their File Server Capacity Tool.  If you have not already checked it out, you can find it here:


While the Microsoft test was designed to simulate home directory usage, the numbers should be quite similar to Profiles and Folder Redirection usage.  The Microsoft test averaged 0.22 Mbps per user.

For IOPS, we have RAID controllers using battery-backed memory to cache write operations, and memory on RAID controllers and within Windows acting as a read cache.  This allows us to tolerate spikes in IOPS usage without negatively affecting users.  For this reason, we do not worry about short spikes of IOPS because the caching will take care of them.  However, if a file needs to be sent over the network, then there is no caching that can help it.  The proper amount of bandwidth must be available, or else the user will experience session slowness or other delays.  So network utilization will by nature have many spikes, and it is important that you are not running your NICs so hot that they do not have the proper headroom available to tolerate those spikes. With that in mind, I would take our peak average of 0.36 Mbps from the LoginVSI test and use that as the baseline for planning.

So How Much Bandwidth and IOPS do I Need?

The first thing that you have to determine is what type of users you have.  In my real world customer example, we have much lower IOPS and network throughput when compared to my one hour of heavy personal usage and to the LoginVSI medium workload.  The primary difference is that my real world example covered true office productivity users over an entire day, not just the heaviest hour of each user within the day.  Think of it this way: in most corporate office environments people are taking breaks, going to lunch, sitting in meetings, talking on the phone, chatting with co-workers, etc… There are major amounts of time in a given day where a user is completely idle and consuming zero file server IOPS and zero file server bandwidth.  If you work in an office, then I challenge you to take a walk around the cubicles and offices and look at how many computers are logged on but do not have a user actively banging away at the keyboard.  I have done this exercise many times at many different customers, and I usually find that well over 50% of all the computers are technically idle at any given moment.

Now, it is also important to note that there are many environments where the usage will be much higher.  Think of a call center environment; in such a situation, it could easily be that 80% of the users are truly active at the same time. However, in all my years of doing these types of studies, the IT folks and managers within a company always overestimate their true concurrent usage by almost 100%!  The bottom line is that you must know your users!  It is only after you truly know your users and their usage patterns that you can accurately plan and design an environment to meet their needs. Also, please be aware of PST and OST usage and limit it, if possible.  For non-persistent virtual desktops, the Exchange servers should be in the same data center as the virtual desktops, so offline caching should not typically be enabled.

So, with all that being said, let’s get down to some real world numbers and formulas that can be used to estimate the IOPS and network bandwidth requirements for a company with typical Office Productivity users where Citrix Profile Management and Folder Redirection have been properly implemented per the recommendations of this blog series.  The table below has my recommendations.

Share IOPS Read/Write Ratio Network Bandwidth
Profiles and Folder Redirection 1.5 35/65 0.25 Mbps
Home Directory 1.5 35/65 0.25 Mbps

So, if you were deploying a brand new VDI environment in a data center to support 10,000 users and you were migrating the home directories, profiles and redirected folders to a new NAS infrastructure, your NAS would need to support 30,000 IOPS and 5 Gbps of network throughput.

IOPS = (10K x 1.5 Profile/Folder IOPS) + (10K x 1.5 Home Directory IOPS)

Bandwidth = (10K x 0.25 Mbps Profile/Folder Bandwidth) + (10K x 0.25 Mbps Home Directory Bandwidth)
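That sizing arithmetic can also be wrapped up in a small helper for what-if planning (a sketch in Python; the per-user constants are the planning numbers from the table above, not universal values):

```python
# Capacity sketch using the planning numbers from the table above:
# 1.5 IOPS and 0.25 Mbps per user for each of the two share types.
def size_nas(users, shares=("profiles_and_redirection", "home_directory"),
             iops_per_user=1.5, mbps_per_user=0.25):
    """Return (total IOPS, total Gbps) needed for `users` across `shares`."""
    total_iops = users * iops_per_user * len(shares)
    total_gbps = users * mbps_per_user * len(shares) / 1000  # 1 Gbps = 1000 Mbps
    return total_iops, total_gbps

iops, gbps = size_nas(10_000)
print(f"{iops:,.0f} IOPS, {gbps:g} Gbps")  # 30,000 IOPS, 5 Gbps
```

For example, sizing only the profile/redirection share for 2,000 users would be size_nas(2_000, shares=("profiles_and_redirection",)), which works out to 3,000 IOPS and 0.5 Gbps.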

The final thing to consider is whether or not to co-locate profiles and redirected folders on the same file cluster or NAS that hosts home directories.  Typically, most customers have already deployed home directories for their users, so I recommend against co-locating them. Rather than risk overloading the existing file services infrastructure that hosts home directories, I would recommend setting up a dedicated file services cluster or NAS environment for profiles and redirected folders.  By splitting the load across multiple file services infrastructures, you will get better performance.  Additionally, it is often much harder to scale and tune existing home directory infrastructure to support large numbers of profiles and redirected folders.  So, I like to keep it separate. However, this does not mean that you must keep it separate.  If your environment is small to medium and you know that you have the bandwidth and IOPS to support it, you can safely co-locate them.  Just remember that the servers hosting profiles and redirected folders must be co-located in the same data center and preferably on the same core switching infrastructure as the virtual desktops.  You could also safely host home directories, profiles and redirected folders on the same server if you are deploying new NAS infrastructure and want to consolidate a user’s data to a single point.  Just make sure your NAS has the IOPS and network throughput to support the users and that it is co-located with the desktops.

I hope you found this to be a useful and informative series and I wish you success in configuring Citrix Profile Management and Redirected Folders!


Dan Allen