On several recent projects I have been challenged by the networking teams about the risk of broadcast flooding the network. Their view on this topic is largely based on old-fashioned hardware vendor recommendations for Windows-based networks. I wanted to share some feedback from the field and describe the largest network subnets I have implemented without issues, especially when talking of Provisioning Services traffic.
There is no golden rule for this; whether your infrastructure copes with the load depends on your design. Assuming you split your virtual desktop networks into 2 different VLANs, isolating vDisk streaming traffic from your service/VDA network, the following can be considered baseline architectural references for you to design and scale upon:
Provisioning Services network:
It is strongly recommended to isolate your PVS traffic from all other traffic, most commonly by means of a dedicated VLAN. Isolating vDisk streaming traffic will improve performance, as there is less potential degradation from other traffic travelling within the same subnet. In such a scenario, the broadcast traffic on your vDisk streaming network will be limited to DHCP Discovers and, in case you chose PXE rather than TFTP for bootstrap delivery, PXE boot requests.
Depending on your service requirements, you may see broadcasts concentrated at peak times or spread more evenly throughout the day; this is tightly coupled to your business model. In either case, the potential impact of broadcast traffic will be very limited. Even at peak times, a well-designed isolated network will minimize the impact on other services, and VM boot storms should be spread consistently across the boot-up window, controlled by the brokers. Broadcast-wise the impact remains small, as broadcast packets are tied to the number of virtual desktops the brokers are booting concurrently at any given time.
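As a rough illustration of why a broker-throttled boot storm generates so little broadcast traffic, here is a back-of-the-envelope sketch. All the figures below are hypothetical assumptions for illustration, not measured values:

```python
# Back-of-the-envelope estimate of boot-time broadcast load.
# All figures are illustrative assumptions, not measurements.
total_desktops = 4000        # desktops to boot (assumption)
boot_window_minutes = 60     # brokers spread boots across this window (assumption)
broadcasts_per_boot = 4      # e.g. DHCP DISCOVER/REQUEST packets per boot (assumption)

boots_per_minute = total_desktops / boot_window_minutes
broadcast_pkts_per_minute = boots_per_minute * broadcasts_per_boot
print(round(broadcast_pkts_per_minute))  # roughly 267 broadcast packets per minute
```

Even with thousands of desktops, a spread-out boot window keeps the subnet-wide broadcast rate at a handful of packets per second.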
4096 virtual desktops
In a case where you prefer a single network shared within your Datacenter infrastructure, you would be safe hosting 4096 virtual desktops, meaning a 20-bit netmask (255.255.240.0). Consider 4094 desktops your upper limit, as you will need to reserve the subnet and broadcast IPs. Also be careful of:
- Extending this VLAN across Datacenters; streaming traffic is highly sensitive to latency.
- Bandwidth: you will require guaranteed bandwidth for such a deployment, which could be around 3 Gbps for that number of desktops. Beware there are other design concerns too: the number of PVS servers, the load each PVS server can cope with, failover and rebalancing scenarios, and others.
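The subnet arithmetic above can be verified with Python's standard `ipaddress` module. The 10.20.0.0/20 prefix here is just an illustrative example; any /20 behaves the same:

```python
import ipaddress

# Hypothetical /20 desktop subnet chosen for illustration.
subnet = ipaddress.ip_network("10.20.0.0/20")

usable_hosts = subnet.num_addresses - 2   # reserve the network and broadcast IPs
print(subnet.netmask)                     # 255.255.240.0
print(usable_hosts)                       # 4094

# Rough per-desktop share of the ~3 Gbps streaming budget mentioned above.
per_desktop_mbps = 3000 / usable_hosts
print(round(per_desktop_mbps, 2))         # about 0.73 Mbps per desktop
```

That per-desktop figure is only an average; sizing should still account for boot storms and PVS server failover scenarios.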
There are other, less risky ways to isolate your traffic that are easier to keep within boundaries while maintaining high throughput. One is to keep your streaming traffic within an internal network on your blade enclosure, with your PVS server(s) streaming vDisks only to the VMs running on the blade servers in that same enclosure. Keep in mind:
- There could be vDisk replication challenges in large deployments, depending on the Store method of choice.
- A single PVS server per enclosure is a potential single point of failure; deploying 2 PVS servers per enclosure for resilience will most likely mean a higher PVS server density across your infrastructure.
Infrastructure traffic isolation
I am not a fan of promoting vendor-specific technologies, but so far this is the only solution I know of that provides a higher-level element encapsulating all your blade/chassis communication (the most common deployment nowadays). With Cisco UCS you could isolate all your desktop streaming traffic across chassis within the Fabric Interconnect, as encapsulated traffic with bandwidth reservation, making it an ideal choice for deploying a PVS-based virtual desktop infrastructure while reducing the impact on your existing Datacenter networks.
PVS offers several ways to deliver the bootstrap other than network discovery, mainly based on BDM (Boot Device Manager). BDM allows you to create a bootable disk or ISO, attached to your VMs, that includes the bootstrap file. This avoids the broadcast phase entirely: the VMs already know the list of PVS servers to contact, so the traffic is unicast from the start.
Virtual Desktop Service network:
As for your VDA or service network: this will probably carry your virtual desktops' default gateway traffic to the outside world (including VDA registration, ICA traffic, etc.). Here, the potential impact of broadcast traffic depends on your applications' requirements and demands.
With these baseline references in mind, consider what your actual broadcast concerns on Windows-based networks should be. Microsoft Active Directory, DNS and general networking traffic should not rely on broadcast unless certain applications generate it or communicate over NetBIOS (which, by default on recent Windows versions, is not even necessary). So, once again, you should understand your applications' requirements.
I hope you find this useful, and see you next time.