The Domain Name System (DNS) has rightfully been called the “linchpin of today’s internet”. While there have been many incremental improvements to DNS over the years, what was said in 1991 remains true today.

DNS was originally invented to solve a particular set of problems that emerged as the young internet was starting its original growth spurt (discussed below). As we will see, elegant solutions were designed and implemented to overcome the flaws in the original scheme for naming computers on the internet.

In 2018, a new set of problems has arisen that requires a new design. As you will see, performance, availability, measurability, and reporting are the drivers for a new DNS: a Predictive DNS.

A short walk down memory lane

Originally, DNS was developed to solve a problem that plagued the ARPANET, whose architecture dated to the 1960s. Up to that point, people simply kept the names and addresses of all the servers they knew about locally. The UNIX hosts table, /etc/hosts, was compiled from HOSTS.TXT, a file that was updated a couple of times a week.

When TCP/IP arrived, its protocols allowed a massively expanding number of machines to be connected to the internet. As thousands of machines started to join the internet, updating HOSTS.TXT became problematic. Because every machine needed to be in the HOSTS.TXT file, its size became an issue. Furthermore, constantly sending the updated file to all the machines was becoming a traffic hog in itself. The file was maintained by a single machine at the Stanford Research Institute (SRI), which was hammered by all the traffic and processing required to keep things up to date. Clearly, this approach was not scalable, for three reasons:

  1. The name space was flat, and relying on a single machine to keep it up to date did not scale as new machines were constantly added.
  2. Name collisions became common. This was a serious problem. If a new machine was introduced with the same name as an existing major mail server, it could disrupt mail service to much of the Internet.
  3. As more hosts were added, keeping a consistent HOSTS.TXT file on each of them became nearly impossible. The updated HOSTS.TXT file simply could not be shared fast enough to all servers that needed it. [1]

Something had to change. Paul Mockapetris solved these problems by designing a system that ensured the uniqueness of names, decentralized management, and made data globally available. He created a distributed, hierarchical database managed by local name servers. These name server programs were responsible for the local domain and its subdomains, and also knew enough to send queries outside the domain when requested.
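To make that hierarchy concrete, here is a minimal Node.js sketch (TypeScript, using only the built-in node:dns module) that walks a name from the top-level domain down and prints which name servers are delegated at each level. The hostname www.example.com is just a placeholder.

```typescript
import { promises as dns } from "node:dns";

// Illustrative only: walk the DNS hierarchy top-down for a placeholder name.
async function showHierarchy(host: string) {
  const labels = host.split(".");
  for (let i = labels.length - 1; i >= 0; i--) {
    const zone = labels.slice(i).join(".");
    try {
      const ns = await dns.resolveNs(zone); // name servers delegated for this zone
      console.log(`${zone} is served by: ${ns.join(", ")}`);
    } catch {
      console.log(`${zone} has no NS records of its own (it lives inside a parent zone)`);
    }
  }
  const addrs = await dns.resolve4(host); // final answer from the authoritative servers
  console.log(`${host} resolves to: ${addrs.join(", ")}`);
}

showHierarchy("www.example.com").catch(console.error);
```

Each level of the tree only has to know about its own delegations, which is exactly what allowed management to be decentralized.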

The internet authority partitioned the top-level domain space into categories:

Domain name     Meaning
.com            Commercial organizations
.edu            Educational institutions
.gov            Government institutions
.mil            Military groups
.net            Major network support centers
.org            Organizations other than the above (often not-for-profit)
Country code    Each country has its own code

DNS has not changed significantly since that time, even through all the growth of the internet.

And yet new issues have emerged.

The new problem: the internet eats the world

Since that time, the internet has eaten the world. Everything happens on the internet and, consequently, everything uses DNS. However, DNS was not designed for the level of performance, availability, and security that current uses demand of it.

Performance is everything on the internet. Whether it’s video startup time, web page load, a first-person shooter, or the download of a new app, performance correlates directly with success. If your solution does not perform, customers leave and find your competitors. That is money out of your pocket and into someone else’s.

Performance can’t be tuned in one place. Rather, it’s a function of many decisions and elements in an architecture. DNS is one of many elements in the chain of latency. However, it is also the one that is closest to the user.

In a typical page-load waterfall, you can see where the DNS calls are and, more importantly, what their latency is. DNS latency includes both the latency from the user to the resolver and, on cache misses, from the resolver to the authoritative name server, so it can vary greatly as a percentage of the whole page load, but it generally accounts for 10%-30% of total page-load latency.
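As a rough illustration of how that share can be measured in the browser, here is a small sketch using the standard Navigation Timing API (run after the page’s load event). The numbers it reports will of course depend on caching and the user’s resolver.

```typescript
// Browser-side sketch: estimate what share of page-load time was spent on DNS lookup.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav && nav.loadEventEnd > 0) {
  const dnsMs = nav.domainLookupEnd - nav.domainLookupStart; // 0 when the name was cached
  const totalMs = nav.loadEventEnd - nav.startTime;          // full page-load duration
  const share = (100 * dnsMs) / totalMs;
  console.log(`DNS: ${dnsMs.toFixed(1)} ms of ${totalMs.toFixed(1)} ms (${share.toFixed(1)}%)`);
}
```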

A modern, complex website might have 100-300 components to retrieve, many of which require their own DNS lookup, and each lookup takes time. This is clearly an area where performance gains might be sought.

To improve DNS performance, it’s important to be able to measure the performance of DNS transactions for all the potential users and locations that might use a particular website. However, the internet is partitioned into Autonomous Systems, identified by Autonomous System Numbers (ASNs), and users from more than 49,000 ASNs may all be trying to reach a website. To improve DNS performance for all of them, it is necessary to monitor how well it is performing from all of these ASNs globally. This is a massive problem. As has been shown elsewhere, only a Real User Monitoring (RUM) solution can achieve this economically. Simply put, synthetic monitoring alone is not a good solution when faced with the enormity of monitoring every corner of the internet. That is not to say synthetic monitoring does not have its place; it most certainly does. In fact, when synthetic monitoring is combined with RUM, a monitoring solution can have the advantages of both with the shortcomings of neither.
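In practice, a RUM approach boils down to collecting these timings from real browsers and shipping them somewhere for aggregation. The sketch below uses the standard Resource Timing API; the /rum/dns collector endpoint is a hypothetical placeholder, and cross-origin resources only expose DNS timing when they send a Timing-Allow-Origin header.

```typescript
// RUM-style sketch: collect per-resource DNS timings in the browser and beacon them out.
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

const samples = entries
  .map((e) => ({
    host: new URL(e.name).hostname,
    dnsMs: e.domainLookupEnd - e.domainLookupStart,
  }))
  .filter((s) => s.dnsMs > 0); // drop cached lookups and opaque cross-origin entries

// "/rum/dns" is a placeholder collector endpoint, not a real service.
navigator.sendBeacon("/rum/dns", JSON.stringify(samples));
```

Aggregating beacons like these by ASN and geography is what lets a provider see DNS performance from every corner of the internet rather than only from a handful of synthetic test points.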

Like all things “as-a-service,” DNS as a service relies on a network. It is essential that a DNS provider operates a low-latency network, allowing fast resolution of DNS records wherever users are situated. Of course, it is even better if a provider can offer multiple networks (redundancy) for this service.

This problem of DNS provider availability is an important one. Much has been written about Denial of Service (DoS) attacks, man-in-the-middle attacks, and other forms of cybercrime, but the fact remains that most large outages of internet sites and apps come from a lack of redundancy. It is common knowledge that for a site to be up 24/7, you have to plan for failure.

Another problem is security. Massive DoS attacks on DNS infrastructure have caused well-documented outages. Redundant DNS is the simplest means of mitigating this risk. Modern architectures need to support dual DNS natively.
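As a quick sanity check for single-homed DNS, a sketch along these lines (Node.js, built-in node:dns; example.com is a placeholder zone) can reveal whether a zone’s NS records all point at one provider. Grouping name servers by their parent domain is a heuristic, not an exact test.

```typescript
import { promises as dns } from "node:dns";

// Check whether a zone is delegated to more than one DNS provider,
// approximated by grouping its NS hostnames by their parent domain.
async function checkDualDns(zone: string) {
  const ns = await dns.resolveNs(zone);
  const providers = new Set(
    ns.map((n) => n.toLowerCase().split(".").slice(-2).join("."))
  );
  console.log(`${zone}: ${ns.length} name servers across ${providers.size} provider domain(s)`);
  if (providers.size < 2) {
    console.warn("Single-homed DNS: one provider outage can take the whole zone offline.");
  }
}

checkDualDns("example.com").catch(console.error);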

Administration of DNS is changing too. As the demands of performance are increasingly felt by the business, administrators have been forced to make changes programmatically, in real time. This requires a method to automate DNS changes (e.g., an API that allows administrators to interact remotely and push all changes at once).
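What such automation looks like varies by provider; the sketch below is a generic, hypothetical example of pushing a record change over a REST API from a deployment script (Node.js 18+). The endpoint, token variable, and payload shape are placeholders, not any specific provider’s interface.

```typescript
// Hypothetical DNS automation over a generic REST API; all names here are placeholders.
async function upsertRecord(zone: string, name: string, type: string, value: string, ttl = 300) {
  const res = await fetch(`https://dns.example-provider.com/v1/zones/${zone}/records`, {
    method: "PUT",
    headers: {
      "Authorization": `Bearer ${process.env.DNS_API_TOKEN}`, // token supplied by the environment
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ name, type, value, ttl }),
  });
  if (!res.ok) throw new Error(`DNS update failed: ${res.status}`);
}

// Example: point www at a new load balancer as part of an automated deployment.
upsertRecord("example.com", "www", "CNAME", "lb-2.example.net").catch(console.error);
```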

DNS reporting is critical. Too few DNS providers offer reporting capabilities that go far enough to be useful. The ability to solve a problem is directly related to the ability to find it, which is why deep reporting is so important.

Paul Mockapetris looked at the failures of the original HOSTS.TXT architecture and crafted a new solution that defined DNS for the next 30 years. He formulated requirements from the problems he was seeing at the time.

And this is what Citrix is doing today: looking at the new problems that exist now and constructing systems that help mitigate those problems.

What are the requirements for the new DNS?

  • Highly Performant – business now requires it
  • Measurable in every corner of the internet
  • Complete Active-Active Redundancy
  • Complete support for all automation
  • Deep, real-time reporting

The Solution: Predictive DNS

Citrix has launched Predictive DNS as part of its overall traffic management solution to address these requirements.

Delivered as a service across five completely segregated Anycast networks, the Citrix DNS solution provides unparalleled performance.

These five fully redundant Anycast networks maximize uptime. By pairing this architectural feature with Real User Monitoring (RUM) alongside synthetic monitoring, Predictive DNS ensures that performance is optimized for each request. By providing both a UI and a full API to create and manage zones, and to create and update A, AAAA, CNAME, MX, NS, PTR, SOA, SPF, SRV, and TXT records, Citrix gives harried administrators the greatest freedom in implementing their automation. And by supporting a dual Active-Active DNS stack, it allows for multiple authoritative DNS sources, thereby breaking the chain of single-homed DNS. Finally, this is all wrapped in industry-leading reporting that allows tuning and rapid deployment of new services.

We hope this brief overview of Predictive DNS has been informative. More to the point, if you would like to give it a try, contact us using the form below and one of my colleagues will get back to you immediately.


This guest post comes to us from Pete Mastin, a consultant to Citrix with deep experience in business and product strategy, as well as experience in various uses of AI, CDNs, IP services, and cloud technologies. He designs and executes go-to-market strategies for businesses, and has overseen the implementation of highly scalable, multi-homed, global SaaS systems. He often speaks at conferences such as NAB (National Association of Broadcasters), Streaming Media, The CDN/Cloud World Conference (Hong Kong), Velocity, Content Delivery Summit, Digital Hollywood and Interop.