Actionable intelligence is what many claim was the key to getting Bin Laden: essentially, immediate and accurate data that helps deal with the situation at hand with a high probability of success.
In the IT industry, specifically around applications and now cloud services, this remains an elusive target. The same was true some decades ago in the network world, prior to the emergence of NetFlow.
Many of you might remember the early 90s, when IP became dominant (thanks to Cisco and others), collapsing many point-to-point protocols (like SNA) into an end-to-end, scalable, and ubiquitous connectivity layer. While this brought multiple benefits, a key issue that emerged in the later 90s, as most network devices transitioned to IP everywhere, was a lack of visibility and therefore a lack of control. Specifically, prior to TCP/IP one could get hop-to-hop visibility, whereas with TCP/IP as an end-to-end protocol, existing tools struggled to triage issues because they had limited visibility into the intermediate hops. Thus came NetFlow, which we can all agree revolutionized network performance visibility and monitoring; one can even argue that it accelerated mainstream deployment of IP networks worldwide.
So why did NetFlow make such an impact?
The answer was simple: network management tools could now get the information they were lacking, in real time. More importantly, this information emanated directly from the network real estate that was processing IP: routers and switches. Essentially, NetFlow's popularity stemmed from the fact that in-place real estate was being leveraged to gain visibility; the same real estate that in many cases caused obfuscation (for beneficial reasons) could now undo it from a visibility perspective. Since then, we all know what happened: pretty much every major network vendor has something like NetFlow. For example, there is Juniper's jFlow and the open sFlow approach, and almost every network performance monitoring tool vendor supports them as well.
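To make concrete what "information emanating directly from routers and switches" looks like, here is a minimal sketch of decoding the widely documented NetFlow v5 flow record (the 48-byte per-flow record exporters emit), written in Python for illustration:

```python
import struct

# NetFlow v5 flow record layout: 48 bytes, network byte order.
# Field names follow Cisco's published v5 record format.
V5_RECORD = "!IIIHHIIIIHHBBBBHHBB2x"

FIELDS = (
    "srcaddr", "dstaddr", "nexthop",   # IPv4 addresses as 32-bit ints
    "input", "output",                 # SNMP ifIndex of in/out interface
    "dPkts", "dOctets",                # packet and byte counters
    "first", "last",                   # sysUptime at first/last packet
    "srcport", "dstport",
    "pad1", "tcp_flags", "prot", "tos",
    "src_as", "dst_as",
    "src_mask", "dst_mask",
)

def parse_v5_record(data: bytes) -> dict:
    """Decode one 48-byte NetFlow v5 flow record into a field dict."""
    return dict(zip(FIELDS, struct.unpack(V5_RECORD, data)))
```

Each record summarizes one flow the router forwarded, so a collector gets per-conversation counters without any probe or agent in the path, which is exactly the property the rest of this post argues applications deserve too.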
So why hasn’t something like NetFlow happened for actionable app intelligence?
First, many approaches have indeed tried to address this as applications (web, app, db) took center stage over the last decade or so. However, they have always been plagued by the requirement to get at data invasively, via agents, code instrumentation, or limited-functionality taps (e.g., SPAN ports).
Second, the high cost of the overall app intelligence solution (from taps to instrumentation to high-end analytical tools) prevented mainstream adoption of a single approach. As many of you can tell, the market is highly fragmented, with incremental approaches popping up every few years.
Finally, a key reason that no mainstream approach has emerged for app intelligence, the way NetFlow did for network visibility, is that the core principle behind NetFlow's success could not be achieved, until now. That is, leveraging in-place, ubiquitous real estate to emit relevant, real-time data: what better place to source data than the devices already in the "line of sight," performing other value-added functions?
Timing is right
The landscape, however, has changed over the last decade with the emergence of an "app network" real estate: L4-7 ADCs, WAN optimization controllers (WOCs), and application firewalls and security solutions. This "app fluent" real estate already processes application traffic, acting to optimize, secure, and make highly available end-user apps and enterprise services. With the emergence of cloud, this footprint is even more essential and is being deployed ever more pervasively in the "line of sight" of any application, desktop, or cloud service.
No rocket science here. It's simply a case of opening up the application-fluent footprint that is already parsing and acting on application traffic, so that it emits application data rich in specific L4-7 metrics, and any mainstream ops tool can quickly correlate that data to triage issues and drill into whether it's a WAN issue, a web/app-tier issue, a DB issue, or a LAN issue.
With no taps. With no agents. With no code instrumentation. And with no need for heavyweight, expensive correlation tools.
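As a sketch of what such triage could look like: the record and field names below are hypothetical, invented for illustration (they are not actual AppFlow information elements), but they show how per-transaction L4-7 timings exported by the app-delivery tier let a simple rule localize a slowdown without any agent or tap:

```python
from dataclasses import dataclass

@dataclass
class AppTransactionRecord:
    """Hypothetical per-transaction record an app-delivery device might export."""
    url: str
    client_rtt_ms: float    # round-trip to the client (WAN side)
    server_ttfb_ms: float   # time-to-first-byte from the web/app tier
    db_time_ms: float       # back-end/database portion of that time

def triage(rec: AppTransactionRecord, threshold_ms: float = 200.0) -> str:
    """Point ops at the dominant contributor to a slow transaction."""
    timings = {
        "WAN": rec.client_rtt_ms,
        "web/app tier": rec.server_ttfb_ms - rec.db_time_ms,
        "database": rec.db_time_ms,
    }
    worst = max(timings, key=timings.get)
    return worst if timings[worst] >= threshold_ms else "no issue"
```

For example, a transaction showing a modest client RTT but a 900 ms time-to-first-byte of which 750 ms is database time would be flagged as a database problem, not a WAN one. The point is not this particular rule but that the raw material for it comes straight from real estate already in the data path.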
The key is for this app network real estate to emit this data without inducing any latency or overhead on its primary app delivery functions. The one big difference from NetFlow, a lesson learned you could say, is to not make this proprietary. The whole premise of ubiquitous access to real-time app data from the network real estate depends on multiple app networking vendors all supporting the same approach by adopting a common standard; hence AppFlow is set up to be an open standard from day one.
Steve Shah will be covering AppFlow in detail, and his post is a must-read if you want to delve further into what makes AppFlow tick.
Over the upcoming days and months you will see multiple tools and vendors emerge to embrace AppFlow, but ultimately any technology or standard succeeds only through customer adoption. So while we believe this solution is a step in the right direction, we look forward to your feedback and your active engagement at appflow.org. Let's keep the momentum going toward actionable app intelligence.