Last week we discussed the excellent troubleshooting utilities that can act as a savior in support/escalation cases (/blogs/2012/01/27/excellent-diagnostictroubleshooting-utilities-on-netscaler/). This week’s focus is on the technical support tools we have in the Diagnostics module.

Many times when you open a support case, the first thing you are asked to provide is the “Tech Support File”. What is this file and how do you get it? The single place where you can do all of this is the “Diagnostics” page in the UI. The tech support tools have many options, so let us walk through the important ones here.

Generate Support File: This option generates the support file, which collects all the relevant data for debugging and analysis. The file typically includes:

  • Newnslog files from “/var/nslog/”
  • Dmesg files from “/var/nslog/”
  • Process core files from “/var/core/”
  • Kernel core files from “/var/crash/”
  • Messages files from “/var/log/”
  • Ns.log files from “/var/log/”
  • Other user process logs from “/var/log/”
  • Configuration files from “/nsconfig/”
  • User monitor modules from “/nsconfig/monitors/”
  • Important configuration files from “/etc/”
  • .recovery file from “/flash/”
  • Various command output from “shell”
  • Nsapimgr output from “shell”
  • Downloaded objects from “/var/download/”
  • Profiler log files from “/var/nsproflog/”
  • Sync log files from “/var/nssynclog/”

Surprised by the amount of data being collected? :) This is to ensure that you are not asked for individual files repeatedly, which slows down the whole process. Run the single utility, get all the data in one zipped file, and download it from the same window.
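
If you prefer the command line, a collector archive with essentially the same contents can also be generated from the NetScaler CLI. Take the lines below as a rough sketch only; the exact command options and the output location can vary between releases:

    # Generate the tech support bundle from the CLI; the collector
    # archive typically lands under /var/tmp/support/
    > show techsupport

    # Then pull the newest collector_*.tar.gz off the box with your
    # favourite SCP client, or download it from the Diagnostics page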

Apart from these files and data, in some cases you need to take live traces to debug the issue, and that is the next utility available:

Start new trace: This utility helps you collect a network trace from a live NetScaler. We have seen that many connection-level or latency kinds of issues require a trace file for debugging. Collecting a trace is simple, but collecting it the right way is what makes the analysis possible. A NetScaler can be running multiple Gbps of traffic in a live environment, so collecting a trace with gigabytes’ worth of data is no good; most trace analysis tools die opening such a file and you will be asked to provide a smaller one. It is important to capture only the relevant flows in the trace file, otherwise one can spend multiple days going over the capture. The utility here helps you collect the right trace file; you just need to choose the correct options (a CLI sketch using these options follows the list below).

  • Packet Size: always change the default packet size of 164 bytes to the size you need, or set it to 0 to capture complete packets as seen on the wire

  • Duration: this is very important; keep the capture short, since short-duration files make for efficient analysis

  • Trace file format: this lets you toggle between “nstrace” and “tcpdump”. Our guideline is to collect in “nstrace” format so that the NetScaler-specific information stays intact

  • Filter Expression: this is of huge importance because it lets you pinpoint the core problem area. You can add expressions to do the filtering. The expression qualifiers can be:
    • Source IP
    • Source Port
    • Destination IP
    • Destination Port
    • IP Address
    • Port
    • Service Type
    • Idle time
    • State
    • Service name
    • Virtual server name
    • Connection ID
    • Interface
    • VLAN

That is a whole lot of options to make the filter much more effective, and you can combine them with Boolean operators. There is an additional option for tracing the filtered connection’s peer traffic. This is very useful when you specify only a source IP as the filter but also want to see the server-side connections that correspond to the client connections originating from that source IP.

  • Capturing Modes: there are a bunch of capturing modes which further help you collect a trace that is specific to the problem scenario.
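
For reference, here is roughly how these options map onto the CLI if you like to script your captures. This is only a sketch; the parameter names and the filter expression syntax differ across NetScaler releases, so check the built-in help for “start nstrace” on your build before relying on it:

    # Full packets (-size 0), 60 seconds per trace file (-time 60),
    # filtered to one client IP (an example address), and also capturing
    # the peer (server-side) traffic of the filtered connections (-link)
    > start nstrace -size 0 -time 60 -filter "SOURCEIP == 10.0.0.50" -link ENABLED

    # Check that the capture is running, and stop it when you are done
    > show nstrace
    > stop nstrace

The resulting files land under “/var/nstrace/” and can be downloaded from the same Diagnostics page.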

 

This section also provides options to download the trace and core files individually. The other interesting option it provides is to run a “Back Trace” on a live system.

You need to provide the kernel and core files so the gdb utility can be run to collect the back trace. This also comes in handy when uploading a huge core file would take too long; looking at a plain back trace can give engineering early clues about the nature of the crash.
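
Under the hood this is conceptually nothing more than pointing gdb at the core, roughly as below from the NetScaler shell. The binary and core file names here are purely illustrative (they differ by release and by which process crashed), so substitute whatever you actually find under “/var/core/”:

    # Open the core against the matching binary and print the back trace
    # (binary and core names below are placeholders, not exact paths)
    root@ns# gdb /path/to/crashed-binary /var/core/<core-file>
    (gdb) bt            # back trace of the crashing thread
    (gdb) info threads  # list the other threads, if any
    (gdb) quit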

Overall, these are a bunch of rich tools and utilities provided to make your job simpler 🙂