In this third article of the series, we look at how a packet trace can help you identify what your automated scanning tool is looking for and how it identifies a particular vulnerability on your system. While not always definitive, this method can provide useful insight into what the tool is requesting and how the system under test is responding.

This is part three of a series. You can find the previous articles here and here.

HTTP Request/Response Transcripts & Packet Traces:

Once running and publicly visible services have been identified, it is common for scanning tools to send a series of relevant packets, often HTTP requests, to a server listening on the open port. How the server responds to these packets determines how the tool will report on the vulnerabilities affecting the service. If the tool receives a response that differs from the one it expects for a pass, it reports the test as failed and records a positive hit for that vulnerability on that server.
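To make this concrete, here is a minimal sketch of the kind of probe a scanner might send. The port, path and request shape are purely illustrative (not any particular tool's behaviour): we serve an empty temporary directory locally so the request has something real to hit, then send a raw HTTP GET and read back the status line, just as a scanner would.

```python
import functools
import http.server
import socket
import tempfile
import threading

# Serve an empty temporary directory on a random local port, standing in
# for the service the scanner has discovered.
docroot = tempfile.mkdtemp()
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=docroot)
server = http.server.HTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Send a raw HTTP request for a file we know is absent and read the
# server's status line, as a scanner probe would.
with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(b"GET /foo.jsp HTTP/1.1\r\n"
              b"Host: 127.0.0.1\r\n"
              b"Connection: close\r\n\r\n")
    status_line = s.makefile("rb").readline().decode().strip()

print(status_line)  # the docroot is empty, so this is a 404 status line
server.shutdown()
```

A real tool sends many such probes and compares each response against the one it expects; everything that follows hinges on how that comparison is configured.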

In some cases, this behaviour is correct (i.e. the scanner is testing for a specific response and all other responses genuinely are indicators of a vulnerability), but in others the scanner is testing for a specific response while other responses do not necessarily indicate a vulnerability. This latter case is what produces false positives in the tool's reports.

For example, if the tool discovers a service on TCP port 80, it may send an HTTP request for a specific page or file (e.g. foo.jsp), expecting to receive an HTTP 404 error back if the file does not exist on the server. Any other HTTP response could cause the tool to report a fail and mark the service as vulnerable to the issue. If the web server responds with another status, say a 403 (Forbidden) or a 302 (Found), then, unless the tool has been configured to handle these cases, it could incorrectly report a fail and therefore a vulnerability: a false positive. With tests like these, it is possible to use tools like Wireshark or tcpdump to follow the HTTP conversation and compare it to the test in order to identify the cause of the failure.
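The pass/fail logic behind that false positive can be sketched in a few lines. This is a hypothetical simplification, not any real scanner's code: a naive check treats anything other than the expected 404 as a finding, while a more careful check treats 403 and 302 as inconclusive and in need of review.

```python
EXPECTED = 404  # the response the tool expects when the file is absent

def evaluate_naive(status: int) -> str:
    # Any response other than the expected one is reported as a finding,
    # which is exactly how the false positive described above arises.
    return "pass" if status == EXPECTED else "vulnerable"

def evaluate_careful(status: int) -> str:
    if status == EXPECTED:
        return "pass"
    if status in (403, 302):
        # Forbidden/redirect: the file's absence was never actually proven,
        # so a human (or a packet trace) should confirm before reporting.
        return "needs review"
    return "vulnerable"

print(evaluate_naive(403))    # vulnerable  (a false positive)
print(evaluate_careful(403))  # needs review
```

The difference between the two functions is precisely the configuration gap that a packet trace helps you spot.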

By default, Wireshark captures all traffic on a specific network interface, so a large amount of traffic (and therefore a large number of packets) may be captured by the application. One trick to minimising the number of packets captured (and therefore the amount of searching required to find the right ones!) is to start the capture just before the test begins and to stop it as soon as possible after the test has finished. Wireshark has a built-in ‘Follow TCP Stream’ option and, once a packet from the right conversation has been found, this can be used to strip all other traffic from the view. The screenshot below shows the Wireshark window after this filter has been applied.


Wireshark will also open a separate window that shows an ASCII representation of the conversation, providing a much easier way to follow the communication between the client (in red) and the server (in purple).
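A toy version of what ‘Follow TCP Stream’ displays is simply the payloads of one conversation interleaved in capture order. The packets below are invented for illustration; in practice they would come from the capture itself.

```python
# Each entry is (direction, raw TCP payload) for a single conversation,
# in the order the packets were captured. These values are made up.
packets = [
    ("client", b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n"),
    ("server", b"HTTP/1.1 200 OK\r\nServer: Apache/2.4.41 (Ubuntu)\r\n\r\n"),
]

# Interleave the payloads into a readable transcript, tagging each line
# with its direction (Wireshark uses colour for the same purpose).
transcript = []
for direction, payload in packets:
    prefix = "C> " if direction == "client" else "S> "
    for line in payload.decode(errors="replace").splitlines():
        transcript.append(prefix + line)

print("\n".join(transcript))
```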


The screenshot above shows the client’s initial HTTP GET request to the test server for /index.html. The server responds with an HTTP 200 OK and some additional information about the webserver, including its type and version number. So far, so good. A scanner looking specifically for this file on the webserver under test will report a valid result: a true positive.

Alternatively, if the client requests a file (in the case of the example below, foo.html) that does not exist on the server, the server responds with an HTTP error 404 telling the client that the file is Not Found.


In some cases, the test is designed to pass if the file is not found, and the positive test case for a pass is the tool seeing the error 404 that the webserver responds with. If the server replies with anything else, such as a 403 Forbidden (used by a webserver on which authentication has been set up), the tool could interpret the result as a failure of the test case and report that the vulnerability is present on the server. This information is often not displayed by the tool, so looking under the covers in this way can be very useful in identifying the cause of a failure.
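Looking under the covers can also be done by replaying the request yourself. The sketch below (all names hypothetical) stands up a local server that answers 403 to everything, standing in for a webserver with authentication enabled, and sends it the same request the scanner would have sent.

```python
import http.client
import http.server
import threading

class ForbiddenHandler(http.server.BaseHTTPRequestHandler):
    """Answers 403 to every GET, mimicking a server with auth enabled."""

    def do_GET(self):
        self.send_error(403)  # Forbidden: says nothing about whether the file exists

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), ForbiddenHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Replay the scanner's request by hand and inspect the status ourselves.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/foo.html")
status = conn.getresponse().status
print(status)  # 403: a tool expecting only a 404 would wrongly report a vulnerability
server.shutdown()
```

Seeing the 403 directly, rather than the tool's opaque "fail", makes it clear the finding is a false positive caused by authentication, not a missing-file check that genuinely failed.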

I hope you found this series useful. This is only a brief introduction to how automated scanning tools work and how their results can be validated. Look out for more articles with more tips in the future.
