CCIE or Null!

My journey to CCIE!

Archive for the ‘Packet Analysis’ Category

Wireshark tid-bit: Quickly gathering the contents of a PCAP.


I don’t know about you, but when I find myself performing packet captures and analyzing PCAPs I usually only know the symptoms of the issue I am attempting to troubleshoot: connection timeouts, slow response, long transfer times, etc. I usually don’t know much more than that; only on rare occasions do I get a heads-up and insight into the behavior of the application I am trying to troubleshoot. For all the other situations I need to rely on the PCAPs and interpret what and how the applications are communicating, whether the application is behaving properly and performing as it should, or whether there is indeed something amiss somewhere.

Now for me the easiest way to do this is by reviewing the ‘Summary’ page under the ‘Statistics’ menu. A sample summary page is below:

Statistics

A few great call-outs from this screen:

  • Packet Size Limit – Knowing whether the packets within the capture were sliced after the first so many bytes is important, as sometimes you might not see the entire TCP header, or Wireshark will start classifying the packets as malformed. You will also see a ‘Truncated’ message within the packet indicating the packet was sliced.
  • First Packet, Last Packet, & Elapsed time – Matching up the time of a packet capture with when the particular issue occurred is crucial; after all, you don’t want to find yourself analyzing the wrong capture. The elapsed time is also worth noting, as it gives you the ability to establish a baseline: knowing how long a process takes can help you identify an issue, or recognize expected behavior, in the future.
  • Avg. Packet Size – Depending on what you are trying to troubleshoot, the average packet size can be a quick indicator of whether or not you are fully using the MTU on your network. If you are troubleshooting data transfers you would normally expect the average packet to be quite large. If you see exceptionally small packet sizes, data transfers may take a long time due to the increased TCP overhead and normal L3 forwarding. The same goes for the Avg Mbit/sec: if you have large packets flowing you can expect to see a higher throughput rate, and the opposite for smaller packet sizes. (A quick scripted way to pull these numbers out of a capture is sketched just below this list.)
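If you prefer pulling those same numbers out with a script rather than the GUI, a rough sketch using scapy looks like this; the file name is just a placeholder, and scapy loads the whole capture into memory, so keep it to reasonably sized PCAPs:

    # Rough equivalent of Wireshark's Statistics > Summary window.
    from scapy.all import rdpcap

    pkts = rdpcap("capture.pcap")  # placeholder file name

    first = float(pkts[0].time)
    last = float(pkts[-1].time)
    elapsed = last - first
    total_bytes = sum(len(p) for p in pkts)

    print("Packets:         %d" % len(pkts))
    print("Elapsed time:    %.3f sec" % elapsed)
    print("Avg packet size: %.1f bytes" % (total_bytes / len(pkts)))
    if elapsed > 0:
        print("Avg throughput:  %.3f Mbit/sec" % (total_bytes * 8 / elapsed / 1000000.0))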

The next spot worth checking out is the ‘Conversations’ window, which is also found under ‘Statistics’. This quaint little window gives you a brief overview of every source/destination pair identified within the capture, from an L2 Ethernet perspective up to an L4 TCP/UDP perspective, allowing you to see which endpoints are really involved in this communication, along with how much data was sent, the length of time of each connection, and so on. It’s not unheard of for applications to communicate with other devices (web servers, DB servers, file servers, other app servers) to perform whatever task they are trying to perform, and it is entirely possible that one of those third servers is what is slowing down the process.

Wireshark Conversation
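Along the same lines, here is a quick and dirty way to approximate the IPv4 conversations view outside the GUI; again just a sketch, with a placeholder file name:

    # Rough equivalent of Statistics > Conversations (IPv4 only).
    from collections import defaultdict
    from scapy.all import rdpcap, IP

    talkers = defaultdict(lambda: [0, 0])   # (src, dst) -> [packets, bytes]
    for p in rdpcap("capture.pcap"):        # placeholder file name
        if IP in p:
            key = (p[IP].src, p[IP].dst)
            talkers[key][0] += 1
            talkers[key][1] += len(p)

    for (src, dst), (count, size) in sorted(talkers.items(), key=lambda kv: -kv[1][1]):
        print("%s -> %s  %d packets, %d bytes" % (src, dst, count, size))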

So by using these two windows in Wireshark you’ve identified the following:

  1. The length of time the process takes. – Found in the elapsed time of the capture, as long as the entire process was captured, that is.
  2. The endpoints involved in this communication. – Remember it is important to cut down as much background noise as possible.
  3. How much data is transferred, and at what size & rate. – This can be very helpful when working on data transfers.

Written by Stephen J. Occhiogrosso

December 8, 2014 at 9:00 AM

Wireshark Tid-bit: What does the IP Identification field tell us?


There are many different fields in the various headers we get to examine during packet analysis, and one of the most overlooked fields is the IP Identification field. This simple 16-bit field is displayed in hex and has a few different uses, most importantly:

  1. Identifies fragmented packets.
  2. Identifies the individual packets that the sender transmits.

How does this help us?

  • Well, by reviewing the IP Identification numbers you can easily identify which packet was dropped in a conversation by comparing the packet captures from two different capture points.
  • This field can also give us a glimpse at how busy the end devices are. The IP Identification field will typically increase by ‘1’ for every packet the sender transmits. Remember the IP ID value is specific to each individual sender, not to a specific conversation. If you are following a specific conversation you may see consecutive IP ID #’s, or you could see large jumps between the IP ID #’s. Depending on the numbers, this could tell us whether the end device is overloaded or under-utilized, and depending on the situation that could point us to a smoking gun.
  • If a packet gets fragmented, the fragments will all carry the same IP ID number, and the Fragment Offset field will also be set. This is helpful in following a conversation across links with different MTUs.
  • Seeing the same IP ID #’s repeated in the same packet capture could also identify switching or routing loops within our network. The IP ID #’s should always increase; seeing duplicate numbers means we are seeing the same packet more than once. The first thing you want to do is verify your capture point is functioning properly and make sure it is in the right spot. Once you’ve verified that, it’s time to go hunting for the loop.

Quick Example:

IP Ident-2

By reviewing the IP ID numbers of the packets, what can we tell about this conversation with Wireshark.org?

  • All the IP ID #’s are unique, so no routing/switching loops.
  • The IP ID #’s are fairly consecutive on both sides of the conversation, showing neither endpoint is being highly utilized at this point in time. In fact there are one or two gaps on the 192.168.1.4 side of the conversation, showing that endpoint is a little busier than 162.159.241.165. (A quick way to script this kind of check is sketched below.)
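If you want to pull those IP ID values out of a capture yourself, something along these lines works; the file name is a placeholder and the host address is just the one from the example above:

    # Print the IP Identification value of every packet to or from one host,
    # so gaps or duplicates stand out.
    from scapy.all import rdpcap, IP

    for p in rdpcap("capture.pcap"):              # placeholder file name
        if IP in p and "192.168.1.4" in (p[IP].src, p[IP].dst):
            print("%s -> %s  IP ID = 0x%04x" % (p[IP].src, p[IP].dst, p[IP].id))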

Written by Stephen J. Occhiogrosso

December 2, 2014 at 9:00 AM

Wireshark tid-bit: Packets larger than the MTU size.. why, how?


Every so often while doing packet analysis I would come across systems that appeared to be sending packets larger than the Ethernet MTU of the segment. Or so I thought; eventually I figured out why I was seeing packets with an increased packet size.

The answer was large segment/send offload (LSO). When this feature is enabled, it is the responsibility of the NIC hardware to chop up the data, ensuring it conforms to the MTU of the media/network segment.

LSO

 

Now that we know why we are seeing these large packets, the next part of the question is how we are seeing them in Wireshark. Well, Wireshark relies on WinPcap or libpcap depending on your platform, and these libraries capture the packets just before they hit the NIC and get transferred onto the actual network.

WinPcap

 

The above image is from Winpcap.org, showing the kernel-level NPF sitting just above the NIC driver, which explains how Wireshark is able to see the larger traffic: it is captured before it hits the NIC driver and gets segmented by the card’s LSO capabilities.

Winpcap/Libpcap Architecture

Winpcap.org – Winpcap Internals
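If you want to confirm this is what you are looking at in your own capture, a quick check is to flag any frame larger than a standard Ethernet frame; this sketch assumes a 1500-byte MTU with no VLAN tag, and the file name is a placeholder:

    # Flag captured frames larger than a standard Ethernet frame (1500 MTU + 14-byte header).
    # Frames like this almost always come from the capturing host itself, before LSO slices them.
    from scapy.all import rdpcap

    MAX_FRAME = 1514
    for i, p in enumerate(rdpcap("capture.pcap")):   # placeholder file name
        if len(p) > MAX_FRAME:
            print("Frame %d is %d bytes - likely LSO on the sending host" % (i + 1, len(p)))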

 

 

Written by Stephen J. Occhiogrosso

November 25, 2014 at 9:00 AM

Use your router to perform packet captures.


For quite some time now Cisco routers have had a feature known as EPC (Embedded Packet Capture). This feature allows you to perform packet captures directly on the interfaces, making it one of the most useful features Cisco could implement (in my own opinion, of course). A few things you can do with this feature:

  1. Listen to VoIP calls to verify quality
  2. Verify packet markings
  3. Troubleshoot connectivity issues
  4. and so on.

The best part of this is the fact that the configuration is fairly simple, and you can begin capturing packets with just a few simple commands.

Here is a quick outline of the process:

  1. Create a buffer point
  2. Create a capture point
  3. Tie the buffer point & the capture point together
  4. Start the capture
  5. Stop the capture

Now to dive a little deeper!

The buffer point is the memory used to hold the captured packets, and as we all know Cisco devices have a finite amount of memory on them; we would not want a packet capture to completely use up that space and cause other issues, so this is where we limit what gets put in the buffer. On top of that, we can also set a filter based on an access-list in order to limit the packets we will capture, since in many cases we would not want to capture all the data traversing the router.

Buffer Point2

Buffer Point Options

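As a rough idea of what that looks like on the CLI (these are exec-mode commands, the exact options vary by IOS release, and the buffer name BUF, the host address, and the access-list number are just examples):

    ! From config mode: an optional ACL to limit what gets captured
    access-list 101 permit ip host 192.168.1.10 any
    access-list 101 permit ip any host 192.168.1.10
    !
    ! From privileged exec mode: a 2 MB circular buffer, keeping up to 1518 bytes per packet
    monitor capture buffer BUF size 2048 max-size 1518 circular
    monitor capture buffer BUF filter access-list 101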

Next we need to specify the capture point. This is where we specify the interface we want to capture on and the direction we want to capture in. One thing you will need to specify is whether you want to capture CEF or process-switched traffic, an option I think is just awesome (of course, most of the time you will capture CEF-switched packets), but it’s nice to have the choice. One other thing I want to throw out there: yes, you can capture on sub-interfaces.

Capture Point Setup


You have the option to capture both IPv4 & IPv6 traffic
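The capture point looks roughly like this; the capture point name CAP and the interface are placeholders, and the syntax again varies a bit by IOS release:

    ! IPv4, CEF-switched traffic, both directions on one interface
    monitor capture point ip cef CAP GigabitEthernet0/0 both
    !
    ! An IPv6 capture point is created the same way:
    ! monitor capture point ipv6 cef CAP6 GigabitEthernet0/0 both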

Now that we have created the capture point and the buffer point, we need to tie them together; this is done with the following command:

Associate
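The association is a single exec command tying the two names together (again, CAP and BUF are just the example names from above):

    monitor capture point associate CAP BUF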

All we need to do now is start and then stop the capture, with the following commands:

Capture Start & Stop

You can either manually stop the capture, or use one of the various options in the buffer point to trigger the capture to automatically stop.
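In rough terms, and keeping the same example names, that looks like this; the show commands are handy sanity checks while the capture is running:

    monitor capture point start CAP
    ! ... reproduce the issue while the capture runs ...
    monitor capture point stop CAP
    !
    ! Sanity checks along the way:
    show monitor capture buffer BUF parameters
    show monitor capture buffer BUF dump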

Once the capture is complete you will most likely want to export the capture off the router and to your own workstation for deeper analysis in Wireshark or Omnipeek. You can export the capture off box by using protocols like TFTP or FTP, or even export the file directly to flash and come back to it later.

Buffer export options

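The export itself is a one-liner; the TFTP server address and file name here are placeholders:

    monitor capture buffer BUF export tftp://192.168.1.50/epc-capture.pcap
    ! or keep it on the router for later:
    ! monitor capture buffer BUF export flash:epc-capture.pcap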

Just think about the doors this opens for troubleshooting those odd, random issues where you only wish you could capture the traffic for further analysis. Now you can!

Also think about the other cool things you can do with EPC:

  • Utilize EPC on multiple interfaces at the same time, with multiple capture & buffer points. (Check out this page from Cisco)
  • Use EPC in conjunction with kron jobs or EEM applets to perform on-demand packet captures.
  • Much more

Written by Stephen J. Occhiogrosso

February 4, 2014 at 12:25 PM

Let’s check out an IPv6 header.


I touched on the IPv6 addressing scheme a few weeks ago and I wanted to continue the trend with a few more IPv6-related posts, but that last IPS post piqued my interest, so I had to publish that one first. Now we know the addressing scheme is different in IPv6, but what about the packet format? Obviously the packet headers will be larger because the source and destination addresses within the header are now 128 bits, but let’s see what else we have in the IPv6 header:

IPv6 Packet Header

Now that doesn’t look too intimidating, right? I think it looks a little simpler compared to the IPv4 packet header. Let’s see what we have going on here (there is a small parsing sketch after the list):

  • Version: This field is also in the IPv4 packet and simply tells us what version of IP we are running. Since this is an IPv6 packet it is going to have a value of 6.
  • Traffic Class: This is the equivalent of the DiffServ/DSCP portion of the IPv4 packet, which carries the QoS markings of the packet. Just like in IPv4, the first 6 bits are designated for the DSCP value and the next 2 bits are for ECN (Explicit Congestion Notification) capable devices.
  • Flow Label: This field is 20 bits long and is defined in RFC 6437. I’ll admit finding information about the flow label is tough, but the RFC states this field can be used as a ‘hash’ that routing devices look at and make forwarding decisions on. Its intention is to support stateless ECMP (Equal Cost Multi-Path) or LAG mechanisms, but we will have to see how different vendors implement this feature. I’d take a guess that IPv6 CEF will use the flow label, but I’ll have to wait and see.
  • Payload Length: Specifies the size of the data payload (including any extension headers) following the fixed IPv6 header.
  • Next Header: This field is 8 bits and specifies the layer 4 transport protocol (or extension header) that follows the IPv6 header. These values are displayed in hex as well; you’ll notice ICMPv6 has a value of 0x3a. IPv6 uses the same protocol numbers that were used in IPv4. IANA’s list of protocol numbers can be found here.
  • Hop Limit: This is also an 8-bit field and replaces the TTL field that was in the IPv4 header. Each hop decrements the hop limit value by 1, and when the hop limit reaches zero the packet is discarded.
  • Source/Destination: This should go without saying, but these tell you the source IPv6 address of the packet and the destination IPv6 address the packet is destined to. As you would expect, both of these fields are 128 bits each.
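To make that layout concrete, here is a small sketch that unpacks those fields from the fixed 40-byte IPv6 header using nothing but the Python standard library; the sample header at the bottom is hand-built purely for illustration:

    # Unpack the fixed 40-byte IPv6 header: a 4-byte version/class/flow word,
    # then payload length, next header, hop limit, and the two 128-bit addresses.
    import struct
    import ipaddress

    def parse_ipv6_header(raw):
        vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", raw[:8])
        return {
            "version": vtf >> 28,
            "traffic_class": (vtf >> 20) & 0xFF,
            "flow_label": vtf & 0xFFFFF,
            "payload_length": payload_len,
            "next_header": next_header,          # 0x3a = ICMPv6
            "hop_limit": hop_limit,
            "source": ipaddress.IPv6Address(raw[8:24]),
            "destination": ipaddress.IPv6Address(raw[24:40]),
        }

    # Hand-built example: version 6, traffic class 0, flow label 0x12345,
    # 8-byte payload, next header ICMPv6 (0x3a), hop limit 64.
    hdr = struct.pack("!IHBB", (6 << 28) | 0x12345, 8, 0x3A, 64) \
          + ipaddress.IPv6Address("2001:db8::1").packed \
          + ipaddress.IPv6Address("2001:db8::2").packed
    print(parse_ipv6_header(hdr))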

So there is a snappy rundown of the IPv6 packet header. I think it is actually simpler than the IPv4 packet header, but don’t tell that to a Cisco router. Remember these packet headers are considerably larger than their IPv4 counterparts, so it takes more processing power to handle IPv6 packets; that is not a problem for the ISR G2’s we have today, but it is something you might want to keep in mind when running IPv6 on older hardware.

Now back to CCIE: R/S Labbing I go!

Written by Stephen J. Occhiogrosso

January 10, 2013 at 10:02 AM

Let’s look at Generic Routing Encapsulation (GRE)


Generic Routing Encapsulation: not much to say about it, it’s generic, right? Well, it’s pretty tough to find a network that does not utilize GRE tunnels in one way, shape, or form. Whether they are encrypted GRE tunnels or GRE tunnels riding over IPsec tunnels for branch connectivity, chances are GRE is being used somewhere in your network. After all, GRE provides us with various features that traditional IPsec tunnels do not, such as allowing multicast traffic to traverse the tunnel, which gives us scalability through the use of a routing protocol. It provides these features because of how it interacts with the IP packet; some people refer to GRE as a layer 3.5 protocol because of how it encapsulates the existing layer 3 IP packet.

Below we can see how GRE interacts with the IP packet:

Notice you can see two sets of IP addresses in that packet. The first set of IP addresses (10.0.0.1 & 10.0.0.2) are the tunnel addresses; the second set of addresses, under the GRE header, belongs to the real data that was sent through the tunnel, a simple ICMP echo request from 1.1.1.1 to 2.2.2.2.

And for a closer look at the fields within the GRE encapsulation:

Also notice the protocol type at the bottom: GRE can be used to encapsulate more than just IP packets. SNA, IPX, AppleTalk, and quite a few others can also be encapsulated with GRE.
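If you want to recreate that exact layering yourself, a quick scapy sketch of the packet described above looks like this (built by hand, not taken from a capture):

    # Build the example packet: an ICMP echo request from 1.1.1.1 to 2.2.2.2,
    # wrapped in GRE between tunnel endpoints 10.0.0.1 and 10.0.0.2.
    from scapy.all import IP, GRE, ICMP

    pkt = IP(src="10.0.0.1", dst="10.0.0.2") / GRE() / \
          IP(src="1.1.1.1", dst="2.2.2.2") / ICMP()

    pkt.show()   # displays the outer IP header, the GRE header, and the inner IP/ICMP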

More information can be found in RFC 2784, which explains the GRE header fields in greater detail than I covered here.

Now, as far as the configuration for a normal GRE tunnel goes, it is fairly minimal:

However, there are a few things that can be a little rough to get the hang of at first. First, unlike physical interfaces, you can declare tunnel interfaces from configuration mode and you can start at any number. Now, like any other interface, you will want to assign your tunnel its own IP address; since these GRE tunnels operate around layer 3, both tunnel interfaces will need IP addresses on the same subnet so they can communicate. The only other requirements for configuring a GRE tunnel are specifying the tunnel source and the tunnel destination. The tunnel source can be a physical interface or an IP address; just keep in mind the tunnel source needs to be local to the router. The tunnel destination is going to be the address of the remote router you are terminating the tunnel to. You will want to make sure the configured tunnel source and tunnel destination have connectivity to each other, since it is between these addresses that the GRE tunnel will run. A minimal sketch is below.
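For reference, a minimal two-router sketch looks something like this; the addresses and interface names are just examples, and each side mirrors the other:

    ! R1
    interface Tunnel0
     ip address 10.0.0.1 255.255.255.252
     tunnel source FastEthernet0/0        ! or the local interface's IP address
     tunnel destination 192.0.2.2         ! a reachable address on R2
    !
    ! R2
    interface Tunnel0
     ip address 10.0.0.2 255.255.255.252
     tunnel source FastEthernet0/0
     tunnel destination 192.0.2.1         ! a reachable address on R1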

There you go, GRE in a nutshell!

Written by Stephen J. Occhiogrosso

February 6, 2012 at 7:52 AM