CCIE or Null!

My journey to CCIE!

Packet Flow with FirePower.

As I was going through some CiscoLive365 sessions (remember, CiscoLive365 is great!) this past weekend, I came across the slides for BRKSEC-2028 – Deploying Next Generation Firewall with ASA & Firepower Services. Unfortunately there is no video for this session yet, but the presentation slides are there, and luckily they are detailed enough that you can easily follow along with the content. One of the slides that stood out to me was where the new FirePower module (hardware or software) falls in the order of operations as traffic passes through the ASA. Screenshot below:

SourceFire Packet Flow

I think the big call-outs here are:

  1. The FirePower module will not actually drop traffic itself; traffic is simply ‘marked’ if it is to be dropped. All the traffic that passes to the FirePower module gets handed right back to the ASA, and it is the responsibility of the Cisco ASA to actually drop the traffic.
  2. Even existing connections still get inspected if the security policy demands it.
  3. ACLs and xlate entries will filter traffic before it even makes it to the FirePower module.
  4. In terms of the flow path, this is only slightly different from how the existing IPS module inspects traffic from the ASA.

Definitely some good information to know when building out your new policies.
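For reference, the hand-off from the ASA to the module is just a Modular Policy Framework service policy. A minimal sketch of the redirect is below (the ACL and class-map names are my own placeholders, and fail-open vs. fail-close is a design decision for your environment):

    ! Match the traffic you want the FirePower (SFR) module to inspect
    access-list SFR_REDIRECT extended permit ip any any
    !
    class-map SFR_CLASS
     match access-list SFR_REDIRECT
    !
    ! Hand matching flows to the module; fail-open keeps forwarding traffic if the module goes down
    policy-map global_policy
     class SFR_CLASS
      sfr fail-open
    !
    service-policy global_policy global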

Written by Stephen J. Occhiogrosso

December 10, 2014 at 9:00 AM

Wireshark tid-bit: Quickly gathering the contents of a PCAP.

I don’t know about you, but when I find myself performing packet captures and analyzing PCAPs I usually only know the symptoms of the issue I am attempting to troubleshoot: connection timeouts, slow response, long transfer times, etc. I usually don’t know much more than that; only on rare occasions do I get a heads-up and insight into the behavior of the application I am trying to troubleshoot. For all the other situations I need to rely on the PCAPs and interpret what the applications are communicating and how, whether the application is behaving properly and performing as it should, or whether there is indeed something amiss somewhere.

Now for me the easiest way to do this is by reviewing the ‘Summary’ page under the ‘Statistics’ menu. A sample summary page is below:

Statistics

A few great call-outs from this screen:

  • Packet Size Limit – It is important to know whether the packets within the capture were sliced after the first so many bytes, as you might not see the entire TCP header, or Wireshark may start classifying the packets as malformed. You will also see a ‘Truncated’ message within the packet indicating it was sliced.
  • First Packet, Last Packet, & Elapsed Time – Matching up the time of a packet capture with when the particular issue occurred is crucial; after all, you don’t want to find yourself analyzing the wrong capture. The elapsed time is also worth noting, as it gives you the ability to establish a baseline: knowing how long a process takes can help you identify an issue or confirm expected behavior in the future.
  • Avg. Packet Size – Depending on what you are trying to troubleshoot, the average packet size can be a quick indicator of whether or not you’re fully using the MTU on your network. If you are troubleshooting data transfers, you would normally expect the average packet to be quite large. If you see exceptionally small packet sizes, data transfers may take a lengthy amount of time due to the increased TCP overhead and normal L3 forwarding. The same goes for the Avg Mbit/sec: with large packets flowing you can expect to see a higher throughput rate, and the opposite with smaller packet sizes.

The next spot worth checking out is the ‘Conversations’ window, which is also found under ‘Statistics’. This quaint little window gives you a brief overview of the source/destination devices identified within the capture, from an L2 Ethernet perspective up to an L4 TCP/UDP perspective, allowing you to see what endpoints are really involved in the communication along with how much data was sent, the duration of the connection, etc. It’s not completely unheard of for an application to communicate with other devices (web servers, DB servers, file servers, other app servers) to perform whatever task it is trying to perform, and it is very possible that one of those third servers is what is slowing down the process.

Wireshark Conversation

So by using these two windows in Wireshark you’ve identified the following:

  1. The length of time the process takes. – Found in the elapsed time of the capture, as long as the entire process was captured, that is.
  2. The endpoints involved in this communication. – Remember, it is important to cut down as much background noise as possible.
  3. How much data is transferred, and at what size & rate. – This can be very helpful when working on data transfers.
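If you ever need this same information without opening the GUI, the command-line tools that ship with Wireshark can produce it as well. A quick sketch, assuming the capture was saved as capture.pcap:

    # capinfos prints the file summary: first/last packet times, elapsed time,
    # average packet size, and average data rate
    capinfos capture.pcap

    # tshark can dump the same conversation statistics you see under Statistics > Conversations
    tshark -r capture.pcap -q -z conv,tcp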

Written by Stephen J. Occhiogrosso

December 8, 2014 at 9:00 AM

Wireshark Tid-bit: What does the IP Identification field tell us?

There are many different fields in the various headers we get to examine during packet analysis, and one of the most overlooked is the IP Identification field. This simple 16-bit field is displayed in hex and has a few different uses; most importantly it:

  1. Identifies fragmented packets.
  2. Identifies the individual packets that the sender transmits.

How does this help us?

  • Well, by reviewing the IP Identification numbers you can easily identify which packet was dropped in a conversation by comparing the packet captures from two different capture points.
  • This field can also give us a glimpse at how busy the end devices are. The IP Identification field increases by ‘1’ for every packet the sender transmits. Remember, the IP ID value is specific to each individual host and not to a specific conversation. If you are following a specific conversation you may see consecutive IP ID #’s, or you could see large jumps between them. Depending on the numbers, this could tell us whether the end devices are overloaded or under-utilized, and depending on the situation that could point us to a smoking gun.
  • If a packet gets fragmented, the fragments will have the same IP ID number, and the Fragment Offset field will also be set. This is helpful in following a conversation across particular link changes.
  • Seeing the same IP ID #’s in the same packet capture could also identify switching or routing loops within our network. The IP ID #’s will always increase, so seeing duplicate numbers means we are seeing the same packet more than once. The first thing you want to do is verify your capture point is functioning properly and make sure it is in the right spot. Once you have verified that, it’s time to go hunting for the loop.

Quick Example:

IP Ident-2

By reviewing the IP ID numbers of the packets, what can we tell about this conversation with Wireshark.org?

  • All the IP ID #’s are unique – no routing/switching loops.
  • The IP ID #’s are fairly consecutive on both sides of the conversation, showing that neither endpoint is highly utilized at this point in time. In fact, there are one or two gaps on the 192.168.1.4 side of the conversation, showing that endpoint is a little busier than 162.159.241.165.
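If you would rather pull the IP ID values out in bulk than click through packets one at a time, tshark can extract the field directly. A quick sketch, assuming the capture is saved as capture.pcap:

    # Dump source, destination, and IP ID for every packet
    tshark -r capture.pcap -T fields -e ip.src -e ip.dst -e ip.id

    # Print only the IP ID values that appear more than once
    # (a possible loop, or a capture point seeing the same packet twice)
    tshark -r capture.pcap -T fields -e ip.id | sort | uniq -d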

Written by Stephen J. Occhiogrosso

December 2, 2014 at 9:00 AM

Wireshark tid-bit: Packets larger than the MTU size.. why, how?

Every so often when I was doing some packet analysis I would come across systems that appeared to be sending packets larger than the Ethernet MTU of the segment. Or so I thought; eventually I figured out why I was seeing packets with an increased packet size.

The answer was large segment/send offload (LSO) – when this feature is enabled, it is the responsibility of the NIC hardware to chop up the data, ensuring it conforms to the MTU of the media/network segment.

LSO

 

Now that we know why we are seeing these large packets, the next part of the question is how we are seeing them in Wireshark. Well, Wireshark relies on WinPcap or libpcap depending on your platform, and these two libraries capture the packets just before they hit the NIC and get transferred to the actual network.

WinPcap

 

The above image is from Winpcap.org, showing the kernel-level NPF sitting just above the NIC driver, which explains how Wireshark is able to see the larger traffic: it is captured before it hits the NIC driver and gets segmented by the NIC's LSO capabilities.

Winpcap/Libpcap Architecture

Winpcap.org – Winpcap Internals
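If you want to confirm whether offload is the culprit on the capturing host, a Linux machine can show and toggle it with ethtool. A quick sketch, assuming the interface is named eth0 (Windows exposes the equivalent settings under the NIC driver's advanced properties):

    # Check the current offload settings
    # (look for tcp-segmentation-offload / generic-segmentation-offload)
    ethtool -k eth0 | grep -i segmentation

    # Temporarily disable TSO/GSO so the capture reflects what actually goes on the wire
    ethtool -K eth0 tso off gso off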

 

 

Written by Stephen J. Occhiogrosso

November 25, 2014 at 9:00 AM

But I’ve got an ‘Excellent Signal’!!?

Every so often I find myself troubleshooting some type of wireless-related issue, and wireless issues vary from:

  • Slow performance
  • Clients can’t connect
  • Poor voice performance
  • Or even random disconnects, the list is endless.

However, one of the common things I hear during the troubleshooting process is, without a doubt, something along the lines of:

“But it says I have an excellent signal with five bars!”

ExcellentSignal

And… my favorite question in response to that statement is:

“What is your data rate?” (usually with this same expression)

Data rate2

 

Signal strength is only a small piece of the puzzle when determining whether or not you have a good-quality connection. The signal strength indicator itself can even be misleading: just because a client is registering ‘5 bars’ with a good RSSI and SNR does not necessarily mean the AP on the other end of the connection is seeing a similar RSSI & SNR from the WLAN client. Do I hear a transmit power mismatch, or a highly reflective RF environment?

Nowadays WLAN clients come in all shapes and sizes (phones, tablets, wireless scanners, VoIP handsets); long gone are the days of wireless being just for laptops. With this wide array of hardware, you can guarantee each of these devices has a wireless transmitter with different specifications, and while it is impossible to take into account every WLAN client, the client audience should be considered when designing a WLAN or deploying APs.

Consider an access point transmitting at its max power rating; you can guarantee the wireless phone or VoIP handset does not have that same power level. It’s like two people trying to communicate with each other across a football field when only one person has a megaphone. The guy without the megaphone will probably need to repeat himself a few times for the other person to understand him (think of that as data retries).

One of the better ways to identify a proper wireless connection is to verify the data rate and review the data rate statistics. Many WLAN client software packages have this functionality, telling us what percentage of the data was transmitted/received at a specific data rate. Shifting data rates is common in a WLAN, but seeing 90% of data operating at the 1, 2, or 5.5 Mbps data rates is just poor performance.
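As a quick example of where to look, the infrastructure side can also confirm what the client is really negotiating. A sketch below, with a placeholder MAC address and interface name (the exact output will vary by platform and software version):

    ! On a Cisco WLC (AireOS), the client detail output includes the current data rate
    show client detail aa:bb:cc:dd:ee:ff

    # On a Linux client, iw reports the bitrate negotiated for the current association
    iw dev wlan0 link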

A while back I posted about Understanding a wireless connection, and I wanted to dive a bit deeper and expand on the concept (albeit years later, but hey, better late than never, right?).

Written by Stephen J. Occhiogrosso

November 17, 2014 at 9:00 AM

End-Of-Sale date announced for various Cisco IPS’s

Since Cisco started announcing the Sourcefire FirePower (hardware & software) modules earlier this year, I have been wondering what was going to happen to their existing IPS line. It looks like the End-of-Sale announcement was recently made, with an EoS date in April of next year.

IPS EoX Announce

The EoX announcement affects both the IPS modules and the IPS 4xxx sensor platforms. IOS IPS will remain available, but I do wonder for how long.

This will be an interesting shift for those of us that have used the original Cisco IPS software for a long time now. As we know, there is more that goes into this than just buying a module or an IPS appliance: we will also need a central management server to quickly and easily manage these types of devices. Monitoring signature updates and keeping signatures in sync company-wide can be a massive problem if you don’t have it under your thumb. So far I have not seen any support added for SourceFire in Cisco Security Manager, so FireSIGHT will be the way to go.

I wonder if anything will happen to the CX Modules, since they ran their own Next-Generation IPS Signature set. Time will tell I suppose.

The full document can be found here from Cisco.com.

Written by Stephen J. Occhiogrosso

November 10, 2014 at 10:00 AM

Monitoring OTV – Overlay Transport Virtualization

If there is anything I find more enjoyable than doing some type of network design or writing on a whiteboard, it’s thinking about network management and creating some new alert or poller that lets me know when something changes that shouldn’t. It would seem that over the last few years data center technologies have really become popular:

  • Leaf/Spine models – A scalable growth model
  • Virtual Switches – VMWare NSX / Cisco Nexus 1000v, ASA1000v, VSG, etc
  • vPC – Active/Active forwarding to trick spanning-tree
  • OTV – another way to extend Layer-2 connectivity across physical locations

And one of the thoughts I have in the back of my mind:

Yeah, this stuff is great, but I can’t monitor it natively in my management systems. How am I going to know when something goes wrong?

Now, from here I am going to focus on Overlay Transport Virtualization (OTV).

Luckily, Cisco publishes a lot of their MIB structures. For example, we can find the OTV MIB file here; it gives us a lot of information and tells us what is exposed via SNMP.

The first thing this tells us is where to find the OTV information:

1.3.6.1.4.1.9.9.810

One thing I want to point out is the fact that I was unable to find this using a MIB browser; “810” just was not listed. In order to find it I had to download all the SNMP MIB tables from one of my OTV VDCs. At this point I am not sure if it was an issue with the SNMP MIB browser I was using or if it is an NX-OS thing.

Now, we will want to take a deeper look at what CISCO-OTV-MIB can tell us. We can find this info here.

That might look a little intimidating, and it may seem like a lot of reading, but luckily it is to the point and tells us the following:

  • What OIDs (Object Identifiers) are available
  • What each OID is, with a proper description
  • The OID syntax or variable type (i.e. Gauge, Integer, String, etc.)
  • What possible values the OIDs can return and the meaning of those returned values.

For example:

OTV-MIB-Overlay-State

The above snippet from the CISCO-OTV-MIB structure tells us that cotvOverlayVpnState is an Integer and will return a value of 0, 1, or 2, where:

  • 0 = Other
  • 1 = Down
  • 2 = Up

So this tells us that if this specific OID returns a value of 1, we have a problem: our OTV Overlay interface is down.
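As a quick sketch of what that poll could look like from a Net-SNMP management station (the community string and management IP below are placeholders), you can walk the subtree numerically, or by name once the CISCO-OTV-MIB has been loaded:

    # Walk the whole OTV subtree numerically
    snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.4.1.9.9.810

    # Or, with the CISCO-OTV-MIB loaded locally, poll the overlay state by name
    snmpwalk -v2c -c public -m +CISCO-OTV-MIB 192.0.2.10 cotvOverlayVpnState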

We can even go a step further and poll an additional OID to identify the reason the Overlay interface is down.

OTV-MIB-Overlay-Down

cotvOverlayVpnDownReason is another Integer; it will return a value of 0–19 depending on the specific reason the Overlay is down, and the MIB file goes further and describes each of the down reasons. (Small snippet below)

OTV-MIB-Overlay-Down-Description

 

Now, by polling these specific OIDs in your Network Management Server (NMS) you can much more easily get the status of your OTV Overlay interface, and if the Overlay does happen to be down you can find out why just as quickly.

In the event you have multiple OTV Overlays running, you can poll this information on an overlay-by-overlay basis. If we go up a level in the MIB file, we’ll notice it contains a sequence for each OTV Overlay:

OTV-MIB-Overlay-Sequence

And within each sequence is where we can find a lot of the information we probably need:

  • The name of the Overlay
  • The state / status of the Overlay
  • The reason the Overlay is down
  • Much more

Now that we know how to monitor the Overlay interface, that really only solves half the problem in my mind. We may also want to monitor OTV adjacencies or AED status; that information is also contained in the MIB file, so the additional data can be polled the same way.

Another important thing that may be worth monitoring, depending on your deployment, is multicast. I’ll cover multicast monitoring in a future post, but if you’re running OTV in multicast mode and you start losing PIM neighbors or mroutes start getting pruned, you may definitely have some issues going on.

Written by Stephen J. Occhiogrosso

November 6, 2014 at 10:00 AM
