Archive for the ‘Troubleshooting’ Category
By default syslog runs over UDP port 514, and UDP, as we all know, is unreliable. Now let's say you have a couple of core devices and you want to ensure the syslog messages from these devices successfully arrive at your syslog server or NMS. In that case, I would say your best bet is to configure syslog to use TCP to send syslog messages to that destination. This is conveniently configured in one simple line, along these lines:
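logging host 10.1.1.1 transport tcp port 8080

(Standard IOS logging host syntax; substitute your own syslog server address and port.)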
So now syslog messages to 10.1.1.1 will be handled by TCP over port 8080. With that said, you can specify a different port number if you see it necessary; just remember to configure your syslog server to accept the syslog messages on the specific port you configure.
You can also use this configuration to change the default UDP port used by syslog if your environment calls for it. So instead of the default UDP port 514, syslog messages will be sent over UDP port 5514 in this example:
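logging host 10.1.1.1 transport udp port 5514

(Same command and same example server as above; only the transport and port change.)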
A while back I mentioned it is possible to debug a single IPsec tunnel using crypto conditions; this functionality also extends outside of crypto conditions. IOS routers offer the functionality to create debug conditions and limit debug output to specific interfaces, IP addresses, and more. See the following list:
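Some of the more common options (the exact list varies by IOS version and feature set):

debug condition interface %InterfaceType/Number%
debug condition ip %IpAddress%
debug condition mac-address %MacAddress%
debug condition username %Username%
debug condition called %DialString%
debug condition calling %DialString%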
You can also stack these debug conditions on top of each other:
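Router#debug condition interface fa0/0
Condition 1 set
Router#debug condition ip 192.168.1.254
Condition 2 set

(Representative output; the exact wording may differ slightly between IOS releases.)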
You will see that as you specify additional conditions, it tells you how many conditions have been set. Issuing the command show debug condition will display a list of the debug conditions that have been set.
One thing you will want to keep in mind: when you have finished troubleshooting your problem, you should remove these debug conditions, or else they will affect any other debugs you run in the future. You can remove these debug conditions by issuing the command no debug condition. Also keep in mind that you have the ability to remove the debug conditions in any order. Using the above configuration as an example, let's say you want to keep seeing debug messages pertaining to IP address 192.168.1.254, but for any interface rather than just interface fa0/0. You can issue the command no debug condition 1 or no debug condition interface fa0/0, depending on which method you find simpler. For example:
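Router#no debug condition interface fa0/0

Afterwards, show debug condition should list only the remaining IP condition. (A rough sketch; exact prompts and confirmation messages vary by IOS version.)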
While conditional debugging is a very powerful tool when troubleshooting, you will still want to remember that debugging can be very processor intensive and might still hinder performance on a production router. After all, the debug is still running; the debug conditions just limit the messages you will see.
The fact that Windows tracert operates differently from the traceroute command in Cisco devices will almost always lead to an interesting discussion, mainly because many people are not aware of the real differences between these two utilities; they are merely aware of the spelling difference. And how could you blame them? Both commands give you very similar results and perform the same function.
Let’s look at a tracert from a Microsoft Windows workstation:
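C:\>tracert 10.1.1.1

Tracing route to 10.1.1.1 over a maximum of 30 hops

  1    <1 ms    <1 ms    <1 ms  192.168.1.1
  2     2 ms     1 ms     2 ms  172.16.0.1
  3     5 ms     4 ms     4 ms  10.1.1.1

Trace complete.

(Illustrative output with made-up hops; 10.1.1.1 here is just a stand-in destination.)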
What to note here is that the Windows tracert utility relies on ICMP echo requests. (So this is nothing more than an extension of a simple ping.)
The other key difference between Windows and Cisco is that when the destination is reached, it replies back with an ICMP echo reply:
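Internet Protocol, Src: 10.1.1.1, Dst: 192.168.1.25
Internet Control Message Protocol
    Type: 0 (Echo (ping) reply)

(A mocked-up capture summary; 192.168.1.25 stands in for the workstation running tracert.)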
Now let's look at the traceroute utility from a Cisco device:
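Router#traceroute 10.1.1.1

Type escape sequence to abort.
Tracing the route to 10.1.1.1

  1 192.168.1.1 0 msec 0 msec 4 msec
  2 172.16.0.1 4 msec 4 msec 4 msec
  3 10.1.1.1 8 msec 4 msec 8 msec

(Again, made-up hops against the same placeholder destination.)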
Now the packets:
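Internet Protocol, Src: 192.168.1.2, Dst: 10.1.1.1
User Datagram Protocol, Src Port: 49683, Dst Port: 33434

(Mocked-up capture summary; 192.168.1.2 stands in for the router, and the source port is just a random high port.)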
What we see here is a UDP packet with a destination port of 33434 (The source port is almost always random), not an ICMP echo packet.
Now with Cisco, the destination will not answer back with an echo reply but, surprisingly enough, with a destination unreachable packet (an ICMP port unreachable, to be exact). See below (notice the source of the IP packet; it's the destination of our traceroute):
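Internet Protocol, Src: 10.1.1.1, Dst: 192.168.1.2
Internet Control Message Protocol
    Type: 3 (Destination unreachable)
    Code: 3 (Port unreachable)

(Mocked-up summary again; note the source is the traceroute destination.)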
So to recap: the Windows tracert utility relies on ICMP Type 8 (Echo Request) and Type 0 (Echo Reply) packets, while Cisco relies on a UDP probe packet with a destination port of 33434 and an ICMP Type 3 (Destination Unreachable) packet.
Apparently Cisco switches have the capability of performing TDR tests on cables to test cable length and pin-outs. This is something I stumbled upon while browsing through the Cisco Learning Network, and I just had to give it a whirl. After all, sooner or later you are going to run into a cable issue; it's just a matter of time. Whether it was not crimped or punched down correctly, or someone ignored the IEEE standard and made a cable too long, the bottom line is some user somewhere will not connect to the network, and then it will be up to you to troubleshoot why (and to fix it).
To test a cable, simply issue the following command from privileged mode, and include the interface in question: test cable-diagnostics tdr int %InterfaceType/Number%
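For example (representative Catalyst output; your interface numbering will differ):

Switch#test cable-diagnostics tdr interface gigabitEthernet 0/1
TDR test started on interface Gi0/1
A TDR test can take a few seconds to run on an interface
Use 'show cable-diagnostics tdr' to read the TDR results.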
After a few seconds, issue the following command to see the results: show cable-diagnostics tdr int %InterfaceType/Number%
The results are as follows:
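Switch#show cable-diagnostics tdr interface gigabitEthernet 0/1
TDR test last run on: March 01 10:19:20
Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- -----------
Gi0/1     100M  Pair A     13   +/- 4  meters Pair A      Normal
                Pair B     13   +/- 4  meters Pair B      Normal
                Pair C     13   +/- 4  meters Pair C      Normal
                Pair D     13   +/- 4  meters Pair D      Normal

(Representative output; the length, pairs, and timestamp here are made up.)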
Pretty straightforward: it tells you the interface the cable is plugged into, the speed of the connection, the local and remote pairs of the cable, the cable length (notice the "+/-" of a few meters), and finally the status.
Here is a description of the results from Cisco’s website (Found here):
This command is currently available on many different switch models: 2900s, 3500s, 3700s, and beyond (including both 10/100 Mb and 1 Gb ports). I would also like to mention that if you perform this test on a live port, communication is not interrupted at all. The cable length measurement is actually very accurate; don't let the "+/-" factor confuse you. I've run this on multiple cables, and the length calculation is right on the mark. This is definitely a nifty feature to keep in your back pocket when you run into those wired connectivity issues.
I figured I would write a post concerning some features built into most Cisco routers nowadays that can be lifesavers in identifying network congestion and who/what is causing it.
The first feature I want to mention is NetFlow. This nifty little feature will identify network traffic by protocol as well as determine how much throughput each protocol is using, giving you a clear view of the traffic traveling across your network. You configure it on a per-interface basis, specify the address you want the NetFlow information sent to, and also the port you want it sent out on. Port 2055 is the default port used by the SolarWinds NetFlow Analyzer (a free tool), which is what I am using in this case.
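A minimal configuration looks something like this (192.168.1.100 is just a placeholder for your collector's address):

interface FastEthernet0/0
 ip flow ingress
 ip flow egress
!
ip flow-export version 5
ip flow-export destination 192.168.1.100 2055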
You can issue the sh ip cache flow command to see the output. While this output can be daunting at first, it is actually fairly simple to understand once you realize what each column signifies. A nice shortcut for analyzing NetFlow is to find a free tool that will do it for you.
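Here is a trimmed-down sketch of the output (the counters and addresses are made up for illustration):

Router#sh ip cache flow
...
Protocol         Total    Flows   Packets Bytes  Packets Active(Sec) Idle(Sec)
--------         Flows     /Sec     /Flow  /Pkt     /Sec     /Flow     /Flow
TCP-WWW            412      0.1       12   678      1.9       3.1      10.2
TCP-other           88      0.0        5   214      0.2       2.4      12.5
UDP-DNS            203      0.0        1    67      0.1       0.4      15.3
...
SrcIf         SrcIPaddress    DstIf         DstIPaddress    Pr SrcP DstP  Pkts
Fa0/0         192.168.1.25    Fa0/1         10.1.1.50       06 0B95 0050    42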
There is more information displayed, but from this point on it looks almost identical to the sh ip flow top-talkers command shown below; the important thing here is the breakdown of the major protocols.
The next really cool feature is called Top Talkers. After you configure this, you can quickly see which end devices on your network are taking up the most bandwidth.
The configuration is as follows:
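ip flow-top-talkers
 top 8
 sort-by bytes

(Standard IOS top-talkers syntax; top 8 and sort-by bytes match the example described below.)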
A fairly straightforward configuration: first you enable top talkers, and then you configure the parameters you want. You can set top-talkers to sort by the amount of bytes from each end device or by the amount of packets. You can also configure the number of devices you want to see, anywhere from 1 device to 200 devices. I usually prefer to simply see the top 10 devices (well, 8 in this case).
You view the top talkers with the sh ip flow top-talkers command:
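Router#sh ip flow top-talkers

SrcIf         SrcIPaddress    DstIf         DstIPaddress    Pr SrcP DstP Bytes
Fa0/0         192.168.1.25    Fa0/1         10.1.1.50       06 0B95 0050  189K
Fa0/0         192.168.1.30    Fa0/1         10.1.1.50       06 0C01 01BB   98K
Fa0/0         192.168.1.42    Fa0/1         10.1.1.53       11 0D3D 0035   12K

3 of 8 top talkers shown. 3 flows processed.

(Made-up flows for illustration; yours will obviously differ.)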
As you can see, the output is placed nicely into a few columns, identifying the source interface and IP address, the destination interface and IP address, the protocol number (the Pr column), the source and destination ports (keep in mind these are in hex format and need to be converted to decimal), and lastly, since we are sorting by bytes in this case, the amount of bytes transferred.
So whether someone has introduced a new program, or a user decides to try to download the entire Internet, you should be able to easily identify it. Those two built-in features alone can help you troubleshoot any network congestion your network experiences with your Cisco devices.