
What is an interface symbol error?


Input symbol errors can happen when an interface is not able to correctly decode the bits on the wire.

For instance, from this regular “show interface” output we can deduce that, against the 41 packets received, there are 1461 input errors:

RP/0/RSP0/CPU0:9k-LAB# **show interface TenGigE1/0/0/6**
...
   **41 packets input, 5210 bytes**, 0 total input drops
   0 drops for unrecognized upper-level protocol
   Received 0 broadcast packets, 41 multicast packets
   0 runts, 0 giants, 0 throttles, 0 parity
   **1461 input errors**, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

How is it possible that we have 35 times more errors than the number of packets received?

If we dig deeper into the controller-level details, we can verify the specific type of error we are encountering:

RP/0/RSP0/CPU0:9k-LAB# **show controllers tenGigE 1/0/0/6 stats**
Ingress:
   Input total bytes      = 5210
   Input **good** bytes   = 5210
   ...
   Input good pkts        = 41
   Input unicast pkts     = 0
   Input multicast pkts   = 41
   **Input error symbol   = 1461**

As per the CCO definition: “Symbol error means the interface detects an undefined (invalid) Symbol received. Small amounts of symbol errors can be ignored. Large amounts of symbol errors can indicate a bad device, cable, or hardware”.

Symbols live at the physical layer, as they define the way bits are mapped onto the media. In our example we received 5210 bytes in total (41680 bits), of which 1461 were bad symbols, which amounts to roughly 3.5% errors on the total throughput.
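To double-check those figures, here is a minimal sketch in Python that reproduces the arithmetic, using only the counters reported by the CLI output above and the same simplification of comparing the symbol-error count against the received bit count:

```python
# Counters taken from the "show controllers" output above
input_total_bytes = 5210
input_error_symbols = 1461
input_good_pkts = 41

input_total_bits = input_total_bytes * 8              # 41680 bits received
error_ratio = input_error_symbols / input_total_bits  # ~0.035

print(f"total bits received : {input_total_bits}")
print(f"errors per packet   : {input_error_symbols / input_good_pkts:.1f}x")  # ~35.6x
print(f"symbol error ratio  : {error_ratio:.1%}")                             # ~3.5%
```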

Noting that the total bytes equal the good bytes, we can conclude that the symbol errors are corrected and recovered on the receiving interface/linecard, as they do not reach any critical threshold.

Symbols are the product of physical-layer encoding: for instance, the Gigabit Ethernet standard maps each 8 bits coming from the upper MAC layer to 10 bits on the wire (8b/10b encoding).
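To make that overhead concrete, here is a small illustrative sketch (the payload size is a made-up example, not taken from the device above) of how 8b/10b encoding inflates the number of bits that actually travel on the wire:

```python
def gige_wire_bits(payload_bytes: int) -> int:
    """8b/10b line coding: every 8-bit byte handed down by the MAC layer
    becomes a 10-bit symbol on a Gigabit Ethernet link."""
    return payload_bytes * 10

payload_bytes = 1500                       # hypothetical frame payload, for illustration
mac_bits = payload_bytes * 8               # 12000 bits seen by the MAC layer
wire_bits = gige_wire_bits(payload_bytes)  # 15000 bits signalled on the media

print(wire_bits / mac_bits)                # 1.25 -> 25% line-coding overhead
```

That 25% overhead is also why a 1 Gbps Gigabit Ethernet link actually signals at 1.25 Gbaud on the wire; 10 Gigabit Ethernet interfaces, like the TenGigE port in this example, use the more efficient 64b/66b coding instead.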



© 2018 Matteo Malvica. Illustrations by Sergio Kalisiak.