2 Networking Concepts
Before considering how to configure Cisco routers and switches, you need a grounding in the basic networking concepts that underpin the advanced topics discussed in later chapters. The OSI Reference Model is the best place to start, since it helps you understand how information is transferred between networking devices. Of the seven layers in the OSI Reference Model, be especially sure to understand how the bottom three layers function, since most networking devices operate at these layers. This chapter discusses information flow, as well as Cisco's three-tiered hierarchical model, which is used to design scalable, flexible networks that are easy to troubleshoot and maintain.
OSI Reference Model
The International Organization for Standardization (ISO) developed the Open Systems Interconnection (OSI) Reference Model to describe how information is transferred from one machine to another, from the point when a user enters information using a keyboard and mouse to when that information is converted to electrical or light signals transferred along a piece of wire or radio waves transmitted through the air. It is important to understand that the OSI Reference Model describes concepts and terms in a general manner, and that many network protocols, such as IP and IPX, fail to fit nicely into the scheme explained in ISO's model. Therefore, the OSI Reference Model is most often used as a teaching and troubleshooting tool. By understanding the basics of the OSI Reference Model, you can apply these concepts to real protocols to gain a better understanding of them, as well as to troubleshoot problems more easily.

Advantages
ISO developed the seven-layer model to help vendors and network administrators gain a better understanding of how data is handled and transported between networking devices, as well as to provide a guideline for the implementation of new networking standards and technologies. To assist in this process, the OSI Reference Model breaks the network communication process into seven simple steps. It thus
■ Defines the process for connecting two layers, promoting interoperability between vendors.
■ Separates a complex function into simpler components.
■ Allows vendors to compartmentalize their design efforts to fit a modular design, which eases implementations and simplifies troubleshooting.
A PC is a good example of a modular device. For instance, a PC typically contains the following components: case, motherboard with processor, monitor, keyboard, mouse, disk drive, CD-ROM drive, floppy drive, RAM, video card, Ethernet card, and so on. If one component breaks, it is very easy to figure out which component failed and replace it, which simplifies your troubleshooting process. Likewise, when a new CD-ROM drive becomes available, you don't have to throw away the current computer to use the new device; you just need to cable it up and add a software driver to your operating system to interface with it. The OSI Reference Model builds upon these premises.

Layer Definitions
There are seven layers in the OSI Reference Model, shown in Figure 2-1: application, presentation, session, transport, network, data link, and physical. The functions of the application, presentation, and session layers are typically part of the user’s application. The transport, network, data link, and physical
layers are responsible for moving information back and forth between these higher layers. Each layer is responsible for a specific process or role. Remember that the seven layers are there to help you understand the transformation process that data will undergo as it is transported to a remote networking device. Not every networking protocol will fit exactly into this model. For example, TCP/IP has four layers. Some layers are combined into a single layer; for instance, TCP/IP’s application layer
contains the functionality of the OSI Reference Model’s application, presentation, and session layers.
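The consolidation just described can be sketched in code. The following Python snippet is purely illustrative: the names used for TCP/IP's lower layers ("internet" and "network access") are one common convention, not fixed terminology from this chapter.

```python
# Illustrative mapping of the seven OSI layers onto the four-layer
# TCP/IP model. The top three OSI layers collapse into TCP/IP's
# application layer, as described in the text.
OSI_TO_TCPIP = {
    "application":  "application",
    "presentation": "application",
    "session":      "application",
    "transport":    "transport",
    "network":      "internet",        # name varies by author
    "data link":    "network access",  # name varies by author
    "physical":     "network access",
}

def tcpip_layer(osi_layer: str) -> str:
    """Return the TCP/IP layer that absorbs the given OSI layer."""
    return OSI_TO_TCPIP[osi_layer.lower()]

print(tcpip_layer("Session"))  # the session layer folds into the application layer
```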
The following sections go into more detail concerning the seven layers of the OSI Reference Model.

Application Layer
The seventh layer, or topmost layer, of the OSI Reference Model is the application layer. It provides the
interface that a person uses to interact with the application. This interface can be command-line-based or graphics-based. Cisco IOS routers and switches have a command-line interface (CLI), whereas a web
browser uses a graphical interface.
Note that in the OSI Reference Model, the application layer refers to applications that are network-aware.
There are thousands of computer applications, but not all of these can transmit information across a network. This situation is changing rapidly, however. Five years ago, there was a distinct line between applications that could and couldn’t perform network functions.
A good example of this was word processing programs, such as Microsoft Word, which were built to perform one process: word processing. Today, however, many applications, Microsoft Word included, have embedded objects that don't necessarily have to reside on the same computer. There are many, many examples of application layer programs. The most common are telnet, FTP, web browsers, and e-mail.
Presentation Layer
The sixth layer of the OSI Reference Model is the presentation layer. The presentation
layer is responsible for defining how information is presented to the user in the interface that they are using. This layer defines how various forms of text, graphics, video, and/or audio information are presented to the user. For example, text is represented in two different forms: ASCII and EBCDIC. ASCII (the American Standard Code for Information Interchange, used by most devices today) uses seven bits to represent characters. EBCDIC (Extended Binary-Coded Decimal Interchange Code, developed by IBM) is still used in mainframe environments to represent characters. Text can also be shaped by different elements, such as font, underline, italic, and bold.
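The seven-bit ASCII representation mentioned above is easy to inspect in code. This short Python sketch shows the numeric code and seven-bit binary pattern behind two characters:

```python
# ASCII maps each character to a numeric code that fits in seven bits.
# format(code, "07b") pads the binary form to seven digits.
for ch in "Hi":
    code = ord(ch)  # ASCII code point of the character
    print(ch, code, format(code, "07b"))
# 'H' is 72 (1001000); 'i' is 105 (1101001)
```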
There are different standards for representing graphical information, including BMP, GIF, JPEG, and TIFF. This variety of standards is also true of audio (WAV and MIDI) and video (WMV, AVI, and MPEG). There are literally hundreds of standards for representing information that a user sees in an application. Probably one of the best examples of an application with a very clear presentation function is a web browser, since it has many special markup codes that define how data should be represented to the user.
The presentation layer can also provide encryption to secure data from the application layer; however, this is not common with today's security methods, since this type of encryption is performed in software and requires a lot of CPU cycles.
Session Layer
The fifth layer of the OSI Reference Model is the session layer. The session layer is
responsible for initiating the setup and teardown of connections. In order to perform these functions, the session layer must determine whether data stays local to a computer or must be obtained or sent to a remote networking device. In the latter case, the session layer initiates the connection. The session layer is also responsible for differentiating among multiple network connections, ensuring that data is sent across the correct connection as well as taking data from a connection and forwarding it to the correct application.
The actual mechanics of this process, however, are implemented at the transport layer. To set up or tear down connections, the session layer communicates with the transport layer. Remote Procedure Call (RPC) is an example of an IP session protocol; the Network File System (NFS), which uses RPC, is an example application at this layer.
Transport Layer
The fourth layer of the OSI Reference Model is the transport layer. The transport layer
is responsible for the actual mechanics of a connection, where it can provide both
reliable and unreliable delivery of data. For reliable connections, the transport layer is responsible for error detection and correction: when an error is detected, the transport layer will resend the data, thus providing the correction. For unreliable connections, the transport layer provides only error detection—error correction is left up to one of the
higher layers (typically the application layer). In this sense, unreliable connections attempt to provide a best-effort delivery—if the data makes it there, that’s great, and
if it doesn’t, oh well!
Examples of reliable transport protocols are TCP/IP's Transmission Control Protocol (TCP) and IPX's Sequenced Packet Exchange (SPX) protocol. TCP/IP's User Datagram Protocol (UDP) is an example of a protocol that uses unreliable connections. Actually, IPX and IP themselves are examples of protocols that provide unreliable connections, even though they operate at the network, not the transport, layer. In IPX's case, if a reliable connection is needed, SPX is used. For IP, if a reliable connection is needed, TCP is used at the transport layer. The transport layer and its mechanics are discussed in more depth in the section "Transport Layer" later in this chapter.
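The reliable/unreliable choice surfaces directly in a sockets API. This minimal Python sketch, using the standard `socket` module, shows how an application selects TCP or UDP; no data is actually sent:

```python
import socket

# Reliable, connection-oriented transport: TCP (SOCK_STREAM).
# The stack handles sequencing, acknowledgment, and retransmission.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Unreliable, best-effort transport: UDP (SOCK_DGRAM).
# Error detection only; any recovery is left to a higher layer.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type, udp_sock.type)
tcp_sock.close()
udp_sock.close()
```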
Network Layer
The third layer of the OSI Reference Model is the network layer. The network layer provides quite a few functions. First, it provides for a logical topology of your network using logical, or layer-3, addresses. These addresses are used to group machines together. As you will see in Chapter 3, these addresses have two components: a network component and a host component. The network component is used to group devices together. Layer-3 addresses allow devices that are on the same or different media types to communicate with each other. Media types define types of connections, such as Ethernet, Token Ring, or serial. These are discussed in the section "Data Link Layer" later in this chapter.
To move information between devices that have different network numbers, a router is used. Routers use information in the logical address to make intelligent decisions about how to reach a destination. Routing is discussed in more depth in Chapters 9, 10, and 11.
Examples of network layer protocols include AppleTalk, DECnet, IPX, TCP/IP (or IP, for short), Vines, and XNS. The network layer is discussed in much more depth in the section "Network Layer" later in this chapter.
Data Link Layer
The second layer in the OSI Reference Model is the data link layer. Whereas the
network layer provides for logical addresses for devices, the data link layer provides for physical, or hardware, addresses. These hardware addresses are commonly called Media Access Control (MAC) addresses. The data link layer also defines how a networking device accesses the media that it is connected to, as well as defining the media's frame type, including the fields and components of the data link layer, or layer-2, frame. This communication is only for devices on the same data link layer media type (or the same piece of wire). To traverse media types, Ethernet to Token Ring, for instance, a router is typically used.
The data link layer is also responsible for taking bits (binary 1’s and 0’s) from the
physical layer and reassembling them into the original data link layer frame. The data link layer does error detection and will discard bad frames. It typically does not
perform error correction, as TCP/IP’s TCP protocol does; however, some data link
layer protocols do support error correction functions.
Examples of data link layer protocols and standards for local area network (LAN) connections include IEEE’s 802.2, 802.3, and 802.5; Ethernet II; and ANSI’s FDDI.
Examples of WAN connections include ATM, Frame Relay, HDLC (High-Level Data Link Control), PPP (Point-to-Point Protocol), SDLC (Synchronous Data Link Control), SLIP (Serial Line Internet Protocol), and X.25. Bridges, switches, and network interface controllers or cards (NICs) are the primary networking devices functioning at the data link layer, which is discussed in more depth in the section
"Data Link Layer" later in this chapter.
Physical Layer
The first, or bottommost, layer of the OSI Reference Model is the physical layer. The
physical layer is responsible for the physical mechanics of a network connection, which include the following:
■ The type of interface used on the networking device
■ The type of cable used for connecting devices
■ The connectors used on each end of the cable
■ The pin-outs used for each of the connections on the cable
The type of interface is commonly called a NIC. A NIC can be a physical card that you put into a computer, like a 10BaseT Ethernet card, or a fixed interface on a switch, like a 100BaseTX port on a Cisco Catalyst 1900 series switch. The physical layer is also responsible for how binary information is converted to a physical layer signal. For example, if the cable uses copper as a transport medium, the physical layer defines how binary 1’s and 0’s are converted into an electrical signal by
using different voltage levels. If the cable uses fiber, the physical layer defines how 1’s
and 0’s are represented using an LED or laser with different light frequencies.
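The idea of converting bits to signal levels can be sketched with a deliberately simplified model. The voltage values below are hypothetical placeholders, not from any line-coding standard (real Ethernet uses defined line codes such as Manchester encoding for 10BaseT):

```python
# Hypothetical sketch: encoding bits as two voltage levels on a copper
# wire, then recovering them. The specific voltages are illustrative.
HIGH, LOW = 5.0, 0.0  # assumed placeholder voltage levels

def encode(bits: str) -> list[float]:
    """Map each '1' to the high level and each '0' to the low level."""
    return [HIGH if b == "1" else LOW for b in bits]

def decode(levels: list[float]) -> str:
    """Recover the bit string from the sequence of signal levels."""
    return "".join("1" if v == HIGH else "0" for v in levels)

signal = encode("1011")
print(signal)          # [5.0, 0.0, 5.0, 5.0]
print(decode(signal))  # 1011
```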
Data communications equipment (DCE) terminates a physical WAN connection and provides clocking and synchronization of a connection between two locations and connects to a DTE. The DCE category includes equipment such as CSU/DSUs, NT1s, and modems. Data terminal equipment (DTE) is an end-user device, such as a router or a PC, that connects to the WAN via the DCE device. In some cases, the function of the DCE may be built into the DTE’s physical interface. For instance, certain Cisco
routers can be purchased with built-in NT1s or CSU/DSUs in their WAN interfaces. Normally, the terms DTE and DCE are used to describe WAN components, but they are sometimes used to describe LAN connections. For instance, in a LAN connection, a PC, file server, or router is sometimes referred to as a DTE, and a switch or bridge as a DCE.
Examples of physical layer standards include the following cable types: Category-3, -5, and -5E; EIA/TIA-232, -449, and -530; multimode and single-mode fiber (MMF and SMF); Type-1; and others. Interface connectors include the following: AUI, BNC, DB-9, DB-25, DB-60, RJ-11, RJ-45, and others. A hub and a repeater are examples of devices that function at the physical layer.
Fiber Cabling
LANs typically use either copper or fiber-optic cabling. Copper cabling is discussed in more depth in the section "Ethernet" later in this chapter.
Fiber-optic cabling uses light-emitting diodes (LEDs) and lasers to transmit data. With this transmission, light is used to represent binary 1’s and 0’s: if there is light
on the wire, this represents a 1; if there is no light, this represents a 0.
Fiber-optic cabling is typically used to provide very high speeds and to span connections across very large distances. For example, speeds of 100 Gbps and distances of over 10 kilometers are achievable through the use of fiber; copper cannot come close to these feats. However, fiber-optic cabling does have its disadvantages: it is expensive, difficult to troubleshoot, difficult to install, and less reliable than copper. Two types of fiber are used for connections: multimode and single-mode. Multimode fiber carries light at a wavelength of either 850 or 1300 nanometers (nm), and the light signal is typically provided by an LED. When a signal is transmitted, the light source is bounced off of the inner cladding (shielding) surrounding the fiber. Multimode fiber can achieve speeds in the hundreds of Mbps range, and many signals can be generated per fiber. Single-mode fiber carries light at a wavelength of 1300 or 1550 nm and uses a laser as the light source. Because lasers provide a higher output than LEDs, single-mode fiber can span more than 10 kilometers and reach speeds up to 100 Gbps. With single-mode fiber, only one signal is used per fiber.
The last few years have seen many advances in the use and deployment of fiber. One major enhancement is wave division multiplexing (WDM) and dense WDM (DWDM). WDM allows more than two wavelengths (signals) on the same piece of fiber, increasing the number of connections. DWDM allows yet more wavelengths, which are more closely spaced together: more than 200 wavelengths can be multiplexed into a light stream on a single piece of fiber.
Obviously, one of the advantages of DWDM is that it provides flexibility and transparency for the protocols and traffic carried across the fiber. For example, one wavelength can be used for a point-to-point connection, another for an Ethernet connection, another for an IP connection, and yet another for an ATM connection. DWDM provides scalability and allows carriers to provision new connections without having to install new fiber lines, so they can turn up new connections very quickly when you order them.
Let's talk about some of the terms used with fiber and how they affect distance and speed. First, you have the cabling, which provides the protective outer coating as well as the inner cladding. The inner cladding is denser to allow the light source to bounce off of it. In the middle of the cable is the fiber itself, which is used to transmit the signal. The index of refraction (IOR) affects the speed of the light source: it's the ratio of the speed of light in a vacuum to the speed of light in the fiber. In a vacuum, there are no variables that affect the transmission; however, any time you send something across a medium like fiber or copper, the medium itself will exhibit properties that affect the transmission, causing possible delays. IOR is used to measure these differences: basically, IOR measures the density of the fiber. The denser the fiber, the slower the light travels through it.
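The IOR ratio just defined is easy to turn into arithmetic. The sketch below assumes an IOR of roughly 1.5, a typical illustrative value for silica glass fiber (not a figure from this chapter):

```python
# Index of refraction (IOR) = speed of light in a vacuum / speed in the fiber.
# A denser fiber has a higher IOR, so light travels through it more slowly.
C_VACUUM = 299_792_458  # speed of light in a vacuum, m/s

def speed_in_fiber(ior: float) -> float:
    """Solve the IOR ratio for the speed of light inside the fiber."""
    return C_VACUUM / ior

v = speed_in_fiber(1.5)  # ~1.5 is an assumed, typical IOR for glass fiber
print(f"{v:,.0f} m/s")   # roughly two-thirds of the vacuum speed
```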
The loss factor is used to describe any signal loss in the fiber before the light source gets to the end of the fiber. Connector loss occurs when a connector joins two pieces of fiber: a slight signal loss is expected. Also, the longer the fiber, the greater the likelihood that the signal strength will have decreased by the time it reaches the end of the cable. This is called attenuation. Two other terms, microbending and macrobending, describe signal degradation. Microbending occurs when a wrinkle in the fiber, typically where the cable is slightly bent, causes a distortion in the light source. Macrobending occurs when the light source leaks from the fiber, typically at a bend in the fiber cable. To overcome this problem over long distances, optical amplifiers can be used. They are similar to
an Ethernet repeater. A good amplifier, such as an erbium-doped fiber amplifier (EDFA), converts a light source directly to another light source, providing the best reproduction of the original signal. Other amplifiers convert light to an electrical signal and then back to light, which can degrade signal quality. Two main standards are used to describe the transmission of signals across a fiber: SONET (Synchronous Optical Network) and SDH (Synchronous Digital Hierarchy). SONET is defined by the Exchange Carriers Standards Association (ECSA) and the American National Standards Institute (ANSI) and is typically used in North America. SDH is an international standard used throughout most of the rest of the world. Both of these standards define the physical layer framing used to transmit light sources, which also includes overhead for the transmission. There are three types of overhead:
■ Section overhead (SOH)  Overhead for the link between two devices, such as repeaters
■ Line overhead (LOH)  Overhead for one or more sections connecting network devices, such as hubs
■ Path overhead (POH)  Overhead for one or more lines connecting two devices that assemble and disassemble frames, such as carrier switches or a router's fiber interface
Typically, either a ring or point-to-point topology is used to connect the devices. In carrier MAN networks, the most common implementation is a ring. Automatic protection switching (APS) can be used to provide line redundancy: in case of a failure on the primary line, a secondary line can automatically be used. Table 2-1 contains an overview of the more common connection types for SONET and SDH. Note that SONET uses STS and SDH uses STM to describe the signal.
Wireless
Wireless transmission has been used for a very long time to transmit data using infrared radiation, microwaves, or radio waves through a medium like air. With this type of connection, no wires are used. Typically, three terms are used to group different wireless technologies: narrowband, broadband, and circuit/packet data. Whenever you are choosing a wireless solution for your WAN or LAN, you should always consider the following criteria: speed, distance, and the number of devices to connect.
Narrowband solutions typically require a license and operate at a low data rate. Only one frequency is used for transmission: 900 MHz, 2.4 GHz, or 5 GHz. Other devices, such as household cordless phones, also use these frequencies.
Through the use of spread spectrum, higher data rates can be achieved by spreading the signal across multiple frequencies. However, transmission of these signals is typically limited to a small area, like a campus network.
The broadband solutions fall under the heading of the Personal Communications Service (PCS). They provide lower data rates than narrowband solutions, cost about the same, but provide broader coverage. With the right provider, you can obtain national coverage. Sprint PCS is an example of a carrier that provides this type of solution.
Circuit and packet data solutions are based on cellular technologies. They provide lower data rates than the other two and typically have higher fees for each packet transmitted; however, you can easily obtain nationwide coverage from almost any cellular phone company.
Wireless is becoming very popular in today’s LANs, since very little cabling is
required. Three basic standards are currently in use: 802.11a, 802.11b, and 802.11g, shown in Table 2-2.
Of the three, 802.11b has been deployed the most, with 802.11g just introduced as a standard. One advantage that 802.11b and 802.11g devices have over 802.11a is that 802.11b and 802.11g can interoperate, which makes migrating from an all-802.11b network to an 802.11g network an easy and painless process. Note that 802.11g devices are compatible with 802.11b devices (but not vice versa) and that 802.11a devices are not compatible with the other two standards. Also note that the speeds listed in Table 2-2 are optimal speeds based on the specifications; the actual speeds that you might achieve in a real network vary according to the number of devices you have, their distance from the base station, and any physical obstructions or interference that might exist.
One of the biggest problems of wireless networks is security. Many wireless networks use Wired Equivalent Privacy (WEP) for security. This is an encryption protocol that uses 40-bit keys, which is weak by today's standards. Many vendors use 128-bit keys to compensate for this weakness; however, weaknesses have been found in the protocol itself, so WEP is used with other security measures to provide a more secure wireless network. 802.1X/EAP (Extensible Authentication Protocol) is used to provide authentication services for devices: it authenticates devices to an authentication server (typically a RADIUS server) before a device is allowed to participate in the wireless network. Cisco has developed an extension to this called LEAP, or Lightweight EAP. LEAP centralizes both authentication and key distribution (for encryption) to provide scalability for large wireless deployments.
Table 2-3 is a reminder of the devices that function at various OSI Reference Model layers.
Data Link Layer
Layer 2 of the OSI Reference Model is the data link layer. This layer is responsible for defining the format of layer-2 frames as well as the mechanics of how devices communicate with each other over the physical layer. Here are the components the data link layer is responsible for:
■ Defining the Media Access Control (MAC) or hardware addresses
■ Defining the physical or hardware topology for connections
■ Defining how the network layer protocol is encapsulated in the data link layer frame
■ Providing both connectionless and connection-oriented services
Normally, the data link layer does not provide connection-oriented services (ones that do error detection and correction). However, in environments that use SNA (Systems Network Architecture) as a data link layer protocol, SNA can provide sequencing and flow control to ensure the delivery of data link layer frames. SNA was developed by IBM to help devices communicate in LAN networks (predominantly Token Ring) at the data link layer. In most instances, it is the transport layer that provides for reliable connections.
Make sure to remember that the primary function of the data link layer is to regulate how two networking devices connected to the same media type communicate with each other. If the devices are on different media types, the network layer typically plays a role in the communication of these devices.
Data Link Layer Addressing
The data link layer uses MAC, or hardware, addresses for communication. For LAN communications, each machine on the same connected media type needs a unique MAC address. A MAC address is 48 bits in length and is represented as a hexadecimal number. Represented in hex, it is 12 characters in length. To make it easier to read, the MAC address is written in a dotted hexadecimal format, like this: FFFF.FFFF.FFFF. Since MAC addresses use hexadecimal numbers, the values for a single digit range from 0–9 and A–F, giving you a total of 16 values. For example, a hexadecimal value of A would be 10 in decimal. There are other types of data link layer addressing besides MAC addresses. For instance, Frame Relay uses Data Link Connection Identifiers (DLCIs). I'll discuss DLCIs in more depth in Chapter 16.
The first six digits of a MAC address are associated with the vendor, or maker, of the NIC. Each vendor has one or more unique sets of six digits. These first six digits are commonly called the organizationally unique identifier (OUI). For example, one of
Cisco’s OUI values is 0000.0C. The last six digits are used to uniquely represent the
NIC within the OUI value. Theoretically, each NIC has a unique MAC address. In reality, however, this is probably not true. What is important for your purposes is that each of your devices has a unique MAC address on its NIC within the same physical
or logical segment. A logical segment is a virtual LAN (VLAN) and is referred to as a broadcast domain, which is discussed in Chapter 8. Some devices allow you to change
this hardware address, while others won’t.
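The MAC address structure just described can be demonstrated with a short parsing sketch. The helper function below is hypothetical, written only to illustrate the OUI/NIC split; 0000.0C is one of Cisco's OUI values, as noted above:

```python
# Parsing a dotted-hex MAC address: the first six hex digits form the
# OUI (vendor code); the last six identify the NIC within that OUI.
def split_mac(mac: str) -> tuple[str, str]:
    digits = mac.replace(".", "").upper()  # e.g., "00000C123456"
    assert len(digits) == 12, "a MAC address is 12 hex digits (48 bits)"
    return digits[:6], digits[6:]          # (OUI, NIC identifier)

oui, nic = split_mac("0000.0C12.3456")     # 0000.0C is a Cisco OUI
print(oui, nic)                            # 00000C 123456
print(int("A", 16))                        # hex A is decimal 10
```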
Each data link layer frame contains two MAC addresses: a source MAC address of the machine creating the frame and a destination MAC address for the device or devices intended to receive the frame. There are three general types of addresses at the data link layer, shown in Table 2-4. A source MAC address is an example of a unicast address: only one device can create the frame. However, destination MAC addresses can be any of the addresses listed in Table 2-4. The destination MAC address in the data link layer frame helps each of the other NICs connected to the segment figure out whether it needs to process the frame upon receipt or ignore it. The following sections cover each of these address types in more depth.
A frame with a destination unicast MAC address is intended for just one device on a segment. The top part of Figure 2-2 shows an example of a unicast. In this example, PC-A creates an Ethernet frame with a destination MAC address that contains PC-C's address. When PC-A places this data link layer frame on the wire, all the devices on the segment receive it. The NICs of PC-B, PC-C, and PC-D each examine the destination MAC address in the frame. In this instance, only PC-C's NIC will process the frame, since the destination MAC address in the frame matches the MAC address of its NIC. PC-B and PC-D will ignore the frame.
Unlike a unicast address, a multicast address represents a group of devices on a segment.
The multicast group can contain anywhere from no devices to every device on a segment. One of the interesting things about multicasting is that the membership of a group is dynamic—devices can join and leave as they please. The detailed process of multicasting is beyond the scope of this book, however.
The middle portion of Figure 2-2 shows an example of a multicast. In this example, PC-A sends a data link layer frame to a multicast group on its segment. Currently, only PC-A, PC-C, and PC-D are members of this group. When each of the PCs receives the frame, its NIC examines the destination MAC address in the data link layer frame. In this example, PC-B ignores the frame, since it is not a member of the group. However, PC-C and PC-D will process the frame.
A broadcast is a data link layer frame that is intended for every networking device on the same segment. The bottom portion of Figure 2-2 shows an example of a broadcast. In this example, PC-A puts a broadcast address in the destination field of the data link layer frame. For MAC broadcasts, all of the bit positions in the address are enabled, making the address FFFF.FFFF.FFFF in hexadecimal. This frame is then placed on the wire. Notice that in this example, when PC-B, PC-C, and PC-D receive the frame, they all process it.
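The three filtering behaviors from Figure 2-2 can be captured in a deliberately simplified model. The function and the sample MAC values below are hypothetical, written only to illustrate the decision each NIC makes:

```python
# Simplified model of how a NIC decides whether to process a received
# frame: accept broadcasts, unicasts addressed to it, and multicasts
# for groups it has joined; ignore everything else.
BROADCAST = "FFFF.FFFF.FFFF"  # all 48 bits enabled

def nic_accepts(frame_dst: str, my_mac: str, my_groups: set[str]) -> bool:
    if frame_dst == BROADCAST:
        return True                # broadcast: every NIC processes it
    if frame_dst == my_mac:
        return True                # unicast addressed to this NIC
    return frame_dst in my_groups  # multicast for a joined group

pc_b = "0000.0C00.000B"                              # hypothetical MAC for PC-B
print(nic_accepts(BROADCAST, pc_b, set()))           # True: broadcasts reach everyone
print(nic_accepts("0000.0C00.000C", pc_b, set()))    # False: PC-C's unicast is ignored
```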
Broadcasts are mainly used in two situations. First, broadcasts are more effective than unicasts if you need to send the same information to every machine. With a unicast, you would have to create a separate frame for each machine on the segment; with a broadcast, you could accomplish the same thing with one frame. Second, broadcasts are used to discover the unicast address of a device. For instance, when you
turn on your PC, initially, it doesn’t know about any MAC addresses of any other
machines on the network. A broadcast can be used to discover the MAC addresses of these machines, since they will all process the broadcast frame. In IP, the Address Resolution Protocol (ARP) uses this process to discover another device’s MAC
address. ARP is discussed in Chapter 3.
Ethernet
Ethernet is a LAN media type that functions at the data link layer. Ethernet uses the Carrier Sense Multiple Access/Collision Detection (CSMA/CD) mechanism to send information in a shared environment. Ethernet was initially developed with the idea that many devices would be connected to the same physical piece of wiring. The acronym CSMA/CD describes the actual process of how Ethernet functions. In a traditional, or hub-based, Ethernet environment, only one NIC can successfully send a frame at a time. All NICs, however, can simultaneously listen to information on the wire. Before an Ethernet NIC puts a frame on the wire, it will first sense the
wire to ensure that no other frame is currently on the wire. If the cable uses copper,
the NIC can detect this by examining the voltage levels on the wire. If the cable is fiber, the NIC can also detect this by examining the light frequencies on the wire. The NIC must go through this sensing process, since the Ethernet medium supports multiple access—another NIC might already have a frame on the wire. If the NIC doesn’t sense a frame on the wire, it will go ahead and transmit its own frame;
otherwise, if there is a frame on the wire, the NIC will wait for the completion of the transmission of the frame on the wire and then transmit its own frame. If two or more machines simultaneously sense the wire and see no frame, and each places its frame on the wire, a collision will occur. In this situation, the voltage levels
on a copper wire or the light frequencies on a piece of fiber get messed up. For example, if two NICs attempt to put the same voltage on an electrical piece of wire, the voltage level will be different than if only one device does so. Basically, the two original frames become unintelligible (or undecipherable). The NICs, when they place a frame on the wire, examine the status of the wire to ensure that a collision does not occur: this is the collision detection mechanism of CSMA/CD.
If the NICs detect a collision for their transmitted frames, they have to resend them. In this instance, each NIC that was transmitting when the collision occurred places a special signal, called a jam signal, on the wire, waits a small random time period, and then senses the wire again. If no frame is currently on the wire, the NIC retransmits its original frame. The wait is measured in microseconds, a delay too short for a human to notice, and it is random to help ensure that a collision doesn't simply occur again when these NICs
retransmit their frames.
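The sense, transmit, detect, and back-off sequence described above can be sketched as a toy simulation. The Wire class below, with its scripted collision results, is a hypothetical stand-in for what a real NIC does in hardware; real Ethernet backoff also follows a specific truncated binary exponential schedule not modeled here:

```python
import random

class Wire:
    """Toy shared medium with a scripted sequence of collision outcomes."""
    def __init__(self, collisions):
        self.collisions = list(collisions)   # True = collision on that attempt
    def busy(self):
        return False                         # assume the wire is idle in this sketch
    def transmit(self, frame):
        pass                                 # place the frame on the wire
    def collision_detected(self):
        return self.collisions.pop(0) if self.collisions else False
    def send_jam_signal(self):
        pass                                 # notify all NICs of the collision
    def wait_microseconds(self, us):
        pass                                 # random backoff delay

def csma_cd_send(wire, frame, max_attempts=16):
    """Simplified CSMA/CD: returns the attempt number that succeeded, or None."""
    for attempt in range(max_attempts):
        while wire.busy():                   # carrier sense: wait for an idle wire
            pass
        wire.transmit(frame)                 # multiple access: put the frame on the wire
        if not wire.collision_detected():    # collision detection while transmitting
            return attempt + 1
        wire.send_jam_signal()
        wire.wait_microseconds(random.randint(0, 99))  # random wait before retrying
    return None                              # too many collisions; give up

# Two collisions, then a clean transmission on the third try:
print(csma_cd_send(Wire([True, True, False]), b"frame"))  # -> 3
```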
The more devices you place on a segment, the more likely you are to experience collisions. If you put too many devices on the segment, so many collisions will occur that your throughput suffers seriously. Therefore, you need to monitor the number of collisions on each of your network segments: the more collisions you experience, the less throughput you get. As a rule of thumb, if collisions account for less than one percent
of your total traffic, you are okay. This is not to say that collisions are bad; they are
just one part of how Ethernet functions.
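Applying the one-percent guideline to interface counters is simple arithmetic. A minimal sketch, assuming you have collision and total-frame counts from whatever monitoring tool you use (the function and variable names here are illustrative):

```python
def collision_rate(collisions, total_frames):
    """Return collisions as a fraction of total transmitted frames."""
    return collisions / total_frames if total_frames else 0.0

# 120 collisions out of 50,000 frames is 0.24%, within the one-percent guideline.
rate = collision_rate(120, 50_000)
print(f"{rate:.2%}", "OK" if rate < 0.01 else "investigate")
```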
Because Ethernet experiences collisions, networking devices that share the same medium (are connected to the same physical segment) are said to belong to the same collision, or bandwidth, domain. This means that, for better or worse, traffic generated by one device in the domain can affect other devices. Chapter 7 discusses how bridges and switches can be used to solve collision and bandwidth problems on a network segment.
IEEE’s Version of Ethernet
There are actually two variants of Ethernet: IEEE’s implementation and the DIX
implementation. Ethernet was developed by three different companies in the early 1980s: Digital, Intel, and Xerox, or DIX for short. This implementation of Ethernet has evolved over time; its current version is called Ethernet II. Devices running TCP/IP typically use the Ethernet II implementation.
The second version of Ethernet was developed by IEEE and is standardized in the IEEE 802.2 and 802.3 standards. IEEE has split the data link layer into two components: MAC and LLC. These components are described in Table 2-5. The top part of the data link layer is the LLC, and its function is performed in software. The bottom part of the data link layer is the MAC, and its function is performed in hardware.
The LLC performs its multiplexing by using Service Access Point (SAP) identifiers. When a network layer protocol is encapsulated in the 802.2 frame, an identifier for that protocol is placed in the SAP field. When the destination receives the frame, it examines the SAP field to determine which network layer protocol
should process the frame. This allows the destination network device to differentiate between TCP/IP and IPX network layer protocols being transmitted across the same data link layer connection. Optionally, LLC can provide sequencing and flow control to create a reliable service, much as TCP does at the transport layer. However, most data link layer implementations of Ethernet don't use this function; if a reliable
connection is needed, it is provided by either the transport or application layer.
IEEE 802.3
As mentioned earlier, IEEE 802.3 is responsible for defining the framing used to transmit information between two NICs. A frame standardizes the fields in the frame and their lengths so that every device understands how to read its contents. The top part of Figure 2-3 shows the fields of an 802.3 frame, and Table 2-6 describes them. The frame check sequence (FCS) value is used so that when the destination receives the frame, it can verify that the frame arrived intact. When generating the FCS value, which is basically a checksum, the NIC takes all of the fields in the 802.3 frame except the FCS field itself and runs them through an algorithm that generates a four-byte result, which is placed in the FCS field.
When the destination receives the frame, it takes the same fields and runs them through the same algorithm. The destination then compares its four-byte output with what was included in the frame by the source NIC. If the two values don’t match,
then the frame is considered bad and is dropped. If the two values match, then the frame is considered good and is processed further.
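This generate-and-compare process can be illustrated with Python's standard zlib.crc32. Note that Ethernet's real FCS is a CRC-32 computed with specific bit-ordering and register conventions, so this is an analogy of the process, not a wire-accurate implementation:

```python
import zlib

def append_fcs(frame_fields: bytes) -> bytes:
    """Sender: run the frame fields through the algorithm, append the 4-byte result."""
    fcs = zlib.crc32(frame_fields)
    return frame_fields + fcs.to_bytes(4, "big")

def verify_fcs(frame: bytes) -> bool:
    """Receiver: recompute over the same fields and compare with the trailing FCS."""
    fields, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(fields).to_bytes(4, "big") == fcs

frame = append_fcs(b"\x00\x1b\x44\x11\x3a\xb7" + b"payload")
print(verify_fcs(frame))                       # True: the frame arrived intact
corrupted = frame[:-5] + b"\xff" + frame[-4:]  # damage one byte of the fields
print(verify_fcs(corrupted))                   # False: the frame would be dropped
```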
IEEE 802.2
IEEE 802.2 (LLC) handles the top part of the data link layer. There are two types of IEEE 802.2 frames: Service Access Point (SAP) and Subnetwork Access Protocol (SNAP). These 802.2 frames are encapsulated (enclosed) in an 802.3 frame when being sent to a destination. Where 802.3 serves as a transport to get the 802.2 frames to other devices, 802.2 defines which network layer
protocol created the data that the 802.2 frame carries. In this sense, it serves a multiplexing function: it differentiates between TCP/IP, IPX, AppleTalk, and other network-layer data types. Figure 2-4 shows the two types of 802.2 frames, and Table 2-7 lists the fields found in an 802.2 SAP frame.
When a destination NIC receives an 802.3 frame, the NIC first checks the FCS to verify that the frame is valid and then checks the destination MAC address in the 802.3 frame to determine whether it should process the frame or ignore it. The MAC sublayer strips off the 802.3 frame portion and passes the 802.2 frame to the LLC sublayer. The LLC examines the destination SAP value to determine which upper-layer protocol should receive the encapsulated data. Here are some example SAP values: IP uses 0x06 (hexadecimal) and IPX uses 0xE0. If the LLC sees 0x06 in the SAP field, it passes the encapsulated data up to the TCP/IP protocol stack running on the device.
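The LLC's dispatch on the destination SAP value can be sketched as a simple table lookup. The handler strings below are placeholders for the real protocol stacks, and only the first header byte (the destination SAP) is examined in this sketch:

```python
# Destination SAP values and the protocol stack that should process the data.
HANDLERS = {
    0x06: "TCP/IP stack",   # IP
    0xE0: "IPX stack",      # Novell IPX
}

def llc_demultiplex(frame_802_2: bytes) -> str:
    """Read the destination SAP (first byte of the 802.2 header) and dispatch."""
    dsap = frame_802_2[0]
    return HANDLERS.get(dsap, "discard: unknown protocol")

print(llc_demultiplex(bytes([0x06, 0x06, 0x03]) + b"ip-packet"))   # TCP/IP stack
print(llc_demultiplex(bytes([0xE0, 0xE0, 0x03]) + b"ipx-packet"))  # IPX stack
```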