Now that you have examined the components of the average network server and several different types of workstations, you have come to the fabric that incorporates them all into what is known as a local area network (LAN).
A network may, at first glance, appear to be deceptively simple. A series of computers are wired together so that they can communicate with each other. However, each device on the network must be able to communicate with any other device at many different levels, even to perform the simplest tasks. Further, this must all be accomplished using a single transmission channel for all of the devices. Multiply these myriad levels of communication by dozens, hundreds, or even thousands of devices on a network, and the logistics become staggeringly complex.
In this chapter, you will concentrate on an examination of the basic network types in use today. You will learn about the actual physical medium that connects networked devices, but you will also learn about the basic communications methods used by each network type at the bottom two levels of the OSI reference model. These methods are integrally related with the physical medium, as they impose numerous restrictions on the way that you can construct the network fabric. You will also read about some of the new network technologies that are just coming into general acceptance in the marketplace. These offer increased network throughput to accommodate the greater demands of today's application software and data types in a variety of ways. Even if your network has no need for these technologies today, it is important to keep a finger on the pulse of the industry in order to facilitate the performance of network upgrades at a later time.
The selection of a network type is one of the first major decisions to be made in setting up a network. Once constructed, a network type can be very difficult and expensive to change later, so it is a decision that should be made correctly the first time. Consideration must be paid to the needs of your organization right now, as well as the future of the network. Knowing more about the basic communications methods utilized by a network gives you a greater understanding of the hardware involved and the problems that it may be subject to. Network communications problems can be very difficult to troubleshoot--even more so when you are unaware of what is actually going on inside the network medium.
As you have seen in chapter 3, "The OSI Model: Bringing Order to Chaos," the basic OSI model for network communications consists of seven layers, each of which has its own set of terms, definitions, and industry jargon. It can be very difficult to keep track of all of the terminology used in networking at the various levels, and this chapter will hopefully help you understand many of the terms that are constantly used.
First of all, keep in mind that this chapter is concerned primarily with the lowest levels of the OSI reference model: the physical and data link layers. Everything discussed here is completely independent of any concerns imposed by applications and operating systems (OS) either at the server or workstation level. An Ethernet LAN, for example, can be used to connect computers running NetWare, Windows NT, numerous flavors of UNIX, or even minicomputer OSs. Each of these has its own communications protocols at higher levels in the OSI model, but Ethernet is completely unconcerned with them. They are merely the "baggage" that is carried in the data section of an Ethernet packet. Network types such as Ethernet, token ring, and FDDI are simply transport mechanisms--postal systems, if you will--that carry envelopes to specific destinations irrespective of the envelopes' contents.
The packet is the basic unit used to send data over a network connection. At this level, it is also referred to as a frame. The network medium is essentially a single electrical or optical connection between a series of devices. Data must ultimately be broken down into binary bits that are transmitted over the medium using one of many possible encoding schemes designed to use fluctuations in electrical current or pulses of light to represent 1s and 0s. Since any device on the network may initiate communications with any other device at any time, it is not practical for a single device to be able to send out a continuous stream of bits whenever it wants to. This would monopolize the medium, preventing other devices from communicating until that one device is finished or, alternatively, corrupting the data stream if multiple devices were trying to communicate simultaneously.
Instead, each networked device assembles the data that it wants to transmit into packets of a specific size and configuration. Each packet contains not only the data that is to be transmitted but also the information that is needed to get the packet to its destination and reconstitute it with other packets into the original data. Thus, a network type must have some form of addressing--that is, a means by which every device on the network can be uniquely identified. This addressing is performed by the network interface located in each device, usually in the form of an expansion card known as a network adapter or a network interface card (NIC). Every network interface has a unique address (assigned by either the manufacturer or the network administrator) that is used as part of the "envelope" it creates around every packet. Other mechanisms are also included as part of the packet configuration, including error-checking information and the data necessary to assemble multiple packets back into their original form once they arrive at their destination.
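As a concrete illustration of this addressing scheme, the following sketch (in Python, using made-up sample addresses) parses a 48-bit hardware address. The first three bytes form the manufacturer-assigned prefix, and the two low-order bits of the first byte distinguish group (multicast) addresses and locally administered addresses:

```python
# Parse and classify a 48-bit Ethernet hardware address.
# The sample addresses below are hypothetical, chosen for illustration.

def parse_mac(text):
    """Convert a colon-separated address string into a list of six byte values."""
    octets = [int(part, 16) for part in text.split(":")]
    if len(octets) != 6 or any(not 0 <= o <= 255 for o in octets):
        raise ValueError("an Ethernet address is exactly six bytes")
    return octets

def describe(octets):
    """Report the vendor prefix and the two flag bits in the first byte."""
    prefix = ":".join("%02X" % o for o in octets[:3])  # manufacturer-assigned
    multicast = bool(octets[0] & 0x01)                 # group address bit
    local = bool(octets[0] & 0x02)                     # locally administered bit
    return prefix, multicast, local

prefix, multicast, local = describe(parse_mac("00:40:33:2B:1C:EA"))
print("prefix:", prefix, "multicast:", multicast, "locally administered:", local)
```

Real manufacturer prefixes are assigned by the IEEE; the ones shown here are arbitrary.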
The other responsibility of the network, at this level, is to introduce the packets onto the network medium in such a way that no two network devices are transmitting onto the same medium at precisely the same time. If two devices should transmit at the same time, a collision occurs, usually damaging or destroying both packets. The mechanism used to avoid collisions while transmitting packets is called media access control (MAC). This is represented in the lower half of the data link layer of the OSI reference model, also known as the MAC sublayer. A MAC mechanism allows each of the devices on a network an equal opportunity to transmit its data, as well as providing a means of detecting collisions and resending the damaged packets that result.
Thus, the network types covered in this chapter each consist of the following attributes:

- A physical medium, over which the signals are transmitted
- A packet (or frame) format, which packages the data with addressing and error-checking information
- A media access control mechanism, which arbitrates access to the shared medium
The network types examined in this chapter utilize widely different means of realizing these three attributes. Although they are all quite capable of supporting general network use, each is particularly suited to a different set of network requirements. It is also possible to connect networks of differing types into what is technically known as an internetwork--that is, a network of networks. You may find, therefore, that while Ethernet is completely suitable for all of the networked workstations within a particular building, an FDDI link (which actually comprises a network in itself) would be a better choice for a backbone connecting all of the servers at higher speeds.
The growing trend today is towards heterogeneous networks, an amalgam of varying network types interconnected into a single entity. This and the increasing popularity of wide area network (WAN) links between remote sites has made it necessary for the LAN administrator to have knowledge of all of these network types. It is only in this way that the proper ones can be chosen to satisfy the particular needs of an installation.
With over 40 million nodes installed around the world, Ethernet is, by far, the most commonly found network type in use today. As an open standard from the very outset, its huge popularity has led to a gigantic market for Ethernet hardware, thus keeping the quality up and the prices down. The Ethernet standards are mature enough for them to be very stable, and compatibility problems between Ethernet devices produced by different manufacturers are comparatively rare.
Originally conceived in the 1970s by Dr. Robert M. Metcalfe, Ethernet has had a long history and has been implemented using a number of different media types and topologies over the years, which makes it an excellent platform with which to learn about low-level networking processes. One of the keys to its longevity was a number of remarkably foresighted decisions on the parts of its creators. Unlike other early network types that ran at what are today perceived to be excessively slow speeds, such as StarLAN's 1 Mbps and ARCnet's 2.5 Mbps, Ethernet was conceived from the outset to run at 10 Mbps.
It is only now, 20 years later, that a real need for greater speed than this has been realized, and the Ethernet specifications are currently being revised to allow network speeds of 100 Mbps as well. A number of competing standards are vying for the approval of the marketplace in this respect, but it is very likely that "Fast Ethernet," in some form or other, will be a major force in the industry for many years to come.
Of course, as so often seems to be the case in the computing industry, nomenclature is never easy, and what is generally referred to as Ethernet actually bears a different and more technically correct name. The original Ethernet standard was developed by a committee composed of representatives from three large corporations: Digital Equipment Corporation, Intel, and Xerox. Published in 1980, this standard has come to be known as the DIX Ethernet standard (after the initials of the three companies). A revision of the standard, known as Ethernet II, was published in 1982. This document was then passed to the Institute of Electrical and Electronics Engineers (IEEE) for industry-wide standardization. The IEEE is a huge organization of technical professionals that, among other things, sponsors a group devoted to the development and maintenance of electronic and computing standards. The resulting document, ratified in 1985, was officially titled the "IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications." This unwieldy title should make it clear why most people in the industry retained the name Ethernet, despite the fact that nearly all of the hardware sold today is actually 802.3 compliant.
In most ways, the 802.3 standard is a superset of the DIX Ethernet standard. While the original standard specifies only the use of thick Ethernet coaxial cabling and Ethernet II adds the thin coaxial variant, 802.3 adds the capability of using other cable types, such as unshielded twisted pair (UTP) and fiber optic, which have all but eclipsed thick Ethernet, or thicknet, in common network use. Other aspects of the physical layer remain the same in both standards, however. The data rate of 10 Mbps and the baseband Manchester signaling type (the way 1s and 0s are conveyed over the medium) remain unchanged, and the physical layer configuration specs for thicknet and thinnet are identical in both standards.
One source of confusion, however, is the existence of the SQE Test feature in the 802.3 standard, which is often mistakenly thought to be identical to the heartbeat feature defined in the Ethernet II document. Both of these mechanisms are used to verify that the medium attachment unit (MAU) or transceiver of a particular Ethernet interface is capable of detecting collisions, or signal quality errors (SQE). Following every packet transmission, a test signal is sent from the MAU or transceiver to the Ethernet interface over the same circuit used to report collisions. The presence of this signal verifies the functionality of the collision detection mechanism, and its absence can be logged for review by the administrator. No signal is sent out over the common, or network, medium. Use of SQE Test and heartbeat are optional settings for every device on the network, and problems have often been caused by their use, particularly when devices that use them are mixed with devices that do not. The essential difficulty is that the heartbeat function was only defined in the Ethernet II standard. It does not exist in the original Ethernet standard, and equipment of that vintage may not function properly when transceivers using heartbeat are located on the same network. In addition, the 802.3 standard specifically states that the heartbeat signal should not be used by transceivers connected to 802.3-compliant repeaters. In other words, an Ethernet II NIC connected to an 802.3 repeater must not utilize the heartbeat feature, or a conflict with the repeater's jam signal may occur.
There are other differences between the two standards, but they are, for the most part, not consequential in the actual construction and configuration of a network. The original DIX Ethernet standards cover the entire functionality of the physical and data link layers, while the IEEE standard splits the data link layer into two distinct sublayers: logical link control (LLC) and media access control (see fig. 7.1).
Figure 7.1
The OSI data link layer has two sublayers.
The top half of the OSI model's data link layer, according to the IEEE specifications, is the LLC sublayer. The function of the LLC is to effectively isolate all of the functions that occur below this layer from all of the functions occurring above it. The network layer (that is, the layer just above the data link layer in the OSI model) must receive what appear to be error-free transmissions from the layer below. The protocol used to implement this process is not part of the 802.3 standard. In the IEEE implementation of the OSI model, the LLC is defined by the 802.2 standard, which is shared by the other network types defined in the IEEE 802 standards, 802.3 among them. Utilizing a separate frame within the data field of the 802.3 frame, the LLC defines error-recovery mechanisms other than those specified in the MAC sublayer, provides flow control that prevents a destination node from being overwhelmed with delivered packets, and establishes logical connections between the sending and receiving nodes.
The other half of the data link layer, the MAC sublayer, as mentioned earlier, arbitrates access to the network medium by the individual network devices. For both of the standards, this method is called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). This protocol has remained unchanged ever since the original DIX standard and is equally applicable to all media types and topologies used in Ethernet installations. The way this protocol works is discussed in the following section.
CSMA/CD: The Ethernet MAC Mechanism. When a networked device, also referred to as a station, a node, or a DTE (data terminal equipment), wants to send a packet (or series of packets), it first listens to the network to see if it is currently being utilized by signals from another node. If the network is busy, the device continues to wait until it is clear. When the network is found to be clear, the device then transmits its packet.
The possibility exists, however, for another node on the network to have been listening at the same time. When both nodes detect a clear network, they may both transmit at precisely the same time, resulting in a collision (also known in IEEE-speak as an SQE, or signal quality error). In this instance, the transceiver at each node is capable of detecting the collision and both begin to transmit a jam pattern. This can be formed by any sequence of 32-48 bits other than the correct CRC value (cyclical redundancy check, an error-checking protocol) for that particular packet. This is done so that notification of the collision can be propagated to all stations on the network. Both nodes will then begin the process of retransmitting their packets.
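The CRC mentioned above is worth a brief illustration. Ethernet's 32-bit frame check uses the same polynomial as common library routines such as Python's zlib.crc32, so a rough sketch of how a receiver detects a damaged frame might look like the following (bit ordering and other transmission details are glossed over):

```python
import zlib

def append_fcs(frame):
    """Append a 32-bit CRC to the outgoing frame bytes."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(wire_bytes):
    """Recompute the CRC at the receiver and compare it with the appended value."""
    data, fcs = wire_bytes[:-4], wire_bytes[-4:]
    return zlib.crc32(data).to_bytes(4, "little") == fcs

wire = append_fcs(b"destination+source+length+payload")
assert fcs_ok(wire)                        # an intact frame passes the check
damaged = bytes([wire[0] ^ 0xFF]) + wire[1:]
assert not fcs_ok(damaged)                 # a single damaged byte is detected
```

Any corruption of the frame in transit (including a jam pattern overwriting part of it) leaves the appended check value inconsistent with the data, which is how the receiver knows to discard the packet.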
To attempt to avoid repeated collisions, however, each node waits for a randomly selected delay interval before retransmitting. If further collisions occur, the competing nodes then begin to back off--that is, double the range of delay intervals from which one is randomly chosen. This process is called truncated binary exponential backoff. As a result of its use, repeated collisions between the same two packets become increasingly unlikely: the greater the number of values from which each node may select, the lesser the likelihood that they will choose the same value.
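The backoff procedure just described can be sketched in a few lines of Python. This is a simplified model, not code from any standard document: the 51.2-microsecond slot time is the 10 Mbps figure, the doubling of the range is capped at the tenth attempt, and the packet is abandoned after the sixteenth.

```python
import random

SLOT_TIME = 51.2e-6      # seconds; 512 bit times at 10 Mbps
MAX_ATTEMPTS = 16        # the packet is discarded after 16 failed attempts
BACKOFF_LIMIT = 10       # the range stops doubling after the tenth attempt

def backoff_delay(attempt):
    """Return the delay, in seconds, before retransmission number `attempt`."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("packet discarded: too many collisions")
    k = min(attempt, BACKOFF_LIMIT)
    # Choose a random number of slot times from 0 .. 2^k - 1;
    # the range doubles with each successive collision.
    return random.randrange(2 ** k) * SLOT_TIME

# After the first collision the delay is 0 or 1 slot times;
# after the second it is 0..3 slot times, and so on.
print(backoff_delay(1), backoff_delay(2))
```

The key property is the widening range: two stations drawing from {0, 1} collide again half the time, but two stations drawing from {0 .. 1023} almost never do.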
It should be noted that collisions are a normal occurrence on a network of this type, and while they do result in slight delays, they should not be cause for alarm unless their number is excessive. Perceptible delays should not occur on an Ethernet network until the utilization of the medium approaches 80%. This figure means that a transmission is occurring somewhere on the network 80% of the time. A typical Ethernet network should have an average utilization rate of 30-40%, meaning that the cable is idle 60-70% of the time, allowing stations to initiate transmissions with relative ease.
A collision can only occur during the brief period after a given station has begun to transmit and before the first 64 bytes of data are sent. This interval is known as the slot time, and it represents the amount of time that it takes for the transmission to be completely propagated across the network. On a maximum-length network, 64 bytes will completely fill the wire, end to end, ensuring that all other stations are aware of the transmission in progress and preventing them from initiating a transmission themselves. This is why all Ethernet packets must be at least 64 bytes in length. A transmitted packet smaller than 64 bytes is called a runt, and it may be involved in a collision after it has completely left the host adapter--a condition known as a late collision. The packets involved in a late collision cannot be retransmitted by the normal backoff procedures. It is left to the protocols operating at the higher levels of the OSI model to detect the lost packets and request their retransmission, which they often do not do as quickly or as well, possibly resulting in a fatal network error.
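The arithmetic behind the 64-byte minimum is straightforward and can be checked in a few lines (Python; 10 Mbps is the standard Ethernet data rate):

```python
BITS_PER_BYTE = 8
MIN_FRAME_BYTES = 64          # smallest legal Ethernet frame
DATA_RATE = 10_000_000        # bits per second at 10 Mbps

min_frame_bits = MIN_FRAME_BYTES * BITS_PER_BYTE      # 512 bits
slot_time = min_frame_bits / DATA_RATE                # seconds on the wire

print(min_frame_bits, "bits;", slot_time * 1e6, "microseconds")
# A station is therefore guaranteed to still be transmitting for at least
# 51.2 microseconds, long enough for a collision anywhere on a
# maximum-length network to propagate back to it.
```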
Late collisions do not always involve runts. They can also be caused by cable segments that are too long, faulty Ethernet interface equipment, or too many repeaters between the transmitting and receiving stations. When a segment displays an inordinately large number of collisions for the traffic that it is carrying, it is likely that late collisions are a cause of the excess. While a certain number of collisions are expected on an Ethernet network, as we have discussed, late collisions indicate the existence of a serious problem that should be addressed immediately.
A packet can be retransmitted up to 16 times before it is discarded and actual data loss occurs. Obviously, the more highly trafficked the network, the more collisions there will be, with network performance degrading accordingly. This is why the Ethernet standards all impose restrictions on the number of nodes that can be installed on a particular network segment, as well as on the overall length and configuration of the medium connecting them. A segment that is too long can allow a packet's round trip to exceed the maximum delay prescribed in the specifications (the 51.2-microsecond slot time), thus causing the collision detection mechanisms to react when, in fact, no collision has occurred. Too many nodes will naturally result in an increased number of collisions, and in either case, network performance will degrade considerably.
The Capture Effect. Although large numbers of collisions can be dealt with by an Ethernet network with no loss of data, performance can be hampered in ways beyond the simple need to repeatedly retransmit the same packets. One phenomenon is known as the capture effect. This can occur when two nodes both have a long stream of packets to be sent over the network. When an initial collision occurs, both nodes will initiate the backoff process and, eventually, one of them will randomly select a lower backoff value than the other and win the contention. Let's say that Node A has done this and has successfully retransmitted its packet, while Node B has not. Now, Node A attempts to transmit its second packet while Node B is still trying to retransmit its first. A collision occurs again, but for Node A, it is the first collision occurring for this packet, while for Node B, it is the second. Node A will randomly select from the numbers 0 and 1 when calculating its backoff delay. Node B, however, will be selecting from the numbers 0, 1, 2, and 3 because it is the second collision for this packet, and the number of possible values increases with each successive backoff.
Already, the odds of winning the contention are in Node A's favor. Each iteration of the backoff algorithm causes longer delay times to be added to the set of possible values. Therefore, probability dictates that Node B is likely to select a longer delay period than Node A, causing A to transmit first. Once again, therefore, Node A successfully transmits and proceeds to attempt to send its third packet, while Node B is still trying to send its first packet for the third time, increasing its backoff delay factor each time. With each repeated iteration, Node B's chances of winning the contention with Node A are reduced, as its delay time is statistically increasing, while that of Node A remains the same. It becomes increasingly likely, therefore, that Node B will continue to lose its contentions with Node A until either Node A runs out of packets to send, or Node B reaches 16 transmission attempts, after which its packet is discarded and the data lost.
Thus, in effect, Node A has captured the network for sixteen successive transmissions, due to its having won the first few contentions. Various proposals for a means to counteract this effect are currently being considered by the IEEE 802 committee, among them a different backoff algorithm called the binary logarithmic access method (BLAM). While the capture effect is not a major problem on most Ethernet networks, it is discussed here to illustrate the complex ways that heavy traffic patterns can affect the performance of an Ethernet network. Other MAC protocols, such as that used by token-ring networks, are not subject to this type of problem, as collisions are not a part of their normal operational specifications. This is another reason why it is important for the proper network type to be selected for the needs of a particular organization.
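The dynamic described above can be modeled with a small simulation. This is a deliberately simplified sketch in Python: contention is reduced to comparing randomly chosen backoff values, with equal draws treated as another collision, which is enough to show the bias toward the first winner.

```python
import random

def capture_fraction(trials=2000, rounds=10, seed=7):
    """Once node A has won the first contention, measure the fraction of
    the following contentions that A also wins.  The winner of each round
    starts its next packet with a fresh backoff range (attempt 1); the
    loser's range keeps doubling, as in truncated binary exponential backoff."""
    rng = random.Random(seed)

    def contend(a_attempt, b_attempt):
        # Both nodes draw a delay from 0 .. 2^attempt - 1 slot times; equal
        # draws model another collision, so both ranges double and they redraw.
        while True:
            a = rng.randrange(2 ** min(a_attempt, 10))
            b = rng.randrange(2 ** min(b_attempt, 10))
            if a != b:
                return a < b, a_attempt, b_attempt
            a_attempt += 1
            b_attempt += 1

    wins = total = done = 0
    while done < trials:
        a_won, a_att, b_att = contend(1, 1)
        if not a_won:
            continue          # by symmetry, study only the trials A wins first
        done += 1
        a_att, b_att = 1, b_att + 1
        for _ in range(rounds):
            a_won, a_att, b_att = contend(a_att, b_att)
            total += 1
            if a_won:
                wins += 1
                a_att, b_att = 1, b_att + 1
            else:
                a_att, b_att = a_att + 1, 1
    return wins / total

print("fraction of follow-up contentions won by A:", capture_fraction())
```

Running this shows that once a node has won a single contention, it goes on to win well over half of the subsequent ones, just as the analysis above predicts.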
The Ethernet/802.3 Frame Specification. The process of sending data from one device on a network to another is, as stated earlier, a very complex affair. A request will originate at the highest level of the OSI model and, as it travels down through the layers to the physical medium itself, be subject to various levels of encoding, each of which adds a certain amount of overhead to the original request. Each layer accepts input from the layer above and encases it within a frame designed to its own specifications. This entire construction is then passed on to the next layer, which includes it, frame and all, in the payload area of its own specialized frame. When the packet reaches the physical layer, it consists of frames within frames within frames and has grown significantly in size. An average packet may ultimately contain more bits devoted to networking overhead than to the actual information being transmitted.
By the time that a request has worked its way down to the data link layer, additional data from every layer of the network model has been added. The upper layers add information regarding the application generating the request. Moving farther down, information concerning the transport protocol being utilized by the network operating system (NOS) is added. The request is also broken down into packets of appropriate size for transport over the network, with another frame added containing the information needed to reassemble the packets into the proper order at their destination. The LLC sublayer adds its own frame to provide error and flow control. Several other processes are interspersed throughout, all adding data that will be needed to process the packet at its destination. Once it has reached the MAC sublayer, all that remains to be done is to see that the packet is addressed to the proper destination, the physical medium is accessed properly, and the packet arrives at its destination undamaged.
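The successive wrapping described above can be sketched as follows. The layer names are real, but the header sizes are hypothetical placeholders, chosen only to show how overhead accumulates; for simplicity, every layer's bytes are shown as a prefix.

```python
# Each layer wraps the unit it receives from the layer above in its own
# header (and sometimes a trailer).  Sizes here are illustrative only.
LAYERS = [
    ("transport header", 20),
    ("network header", 20),
    ("LLC header", 3),
    ("MAC header and trailer", 18),
]

payload = b"x" * 100               # the application data being sent

unit = payload
for name, header_bytes in LAYERS:
    unit = b"\0" * header_bytes + unit
    print(f"after adding {name}: {len(unit)} bytes")

overhead = len(unit) - len(payload)
print(f"{overhead} bytes of overhead for {len(payload)} bytes of data")
```

With a small payload like this one, the protocol overhead approaches the size of the data itself, which is exactly the point made above about frames within frames.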
The composition of the frame specified by the IEEE 802.3 standard is illustrated in figure 7.2. The frame defined by the original Ethernet specification differs slightly, but only in the arrangement of the specified bits; the functions it provides are identical to those of the 802.2 and 802.3 specifications in combination.
Figure 7.2
The fields that comprise the IEEE 802.3 Frame Format.
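The layout shown in figure 7.2 can also be expressed as a short sketch (in Python). The field sizes follow the published 802.3 frame format; the data field is shown here at its 46-byte minimum, but it may carry up to 1500 bytes.

```python
# The fields of an IEEE 802.3 frame, in transmission order, with their
# sizes in bytes.  The data field carries the LLC frame and the payload.
FRAME_FIELDS = [
    ("preamble", 7),             # alternating bits used for synchronization
    ("start frame delimiter", 1),
    ("destination address", 6),
    ("source address", 6),
    ("length", 2),               # length of the data field
    ("data (and pad)", 46),      # 46-byte minimum; up to 1500 bytes
    ("frame check sequence", 4),
]

# The 64-byte minimum frame size counts everything after the start
# frame delimiter.
minimum_frame = sum(size for name, size in FRAME_FIELDS
                    if name not in ("preamble", "start frame delimiter"))
print("minimum frame size:", minimum_frame, "bytes")
```

Note that the 64-byte minimum discussed earlier falls out of these sizes directly: 6 + 6 + 2 + 46 + 4 = 64.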
The 802.3 standard has been revised over the years to add several different media types as they have come into popularity. Only thick Ethernet is part of the original DIX Ethernet specification. The IEEE standard defines not only the types of cabling and connectors to be used, but also imposes limitations on the length of the cables in an individual network segment, the number of nodes that can be installed on any one segment, and the number of segments that can be joined together to form a network.
For these purposes, a network is defined as a series of computers connected so that collisions generated by any single node are seen on the network medium by every other node. In other words, when Node A attempts to transmit a packet to Node Z and a collision occurs, the jam pattern is completely propagated around the network and may be seen by all of the nodes connected to it. A segment is defined as a length of network cable bounded by any combination of terminators, repeaters, bridges, or routers. Thus, two segments of Ethernet cabling may be joined by a repeater (which is a signal amplifying and retiming device, operating purely on an electrical level, that is used to connect network segments), but as long as collisions are seen by all of the connected nodes, there is only one network involved. This sort of arrangement may also be described as forming a single collision domain--that is, a single CSMA/CD network where a collision occurs if two nodes transmit at the same time.
Conversely, a packet-switching device such as a bridge or a router may be used to connect two disparate network segments. These devices, while they allow the segments to appear as one entity at the network layer of the OSI model and above, isolate the segments at the data link layer, preventing the propagation of collisions between the two segments. This is more accurately described as an internetwork, or a network of networks. Two collision domains exist because two nodes on opposite sides of the router can conceivably transmit at the same moment without incurring a collision.
Thick Ethernet. The original form in which Ethernet networks were realized, thick Ethernet, is also known colloquially as thicknet, "frozen yellow garden hose," or by its IEEE designation: 10Base5. This latter is a shorthand expression that has been adapted to all of the media types supported by the Ethernet specification. The "10" refers to the 10 Mbps transfer rate of the network, "Base" refers to Ethernet's baseband transmitting system (meaning that a single signal occupies the entire bandwidth of the medium), and the "5" refers to the 500 meter segment length limitation.
Thicknet is used in a bus topology. The topology of a network refers to the way in which the various nodes are interconnected. A bus topology means that each node is connected in series to the next node (see fig. 7.3). At both ends of the bus there must be a 50-ohm terminating resistor, so that signals reaching the end of the medium are not reflected back.
Figure 7.3
This is a basic 10Base5 thicknet network.
The actual network medium of a thicknet network is 50-ohm coaxial cable. Coaxial cable is so named because it contains two electrically separated conductors within one sheath (see fig. 7.4). A central core consisting of one conductor is wrapped with a stiff insulating material and then surrounded by a woven mesh tube that serves as the second conductor. The entire assembly is then encased in a tough PVC or Teflon insulating sheath that is yellow or brownish-orange in color. The Teflon variant is used for plenum-rated cable, which may be required by fire regulations for use in ventilation ducts, also known as plenums. The overall package is approximately 0.4 inches in diameter and more inflexible than the garden hose it is often likened to.
Figure 7.4
Here is a cutaway view of a coaxial cable.
As a network medium, coaxial cable is heavy and difficult to install properly. Installing the male N-type coaxial connectors at each end of the cable can be a difficult job, requiring the proper stripping and crimping tools and a reasonable amount of experience. With all coaxial cables, the installation is only as good as the weakest connection, and problems may occur as the result of bad connections that can be extremely subtle and difficult to troubleshoot. Indeed, with thicknet, it is usually recommended that the cable be broken in as few places as possible and that all of the segments used on a single network come from the same cable lot (that is, from a single spool or from spools produced in the same batch by a single manufacturer). When segments of cable from different lots must be used, the 802.3 specification recommends that they be either 23.4, 70.2, or 117 meters long, to minimize the signal reflections that may occur due to variations in the cable itself. The specification also calls for the network to be grounded at only one end. This causes additional installation difficulties, as care must be taken to prevent any of the other cable connectors from coming in contact with a ground.
The sheer size of the thicknet cable makes it an excellent conducting medium. The maximum length for a thicknet segment is 500 meters, much longer than that of any other copper medium. It also provides excellent shielding against electromagnetic interference and suffers relatively little attenuation, making it ideal for industrial applications where other machinery may interfere with the operation of thinner network media. Thicknet has also been used to construct backbones connecting servers at distant locations within the same building. Electrical considerations, however, preclude its use for connections between buildings, as is the case with any copper-based medium.
Media Access Components. All Ethernet types utilize the same basic components to attach the network medium to the Ethernet interface within the computer. This is another area in which the 802.3 standard differs from the DIX Ethernet standard, but the differences are only in name. The components are identical, but they are referred to by different designations in the two documents. Both are provided here, as the older Ethernet terminology is often used, even when referring to an 802.3 installation.
Thicknet is an exemplary model for demonstrating the different components of the interface between the network cable and the computer. The relative inflexibility of the cable prevents it from being installed so that it directly connects to the Ethernet interface, as most of the other medium types do. Components that are integrated into the network adapter in thinnet or UTP installations are separate units in a thicknet installation.
The actual coaxial cable-to-Ethernet interface connection is made through a medium dependent interface (MDI). Two basic forms of MDI exist for thicknet. One is known as an intrusive tap because its installation involves cutting the network cable (thereby interrupting network service), installing standard N connectors on the two new ends, and then linking the two with a barrel connector that also provides the connection that leads to the computer. This method is far less popular than the non-intrusive tap (often called a vampire tap), which is installed by drilling a hole into the coaxial cable and attaching a metal and plastic clamp that makes an electrical connection to the medium. This type of MDI can be installed without interrupting the use of the network and without incurring any of the signal degradation problems that highly segmented thicknet cables are subject to.
The MDI is, in turn, directly connected to a medium attachment unit (MAU). This is referred to as a transceiver in the DIX Ethernet standard, as it is the unit that actually transmits data to and receives it from the network. In addition to the digital components that perform the signaling operations, the MAU also has analog circuitry that is used for collision detection. In most thicknet installations, the MAU and the MDI are integrated into a single unit that clamps onto the coaxial cable.
The 802.3 specification allows for up to 100 MAUs on a single network segment, each of which must be separated from the next by at least 2.5 meters of coaxial cable. The cabling often has black stripes on it to designate this distance. These limitations are intended to curtail the amount of signal attenuation and interference that can occur on any particular area of the network cable.
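As a quick arithmetic check (this is not stated in the standard itself, just a consequence of its figures), the two limits are mutually consistent: 100 MAUs at the minimum 2.5-meter spacing require only 99 gaps of cable between them, which fits comfortably within a 500-meter segment. A minimal sketch in Python:

```python
# Sanity check on the 802.3 thicknet limits: do 100 MAUs at the
# minimum 2.5-meter spacing fit on one 500-meter segment?
MAX_MAUS = 100
MIN_SPACING_M = 2.5
MAX_SEGMENT_M = 500.0

cable_needed = (MAX_MAUS - 1) * MIN_SPACING_M  # 99 gaps between 100 taps
print(cable_needed)                    # 247.5 meters
print(cable_needed <= MAX_SEGMENT_M)   # True: all 100 taps fit
```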
The thicknet MAU has a male 15-pin connector that is used to connect to an attachment unit interface (AUI) cable, also known as a transceiver cable. This cable, which can be no more than 50 meters long, is then attached, with a similar connector, to the Ethernet interface on the computer, from which it receives both signals and power for the operation of the MAU. Other AUI cables are available that are thinner and more manageable than the standard 0.4 inch diameter ones, but they are limited to a shorter distance between the MAU and the Ethernet interface, often 5 meters or less.
While thicknet does offer some advantages in signal strength and segment length, its higher cost, difficulty of installation and maintenance, and limited upgrade capabilities have all but eliminated it from use except in situations where its capabilities are expressly required. As with thinnet, the other coaxial network type in use today, thicknet is and always will be limited to 10 Mbps. The new high-speed standards being developed today are designed solely for use with twisted-pair or fiber-optic cabling. Despite the obsolescence of the medium itself, however, it is a tribute to the designers of the original Ethernet standard that the underlying concepts of the system have long outlived the original physical medium on which it was based.
Thin Ethernet. Thin Ethernet, also known as thinnet, cheapernet, or 10Base2 (despite the fact that its maximum segment length is 185, not 200, meters), was standardized in 1985 and quickly became a popular alternative to thicknet. Although still based on a 50-ohm coaxial cable, thinnet, as the name implies, uses RG-58 cabling, which is much narrower (about 3/16 of an inch) and more flexible (see fig. 7.5), allowing the cable to be run to the back of the computer, where it is directly attached to the Ethernet interface. The cable itself is composed of a metallic core (either solid or stranded), surrounded by an insulating, or dielectric, layer, then a second conducting layer made of aluminum foil or braided strands, which functions both as a ground and as a shield for the central conductor. The entire construction is then sheathed with a tough insulating material for protection. Several different types of RG-58 cable exist, and care should be taken to purchase one with the appropriate impedance (approximately 50 ohms) and velocity of propagation rating (approximately 0.66). A network adapter for a thinnet network has the AUI, MAU, and MDI integrated into the expansion card, so there are no separate components to be purchased and accounted for.
Figure 7.5
This is a thinnet cable with a BNC connector attached.
Unlike thicknet, which may be tapped for attachment to a computer without breaking the cable, individual lengths of thinnet cabling are used to run from one computer to the next in order to form the bus topology (see fig. 7.6). At each Ethernet interface, a "T" connector is installed. This is a metal device with three Bayonet-Neill-Concelman-type connectors (BNC): one female for attachment to the NIC in the computer, and two males for the attachment of two coaxial cable connectors (see fig. 7.7). The cable at each machine must have a female BNC connector installed onto it, which is attached to the "T." Then a second length of cable, similarly equipped, is attached to the third connector on the "T" and used to run to the next machine. There are no guidelines in the standard concerning cable lots or the number of breaks that may be present in thinnet cabling. The only rule, in this respect, is that no cable segment may be less than 0.5 meters long.
Figure 7.6
This is a basic 10Base2 thinnet network.
Figure 7.7
This is a thinnet BNC "T" connector.
Thinnet cables of varying lengths can be purchased with connectors already attached to them, but it is far more economical to buy bulk cable on a spool and attach the connectors yourself. Some special tools are needed, such as a stripper that exposes the bare copper of the cabling in the proper way and a crimper that squeezes the connectors onto the ends of the cable, but these can be purchased for $50-75 or less. Attaching the connectors to the cable is a skill best learned by watching the procedure performed. It requires a certain amount of practice, but it is worth learning if you are going to be maintaining a thinnet network, because the single largest maintenance problem with this type of network is faulty cable connections.
Since thinnet requires no hub or other central connection point, it has the advantage of being a rather portable network. The simplest and most inexpensive Ethernet network possible can be created by installing NICs into two computers, attaching them with a length of thinnet cable, and installing a peer-to-peer operating system such as Windows for Workgroups. This sort of arrangement can be expanded, contracted, assembled, and disassembled at will, allowing a network to be moved to a new location with little difficulty or expense.
Thinnet cabling can be installed within the walls of an office, but remember that there always must be two wires extending to the T connector at the back of each computer. This often results in installations that are not as inconspicuous as might be desired in a corporate location. Thinnet cabling can also be left loose to run along the floor of a site, allowing for easy modification of the cabling layout, but this exposes the connectors to greater abuse from everyday foot traffic. Loose connectors are a very common cause of quirky behavior on thinnet networks, and it can often be extremely difficult to track down the connection that is causing the problem. The purchase of a good quality cable tester is highly recommended.
It is also important to note that, unlike thicknet, the thinnet cabling must extend directly to the NIC on the computer. A length of cabling running from the T connector to the NIC, also known as a stub cable, is not acceptable in this network type, although it may seem to function properly at first. The 802.3 specification calls for a distance of no more than 4 centimeters between the MDI on the NIC and the coaxial cable itself. The use of stub cables causes signal reflections on the network medium, resulting in increased numbers of packet retransmissions, thus slowing down the performance of the network. On highly trafficked segments, this can even lead to frame loss if the interference becomes too great.
Like thicknet, thinnet must be terminated at both ends of the bus that comprises each segment. 50-ohm terminating resistors built into a BNC connector are used for this purpose. The final length of cable is attached to the last machine's T connector along with the resistor plug, effectively terminating the bus. Although it is not specified in the standard, a thinnet network can also be grounded, but as with thicknet, it should only be grounded in one place. All other connectors should be insulated from contact with an electrical ground.
Due to the increased levels of signal attenuation and interference caused by the narrower gauge cabling, thinnet is limited to a maximum network segment length of 185 meters, with no more than 30 MAUs installed on that segment. As with thicknet, repeaters can be used to combine multiple segments into a single collision domain, but it should be noted that the MAUs within the repeater count towards the maximum of 30.
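These distance limits translate directly into signal propagation delay, one of the timing factors discussed later in this chapter. As an illustration (the 0.66 velocity-of-propagation figure is the approximate RG-58 rating cited earlier, not a value taken from the 802.3 standard), the one-way delay across a full-length thinnet segment can be estimated as follows:

```python
# Rough one-way propagation delay across a 185-meter thinnet segment,
# assuming the approximate 0.66 velocity of propagation cited for RG-58.
C = 3.0e8           # speed of light in a vacuum, meters per second
VOP = 0.66          # fraction of c at which signals travel in the cable
SEGMENT_M = 185.0
BIT_TIME_S = 1e-7   # one bit time at 10 Mbps = 0.1 microseconds

delay_s = SEGMENT_M / (C * VOP)
print(round(delay_s * 1e6, 2))         # ~0.93 microseconds one way
print(round(delay_s / BIT_TIME_S, 1))  # ~9.3 bit times at 10 Mbps
```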
Unshielded Twisted Pair. In the same way that thinnet overtook thicknet in popularity in the late 1980s, so the use of unshielded twisted-pair cabling (UTP) has come to be the dominant Ethernet medium since its addition to the 802.3 standard in 1989. This revision of the standard is known as the 802.3i 10BaseT specification. Other UTP-based solutions did, however, exist prior to the ratification of the standard--most notably LattisNet, a system developed by Synoptics that at one time was on its way toward becoming an industry standard itself. LattisNet is not compatible with 10BaseT, though, as the latter synchronizes signals at the sending end and the former at the receiving end.
A UTP or 10BaseT Ethernet network is an adaptation of the cabling commonly used for telephone systems to LAN use. The T in 10BaseT refers to the way in which the two or more pairs of wires within an insulated sheath are twisted together throughout the entire length of the cable. This is a standard technique used to improve the signal transmission capabilities of the medium.
The greatest advantages to 10BaseT are its flexibility and ease of installation. Thinner than even thinnet cable, UTP cabling is easily installed within the walls of an existing site, providing a neat, professional-looking installation in which a single length of cable attaches each DTE device to a jack within a wall plate, just as a telephone is connected. Some sites have even adapted existing telephone installations for the use of their computer networks.
Many different opinions exist concerning the guidelines by which 10BaseT cabling should be installed. For example, the EIA/TIA-569 standard for data cable installation states that data cable should not be run next to power cables, but in most cases, this practice does not show any adverse effect on a 10BaseT network. This is because any electrical interference will affect all of the pairs within the cable equally. Most of the interference should be negated by the twists in the cable, but any interference that is not should be ignored by the receiving interface because of the differential signaling method used by 10BaseT.
Another common question is whether or not the two pairs of wires in a standard four-pair UTP cable run that are unused by Ethernet communications may be used for another purpose. The general consensus is that these may be used for digital telephone connections but not for standard analog telephone because of the high ring voltage. Connections to other resources (such as minicomputers or mainframes) are also possible, but using the cable for other connections may limit the overall length of the segment. The only way to know this for sure is to try using the extra pairs under maximum load conditions, and then test to see if problems occur.
Unlike both thicknet and thinnet, 10BaseT is not installed in a bus topology. Instead, it uses a distributed star topology, in which each device on the network has a dedicated connection to a centralized multiport repeater known as a concentrator or hub (see fig. 7.8). The primary advantage to this topology is that a disturbance in one cable affects only the single machine connected by that cable. Bus topologies, on the other hand, are subject to the "Christmas light effect," in which one bad connection will interrupt network communications not only to one machine but to every machine down the line from that one. The greater amount of cabling needed for a 10BaseT installation is offset by the relatively low price of the cable itself, but the need for hubs containing a port for every node on the network adds significantly to the overall price of this type of network. Two devices can be directly connected with a 10BaseT cable that provides signal crossover, without an intervening hub, but only two, resulting in an effective, if minimal, network.
Figure 7.8
This is a basic 10BaseT UTP network.
While the coaxial cable used for the other Ethernet types is relatively consistent in its transmission capabilities, allowing for specific guidelines as to segment length and other attributes, the UTP cable used for 10BaseT networks is available in several grades that determine its transmission capabilities. Table 7.1 lists the various data grades and their properties. IBM (of course) has its own cable designations. These are listed in the section on Token Ring networks later in this chapter. The 802.3i standard specifies the maximum length of a 10BaseT segment to be 100 meters from the hub to the DTE, using Category 3 UTP cable, also known as voice grade UTP. This is the standard medium used for traditional telephone installations, and the 802.3i document was written on the assumption that many sites would be adapting existing cable for network use. This cable typically is 24 AWG (American Wire Gauge, a standard for measuring the diameter of a wire), copper tinned, with solid conductors, 100-105-ohm characteristic impedance, and a minimum of two twists per foot.
Table 7.1  UTP Cable Categories

Category   Speed            Used For
2          Up to 1 Mbps     Telephone Wiring
3          Up to 16 Mbps    Ethernet 10BaseT
4          Up to 20 Mbps    Token Ring, 10BaseT
5          Up to 100 Mbps   10BaseT, 100BaseT
In new installations today, however, the use of Category 5 cabling and attachment hardware is becoming much more prevalent. A Category 5 installation will be much less liable to signal crosstalk (the bleeding of signals between the transmit and receive wire pairs within the cable) and attenuation (the signal lost over the length of the cable) than Category 3, allowing for greater segment lengths and, more importantly, future upgrades to the 100 Mbps Fast Ethernet standards now under development. The 100-meter segment length is an estimate provided by the specification, but the actual limiting factor involved is the signal loss from source to destination, measured in decibels (dB). The maximum allowable signal loss for a 10BaseT segment is 11.5 dB, and the quality of the cable used will have a significant effect on its signal carrying capabilities.
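The relationship between the loss budget and cable quality can be expressed as a simple calculation. The per-100-meter loss figure used below is purely hypothetical, chosen for illustration; real values come from the cable manufacturer's specifications:

```python
# The real limit on a 10BaseT segment is total signal loss, not raw
# distance. The 10 dB-per-100-meter figure below is a hypothetical
# illustration, not a number from any cable datasheet.
MAX_LOSS_DB = 11.5  # 802.3i loss budget for a 10BaseT segment

def segment_within_budget(length_m, loss_db_per_100m):
    """Return True if a run of this length stays inside the loss budget."""
    total_loss_db = length_m / 100.0 * loss_db_per_100m
    return total_loss_db <= MAX_LOSS_DB

# A hypothetical cable losing 10 dB per 100 meters fits at 100 meters
# but not at 120:
print(segment_within_budget(100, 10.0))  # True  (10.0 dB <= 11.5 dB)
print(segment_within_budget(120, 10.0))  # False (12.0 dB > 11.5 dB)
```

A lower-loss (higher-category) cable simply allows a longer run before the same 11.5 dB budget is exhausted, which is why cable grade, not the 100-meter estimate, is the real limiting factor.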
10BaseT segments utilize standard 8-pin RJ-45 (RJ stands for registered jack) telephone type connectors (see fig. 7.9) both at the hub and at the MDI. Usually the cabling will be pulled within the walls or ceiling of the site from the hub to a plate in the wall near the computer or DTE. A patch cable is then used to connect the wall socket to the NIC itself. This provides a connection between the two MAUs on the circuit, one integrated into the hub and the other integrated into the network interface of the DTE. Since UTP cable utilizes separate pairs of wires for transmitting and receiving, however, it is crucial that the transmit pair from one MAU be connected to the receive pair on the other, and vice versa. This is known as signal crossover, and it can be provided either by a special crossover cable or it can be integrated into the design of the hub. The latter solution is preferable because it allows the entire wiring installation to be performed "straight through," without concern for the crossover. The 802.3i specification requires that each hub port containing a crossover circuit be marked with an "X" to signify this.
Figure 7.9
An RJ-45 connector looks like a telephone cable connector.
While existing Category 3 cable can be used for 10BaseT, for new cable installations, the use of Category 5 cable is strongly recommended. Future developments in networking will never give cause to regret this decision, and the savings on future upgrades will almost certainly outweigh the initial expense. In addition, if cost is a factor, considerable savings can be realized by pulling Category 5 cable and utilizing Category 3 hardware for the connectors. These can later be upgraded without the need for invasive work.
The hubs used for a 10BaseT network may contain up to 132 ports, enabling the connection of that many devices, but multiple hubs can be connected to each other using a 10Base2 or other type of segment, or the 10BaseT ports themselves (as long as signal crossover is somehow provided). Up to three mixing segments connecting multiple hubs can be created, supporting up to 30 hubs each. Thus, as with the 10BaseF variants to be examined later, it is possible to install the Ethernet maximum of 1024 nodes on a single network, without violating any of the other 802.3 configuration specifications. Hubs that conform to the standard will also contain a link integrity circuit that is designed to ensure that the connection between the hub port and the DTE at the other end of the cable remains intact.
Every 1/60th of a second, a pulse is sent out of each active port on a hub. If the appropriate response is not received from the connected device, then most hubs will be able to automatically disable that port. Green LEDs on both the hub port and the NIC in the DTE will also be extinguished. This sort of link integrity checking is important for 10BaseT networks because, unlike the coaxial Ethernets, separate transmit and receive wire pairs are used. If a DTE were to have a non-functioning receive wire, due to a faulty interface, for example, it might interpret this as a quiet channel when network traffic is, in fact, occurring. This may cause it to transmit at the wrong times, perhaps even continuously, a condition known as jabber, resulting in many more collisions than the system is intended to cope with.
One of the frequent causes of problems on 10BaseT networks stems from the use of improper patch cables to connect computers to the wall socket. The standard satin cables used to connect telephones will appear to function properly when used to connect a DTE to a UTP network. However, these cables lack the twisting that is a crucial factor in suppressing the signal crosstalk that this medium is subject to. On a twisted-pair Ethernet network, collisions are detected by comparing the signals on the transmit and receive pairs within the UTP cable. When signals are detected at the same time on both pairs, the collision mechanism is triggered.
Excessive amounts of crosstalk can cause phantom collisions, which occur too late to be retransmitted by the MAC mechanisms within the Ethernet interface. These packets are therefore discarded and must later be detected and retransmitted by the upper layers of the OSI model. This process can reduce network performance considerably, especially when multiplied by a large number of computers.
Fiber-Optic Ethernet. The use of fiber-optic cable to network computers has found, with good reason, great favor in the industry. Most of the common drawbacks of copper media are virtually eliminated in this new technology. Since pulses of light are used instead of electrical current, there is no possibility of signal crosstalk and attenuation levels are far lower, resulting in much greater possible segment lengths. Devices connected by fiber-optic cable are also electrically isolated, allowing links between remote buildings to be safely created. Conducting links between buildings can be very dangerous, due to electrical disturbances caused by differences in ground potential, lightning, and other natural phenomena.
This sort of link between buildings is also facilitated by the fiber-optic cable's narrow gauge and high flexibility. Other means of establishing Ethernet connections between buildings are available, many of them utilizing unbounded, or wireless, media such as lasers, microwaves, or radio, but these are generally far more expensive and much less reliable. Fiber-optic cable is also capable of carrying far more data than the 10 Mbps defined by the Ethernet standards. Its primary drawback is its continued high installation and hardware costs, even after years on the market. For this reason, fiber-optic technology is used primarily as a backbone medium, to link servers or repeaters over long distances rather than for connections to the desktop, except in environments where electromagnetic interference (EMI) levels are high enough to prevent the use of other media.
FDDI is a fiber-optic-based network standard that supports speeds of 100 Mbps, and this will be examined later in this chapter, but there is also an Ethernet alternative known as 10BaseF that utilizes the same medium. Cabling can be installed to run at the 10 Mbps provided by the Ethernet standard and later upgraded to higher speeds by the replacement of hubs and adapters. Like 10BaseT, fiber optic uses separate cables for transmitting and receiving data, but the two are not combined in one sheath, as UTP is, nor is there any reason for them to be twisted. The Ethernet fiber standards allow the use of an MAU that is external to the NIC, such as is used by thicknet networks. The fiber-optic MAU (FOMAU) is connected to the MDI using the same type of AUI cable used by thicknet MAUs. Other 10BaseF interfaces may integrate the MAU onto the expansion card, as with the other Ethernet variants.
The first fiber-optic standard for Ethernet was part of the original DIX standard of the early 1980s. Known as the Fiber Optic Inter-Repeater Link segment (FOIRL), its purpose, as the name implies, was to link repeaters at locations up to 1,000 meters away, too distant for the other Ethernet media types to span. This also provided a convenient method for linking different network types, once the thinnet and 10BaseT media standards came into use. As prices for the fiber-optic hardware came down, however (from outlandish to merely unreasonable), some users expressed a desire to use fiber links directly to the desktop. Some equipment allowing this was marketed before there was a standard supporting such practices, but the 10BaseF additions to the 802.3 specification provide a number of fiber-based alternatives to a simple repeater link segment.
When discussing the configuration of multiple Ethernet segments, the terms link segment and mixing segment are used to describe the two fundamental types of connections between repeaters. A link segment is one that has only two connections on it--that is, a link between two DTEs only, most often used to span the distance between two remotely located networks. A mixing segment is one that contains more than two connections, usually for the purpose of attaching nodes to the network. Thus, a standard thick or thin Ethernet bus connecting any number of computers would be a mixing segment. Technically, each connection on a 10BaseT network is a link segment because there are no intervening connections between the MAU in the hub and MAU on the NIC.
Several sub-designations are specified by the 10BaseF standard, and the primary difference between them is the type of segment for which they are intended to be used.
Broadband Ethernet. Although it is not often used, there is a broadband standard for Ethernet networks. A broadband network is one in which the network bandwidth is divided or split to allow multiple signals to be sent simultaneously. This is called multiplexing, and the Ethernet variant uses a method of multiplexing called frequency division multiplexing. The concept and the cable itself are similar to those used for a cable television network. Multiple signals are all transmitted at the same time, and the receiving station chooses the appropriate one by selecting a certain frequency to monitor. This form of Ethernet is known as 10Broad36 because the maximum segment length allowed by the standard is 3,600 meters. This is far longer than any other allowable segment in the entire specification, obviously providing the capability to make connections over extremely long distances. Fiber-optic cable has become much more popular for this purpose, however, and 10Broad36 installations are few and far between.
As touched upon earlier, the key to the successful operation of an Ethernet network is the proper functioning of the media access control and collision detection mechanisms. Signals must be completely propagated around a collision domain according to specific timing specifications for the system to work reliably. The two primary factors controlled by these specifications are the round-trip signal propagation delay and the inter-packet gap.
The larger an Ethernet network is and the more segments that comprise a specific collision domain, the greater the amount of signal delay incurred as each packet wends its way to the far ends of the network medium. It is crucial to the efficient operation of the system that this round-trip signal propagation delay not exceed the limits imposed by the 802.3 specification.
When consecutive packets are transmitted over an Ethernet network, the specification calls for a specific inter-packet gap--that is, a required minimum amount of space (amounting to 9.6 microseconds) between packet transmissions. The normal operations of a repeater, when combined with the standard amount of signal disturbance that occurs as a packet travels over the network medium, can lead to a reduction in the length of this gap, causing possible packet loss. This is known as tailgating. A typical Ethernet interface is configured to pause for a brief period after reading the end of a packet. This blind time prevents the normal noise at the end of a packet from being treated as the beginning of a subsequent packet. Obviously, the blind time must be less than the inter-packet gap; it usually ranges from 1 to 4 microseconds. Should the inter-packet gap be reduced to a value smaller than the blind time, an incoming packet may not be recognized as such and will therefore be discarded.
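The gap and blind-time figures are easy to relate at the standard 10 Mbps signaling rate. This sketch simply restates the numbers just given as arithmetic:

```python
# The 9.6-microsecond inter-packet gap expressed in bit times and byte
# times at 10 Mbps, and the shrinkage margin against a worst-case
# 4-microsecond blind time.
GAP_US = 9.6
BIT_RATE_MBPS = 10  # 10 Mbps = 10 bits per microsecond

gap_bit_times = GAP_US * BIT_RATE_MBPS
print(round(gap_bit_times))        # 96 bit times
print(round(gap_bit_times) // 8)   # 12 byte times

# How far the gap can shrink before it drops below a 4-microsecond
# blind time and packets start being silently discarded:
print(round(GAP_US - 4.0, 1))      # 5.6 microseconds of margin
```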
The 802.3 specification provides two possible means of determining the limitations that must be imposed on a particular network to maintain the proper values for these two attributes. One is a complex mathematical method in which the individual components of a specific network are enumerated and assigned values based on segment lengths, number of connections, and the number and placement of repeaters. Calculations on these values yield the precise signal propagation delay and inter-packet gap figures for that installation, making it easy to determine just what can be done to that network while still remaining within the acceptable range of values provided by the specification.
This procedure is usually performed only on networks that are a great deal more complex than the models provided by the other method for configuring a multi-segment network. That method is sometimes known as the 5-4-3 rule, and it provides a series of simple guidelines to follow in order to prevent an Ethernet network from becoming too large to manage its own functions.
The basic 5-4-3 rule states that a transmission between any two devices within a single collision domain can pass through no more than five network segments, connected by four repeaters, of which no more than three of the segments are mixing segments. The transmission can also pass through no more than two MAUs and two AUIs, excluding those within the repeaters themselves. The repeaters used must also be compliant with the specifications in the 802.3 standard for their functions. When a network consists of only four segments, connected by three repeaters, then all of the segments may be mixing segments, if desired.
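The counting portion of the rule can be sketched as a small validation function. The function name and the list-of-strings representation of a DTE-to-DTE path are illustrative conveniences, not anything defined by 802.3:

```python
# A sketch of the 5-4-3 segment-counting check. A path between two DTEs
# is modeled as a list of segments, each tagged "mixing" or "link".
def path_allowed(segments):
    """Apply the 5-4-3 counting rules to one DTE-to-DTE path."""
    repeaters = len(segments) - 1       # one repeater joins each pair
    mixing = segments.count("mixing")
    if len(segments) > 5 or repeaters > 4:
        return False                    # more than 5 segments / 4 repeaters
    if len(segments) == 5 and mixing > 3:
        return False                    # at 5 segments, at most 3 may mix
    return True

print(path_allowed(["mixing", "link", "mixing", "link", "mixing"]))   # True
print(path_allowed(["mixing"] * 4))                                   # True: with 4 segments, all may mix
print(path_allowed(["mixing", "mixing", "mixing", "mixing", "link"])) # False: 4 mixing segments in a 5-segment path
```

Note that this encodes only the segment counts; the MAU/AUI limits and repeater compliance requirements from the text still apply separately.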
On a 10BaseT network, two segments are always utilized to connect the communicating machines to their respective hubs. Since these two segments are both link segments because there are no connections other than those to the MAUs in the hub and the host adapter, this leaves up to three mixing segments for use in interconnecting multiple hubs.
A number of exceptions to this basic rule are defined as part of the 10BaseF standards, and these are among the primary advantages of these standards on an Ethernet. On a network composed of five segments with four repeaters, fiber-optic link segments (whether FOIRL, 10BaseFB, or 10BaseFL) can be up to 500 meters long, while fiber-optic mixing segments (10BaseFP) can be no longer than 300 meters. On a network with only four segments and three repeaters, fiber-optic links between repeaters can be up to 1,000 meters long for FOIRL, 10BaseFB, and 10BaseFL segments and 700 meters long for 10BaseFP segments. Links between a repeater and a DTE can be no longer than 400 meters for 10BaseFL segments and 300 meters for 10BaseFP segments.
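These limits lend themselves to a simple lookup table. The dictionary keys and function below are hypothetical conveniences for summarizing the figures just given, not identifiers from the standard:

```python
# Fiber segment length limits from the 10BaseF exceptions to the 5-4-3
# rule. Keys are (segment type, network shape); lengths are in meters.
MAX_FIBER_M = {
    ("FOIRL", "5seg"): 500, ("10BaseFB", "5seg"): 500, ("10BaseFL", "5seg"): 500,
    ("10BaseFP", "5seg"): 300,
    ("FOIRL", "4seg"): 1000, ("10BaseFB", "4seg"): 1000, ("10BaseFL", "4seg"): 1000,
    ("10BaseFP", "4seg"): 700,
    ("10BaseFL", "to_dte"): 400, ("10BaseFP", "to_dte"): 300,
}

def fiber_run_ok(seg_type, shape, length_m):
    """Check a proposed fiber run against the tabulated maximum."""
    return length_m <= MAX_FIBER_M[(seg_type, shape)]

print(fiber_run_ok("10BaseFL", "4seg", 900))  # True:  900 m <= 1,000 m
print(fiber_run_ok("10BaseFP", "5seg", 350))  # False: 350 m > 300 m
```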
Obviously, these specifications provide only the broadest estimation of the actual values that may be found on a particular network. These rules define the maximum allowable limits for a single collision domain, and Ethernet is a network type that functions best when it is not pushed to its limits. This is not to say, however, that exceeding these limitations in any way will cause immediate problems. A segment that is longer than the recommended limit, or one with a few more DTEs than the standard specifies, will probably not cause your network to grind to a screeching halt. It can, however, cause a slight degradation of performance that will only be exacerbated by further expansion.
As applications and data types become larger and more demanding, a need for greater network throughput to the desktop has become apparent, and several new networking standards have risen out of that need. Although network speeds of up to 100 Mbps have been achievable for some time through the use of FDDI fiber-optic links, this technology remains too complex and expensive for use in connections to individual workstations, in most cases. However, FDDI continues to be a popular choice for network backbone links, particularly when large distances must be spanned or remote buildings on a campus network linked together.
Another technology that promises to deliver increased throughput to the desktop is asynchronous transfer mode (ATM). This new communications standard is commonly recognized as being a dominant player in the future of networking, but the absence of ratified standards has made it far too tenuous a technology to draw major commitments from network administrators at this time. The first ATM products have hit the market, but chances of interoperability between manufacturers are slim right now.
The other great drawback of these alternative high-speed networking solutions is that they both require a complete replacement of virtually every network-related component in the system, from host adapter to hub and everything in between. What was needed for the average corporate network was a system that could allow for the use of cabling already installed at the site, when necessary, and provide a gradual upgrade path for the other networking hardware so that the entire network would not have to be rebuilt in order to perform the upgrade.
To satisfy this need, several competing standards have arisen that utilize existing 10BaseT cable networks to provide 100 Mbps to the desktop. Of these competing standards, only the IEEE 802.3u document, also known as the 100BaseT standard, can truly be called an Ethernet network. Other standards, such as the 100VG (voice grade) AnyLAN network being promulgated by Hewlett Packard and other vendors may run over the same cable types, but they use different methods of media access control, making them incompatible with existing Ethernet networks. 100BaseT utilizes the exact same frame format and CSMA/CD media access and collision detection techniques as existing 802.3 networks. This means that all existing protocol analysis and network management tools, as well as the investments made in staff training on Ethernet networks, can still be used. In addition, backwards-compatibility with standard 10BaseT traffic and a feature providing auto-negotiation of transmission speed allow for the gradual introduction of 100BaseT hardware onto an existing 10BaseT network.
Ethernet host adapters that support both 10 Mbps and 100 Mbps speeds are becoming commonplace on the market. This market is a highly competitive one, and the additional circuitry required for the adapters is minimal, causing prices to drop quickly as this technology rapidly gains in popularity. Computers with such adapters installed can be operated at standard 10 Mbps speed until a hub supporting the new standard is installed. The auto-negotiation feature will then cause the workstation to shift to the higher transmission rate. In this way, workstations can be shifted to 100 Mbps as the users' needs dictate. This is a rare instance when the network can be made to conform to the user, instead of the other way around.
Indeed, virtually every part of the 100BaseT standard is designed around compatibility issues with existing hardware. Three different cabling standards are provided to accommodate existing networks, and even new installations can benefit from the fact that the fiber-optic wiring standard, for example, is adopted wholesale from the document specifying the wiring guidelines for fiber-optic cable used in FDDI networks. This prevents cable installers from having to learn new guidelines to perform an installation and also keeps prices down. This can be an important factor when you are contracting to have wire pulled for a new site.
Remember that people may have been pulling twisted-pair cable for business office phone systems for decades but still know little or nothing about the requirements for a data-grade installation. Be sure that your contractors are familiar with the standards for the type of cabling that you choose to install and that specific details about the way in which the installation is to be performed are included in the contract. This may include specifications regarding proximity to other service connections, signal crossover, and use of other wire pairs within the same cable, as well as the type and grade of materials employed.
Another factor of a cabling installation that may be of prime importance is when the work is actually done. Unless you are installing a network into brand-new space that is not yet occupied, you will be faced with the dilemma of whether or not you should attempt to have the installation performed while business is being conducted. Standards such as 100BaseT have made it possible for network upgrades to be performed without interruption of business. The question of whether to pay overtime rates in order to have cabling installed at night or on weekends or whether to have the contractors attempt to work around your employees is one that must be individually made for every type of business. A good cabling contractor, though, should be able to work nights and still leave an unfinished job site in a state that is suitable for corporate business each day. This is often the mark of a true professional.
100BaseT Cabling Standards. The three cabling standards provided in the 100BaseT specification are designed to accommodate virtually all of the extant cabling installed for use in 10BaseT networks. Obviously, the primary goal is to allow 100BaseT speeds to be introduced onto an existing network without the need for pulling all new cable. Table 7.2 lists the three standards and the type of cabling called for by each.
Standard  | Cable Type                                  | Segment Length
----------|---------------------------------------------|---------------
100BaseTX | Category 5 (two pairs)                      | 100 meters
100BaseT4 | Category 3, 4, or 5 (four pairs)            | 100 meters
100BaseFX | 62.5 micrometer multimode fiber (2 strands) | 400 meters
All of the standards listed above utilize a similar interface between the actual network medium and the Ethernet port provided by the NIC in the DTE. A medium dependent interface (MDI), the same as that used for a 10BaseT network, connects to the network cable and is linked to the Ethernet adapter by a physical layer device (PHY) and a media independent interface (MII). These two components may take several forms.
Many 100BaseT host adapter cards are now available that integrate all of these components as circuitry on the expansion card, allowing a standard RJ-45 connection from the network medium to the adapter itself. Other realizations of the technology may take the form of a daughter card that provides switchable 10/100 Mbps capability to an existing 10BaseT adapter. This connects to the network medium through an RJ-45 jack and plugs directly into the Ethernet adapter in the host machine. The third possible configuration is through the use of an external physical layer device, much like the separate MAUs or transceivers used by thicknet systems. The MII of this device then connects to the Ethernet adapter using a short (no more than 0.5 meter) 40-pin cable.
In this way, a number of options are provided to accommodate the networking equipment already installed. Any one of these arrangements can be attached to any one of the designated cable types, providing enough flexibility to allow 100BaseT to be used as a high-speed networking solution. As we examine the three cabling standards in the following sections, notice the way in which they encompass virtually every twisted-pair cabling installation in place today, providing almost universal upgradeability.
100BaseTX. Generally speaking, UTP cabling that conforms to the EIA/TIA Category 5 specification is recommended for use by data transmission systems running at high speeds. The 100BaseTX standard is provided for use by installations that have already had the foresight to install Category 5 cable. Using two pairs of wires, the pinouts for a 100BaseTX connection are identical to those of a standard 10BaseT network.
Although the cabling standard for 100BaseTX is based almost entirely on the ANSI TP-PMD wiring standard, the pinouts from the ANSI standard have been changed to allow 100BaseTX segments to be connected directly to existing Category 5 networks without modification. This ANSI standard also allows for the use of 150-ohm shielded twisted-pair (STP) cable, such as that used for token-ring networks. Thus, network types other than Ethernet can also be adapted to the 100BaseT standard, although without the interoperability and auto-negotiation provided to existing 10BaseT Ethernets. Cable of this type, using 9-pin D connectors, is wired according to the ANSI TP-PMD specifications.
As with 10BaseT, the maximum segment length called for by the 100BaseTX standard is 100 meters but for a different reason. Segment length on a 10BaseT network is determined by loss of signal strength as a pulse travels over the network medium. Although 100 meters is used as a rule of thumb, a Category 5 10BaseT installation can often include segments of up to 150 meters, as long as the signal strength is maintained. Cable testers of various types can be used to determine whether the installed network maintains the signal strength necessary to extend the segment beyond 100 meters.
For a 100BaseTX segment, however, the 100-meter limitation is imposed to make sure that the round trip timing specifications of the standard are followed. Thus, it is not the strength of the signal, but the amount of time that it takes for the signal to be propagated over the segment that determines the maximum segment length. In other words, 100 meters is a strict guideline that should not be exceeded, even to the point at which the maximum 0.5 meter length of an MII cable (at each end) must be subtracted from the overall segment length.
As with 10BaseT networks, 100BaseTX segments must provide signal crossover at some location on the network. The connections for the transmit pair of wires at one end of the segment must be attached to the receive connections at the other end so that proper bi-directional communications can be provided. This crossover can be provided within the hub (in which case a port must be marked with an "X") or within the cable itself.
100BaseT4. In order not to alienate the administrators of the large number of installed 10BaseT networks that utilize voice grade Category 3 cabling, a cabling standard was provided to accommodate 100BaseT on networks of this type. To compensate for the decreased signal strength provided by the lesser quality cable, however, the standard requires the use of four wire pairs, instead of the two used by both 10BaseT and 100BaseTX.
Of the four pairs, the transmit (TX) and receive (RX) wires utilize the same pinouts as 100BaseTX (and 10BaseT). The two additional pairs are configured for use as bi-directional connections, labeled BI_D3 and BI_D4, using the remaining four connectors in the standard RJ-45 jack. Signal crossover for the transmit and receive pairs is identical to that of a 100BaseTX segment, but the two bi-directional pairs must be crossed over as well, with the D3 pair connected to the D4 pair and vice versa. Again, this crossover can be provided by the cable itself or within the hub.
In every other way, a 100BaseT4 segment is configured identically to a 100BaseTX segment. For installations that are limited by the quality of the cable that they are utilizing but which have the extra wire pairs available, creating 100BaseT4 segments can be more economically feasible than pulling new cable for an entire network. Transitional technology such as this allows a network to be gradually upgraded as time and finances permit. As we shall see, different 100BaseT segment types, along with 10BaseT segments, can be easily combined in a single network, allowing additional throughput to be allocated to users as needed.
100BaseFX. The 100BaseFX specification provides for the establishment of fiber-optic link segments that can take advantage of the greater distances and electrical isolation provided by fiber-optic cabling. The medium used is two separate strands of multimode fiber-optic (MMF) cable with an inner core diameter of 62.5 micrometers and an outer cladding diameter of 125 micrometers. Since the crosstalk and signal attenuation problems common to copper cabling are much less of an issue with fiber optic, separate strands of cable are used with no need for twisting, and the crossover connection can be provided by the link connections themselves, rather than inside the hub. A maximum signal loss of 11 dB over the length of the segment is specified by the standard, but the 400-meter maximum segment length is specified, again, by the need for a highly specific maximum round trip signal propagation delay, rather than concern for signal loss.
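Since the standard specifies an 11 dB loss budget, a quick loss-budget check illustrates why timing, rather than attenuation, ends up being the binding constraint on a 100BaseFX segment. The per-kilometer fiber loss and per-connector loss figures below are illustrative assumptions, not values from the standard; a real budget would use the cable and connector manufacturers' specifications:

```python
# Rough sketch of an optical loss-budget check for a 100BaseFX segment.
# Fiber and connector loss figures are assumed for illustration only.

MAX_SEGMENT_LOSS_DB = 11.0          # loss budget from the 100BaseFX standard

def segment_loss_db(length_m, fiber_loss_db_per_km=1.5, connectors=2,
                    loss_per_connector_db=0.75):
    """Estimate total segment loss: fiber attenuation plus connector losses."""
    return (length_m / 1000.0) * fiber_loss_db_per_km \
        + connectors * loss_per_connector_db

loss = segment_loss_db(400)          # a maximum-length 100BaseFX segment
print(round(loss, 2), loss <= MAX_SEGMENT_LOSS_DB)   # 2.1 True
```

Even a full 400-meter segment consumes only a fraction of the 11 dB budget under these assumptions, confirming that the length limit comes from round trip timing, not signal loss.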
Several different connector types may be used for the 100BaseFX MDI, again to accommodate the different legacy networks that may be adapted to this technology. The connector type most highly recommended by the specification is the duplex SC connector, although a standard M type FDDI media interface connector (MIC), or a spring loaded bayonet (ST) connector may be used as well. Since they utilize the exact same signaling scheme, the 100BaseFX and the 100BaseTX specifications are known collectively as the 100BaseX specifications.
100BaseT Network Configuration Guidelines. The 100BaseT specification defines two classes of multiport repeaters or hubs for use with all of the various media types. As with the 10BaseT standard, these devices are defined as concentrators that connect disparate network segments to form a single collision domain, or network. A Class I hub can be used to connect segments of different media types while a Class II hub can only connect segments of the same media type. The standard dictates that the different hub types must be labeled with the appropriate Roman numeral within a circle, for easy identification.
The fundamental 100BaseT rules for connecting segments within a single collision domain are as follows:
- Copper (100BaseTX or 100BaseT4) segments may be no longer than 100 meters each.
- No more than one Class I repeater may be used within a single collision domain.
- No more than two Class II repeaters may be used within a single collision domain, and the cable connecting the two repeaters may be no longer than 5 meters.
Table 7.3 lists the maximum segment lengths allowed according to media type and repeater type.
Repeater Type        | 100BaseTX/T4 | 100BaseFX  | Mixed Fiber/Copper
---------------------|--------------|------------|-------------------
Direct DTE-DTE       | 100 meters   | 400 meters | N/A
1 Class I Repeater   | 200 meters   | 230 meters | 240 meters
1 Class II Repeater  | 200 meters   | 285 meters | 318 meters
2 Class II Repeaters | 205 meters   | 212 meters | 226 meters
As you can see, the copper media types provide for fairly consistent limits throughout the various configurations, but the introduction of fiber-optic cable extends the length limitations. The one exception to this is the 205 meter limit when connecting copper segments with two Class II repeaters. This figure is valid for Category 5 cabling only. Voice grade Category 3 cable is limited to an overall length of 200 meters.
Class I repeaters generally provide greater amounts of delay overhead when translating signals for use with the various media types, so they impose greater segment length limitations than repeaters of the Class II variety. When mixed networks using both copper and fiber segments are defined, the figures provided in the table assume a 100 meter copper segment as contributing towards the total listed. It should also be noted that, for all of the network types, the maximum total one-meter length of any MII cables used (0.5 meters at each end) must be counted towards the total length of the segment.
These quibbles over what seem to be inconsequential variances in segment length should indicate how tightly these estimates are integrated into the 100BaseT specifications. 10Mbps networks do not tax the medium to the degree at which 100 Mbps ones do, and so a certain unofficial "fudge factor" can be assumed to exist on the slower systems. Long experience has determined that many standard Ethernet networks continue to function acceptably, despite physical layer installations that exceed the recommended specifications. 100BaseT is far newer technology, however, and a far narrower margin for variation is provided. It is recommended that these limitations be adhered to quite stringently, at least until experience has determined where variations can safely be made.
As with traditional Ethernet, the 100BaseT standard provides the means by which individual network segment limitations may be calculated mathematically. The primary limiting factor for 100BaseT, however, is the Path Delay Value, which is a measurement of the round trip signal propagation delay of the worst case path--that is, the two stations on the network that are the greatest distance apart, with the greatest number of repeaters between them. Cable delay values for the specific types of media used to form the network, along with the distances spanned, and the number of repeaters, are plugged into a formula, an extra margin is added for additional safety, and a specific value is derived that indicates whether or not the network meets the requirements of the 100BaseT standard. As with 10BaseT, exceeding the recommended values can result in late collisions and packet CRC errors that severely affect the efficiency and reliability of the network.
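The shape of that calculation can be sketched as follows. The delay constants below, expressed in round trip bit times (BT), are representative figures chosen for illustration; the authoritative values come from the 802.3u documents and from vendor data sheets for the specific hardware in use:

```python
# Sketch of a 100BaseT Path Delay Value (PDV) check.
# All delay constants are illustrative, not authoritative.

ROUND_TRIP_BUDGET_BT = 512       # the round trip budget: one slot time
SAFETY_MARGIN_BT = 4             # extra margin added for safety

CABLE_DELAY_BT_PER_M = {         # round trip delay per meter of cable
    "cat5": 1.112,
    "fiber": 1.0,
}
REPEATER_DELAY_BT = {"class1": 140, "class2": 92}
DTE_PAIR_DELAY_BT = 100          # both end stations, combined

def path_delay_value(segments, repeaters):
    """segments: list of (cable_type, length_m); repeaters: list of classes."""
    delay = DTE_PAIR_DELAY_BT + SAFETY_MARGIN_BT
    delay += sum(CABLE_DELAY_BT_PER_M[c] * m for c, m in segments)
    delay += sum(REPEATER_DELAY_BT[r] for r in repeaters)
    return delay

# Worst-case path: two 100-meter Category 5 segments, one Class I repeater.
pdv = path_delay_value([("cat5", 100.0), ("cat5", 100.0)], ["class1"])
print(round(pdv, 1), pdv <= ROUND_TRIP_BUDGET_BT)   # 466.4 True
```

A network whose worst-case path exceeds the round trip budget is exactly the kind that will exhibit the late collisions and CRC errors described above.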
Obviously, the stricter limit on the number of repeaters allowed in a 100BaseT collision domain, as compared to 10BaseT, will require a certain amount of redesign in some networks being retrofitted to the faster system. An existing network that is stretched to the limit of the 5-4-3 10BaseT guideline may have to have its repeaters relocated to conform to the new restrictions, but the increased segment lengths allowed for most of the cabling types should make the task feasible for most existing installations. In any case, it should be clear that migrating to 100BaseT is more than just a matter of replacing NICs and hubs.
Ethernet Switching. Note also that the limitations detailed earlier apply only to segments within a single collision domain. Hubs that provide packet-switching services between the segments are becoming increasingly popular, and have come to make up a large portion of the market. A packet-switching hub essentially provides the same services as a repeater, but at a higher level. Packets received on one segment are regenerated for transmission via another. All of the OSI model from the network layer up is shared by the two segments, but the data link and the physical layer are isolated, establishing separate collision domains for the two and providing what is essentially a dedicated network for each port on the device.
In this way, a centrally located switch can be used to provide links to multiport repeaters at remote locations throughout the enterprise. These repeaters are then linked to individual workstations in the immediate area. The network, in its most strictly defined sense, extends from the switch port to the DTE, with only one intervening repeater. More demanding installations may even go so far as to use switched ports for the individual desktop connections themselves, thus providing the greatest possible amount of throughput to each workstation. As you may expect, a packet-switching hub will be more expensive than a simpler repeating device, but it may be the most economical means of adapting an existing network to today's requirements. Just the time and expense saved by not having to replace dozens of LAN adapters in workstations all over the network may be enough to lure administrators towards this technology.
Bear in mind that this switching technique is by no means limited to networks using 100BaseT. Switches are becoming a popular solution for 10BaseT and even token-ring networks. In fact, adding switches to a 10BaseT network may provide enough additional performance to obviate the immediate need for a large-scale network upgrade program.
Full Duplex Ethernet. Another technique that is being used to increase the efficiency of both 10BaseT and 100BaseT links is the establishment of full duplex Ethernet connections. Ethernet networks normally communicate using a half duplex protocol. This means that only one station at a time can be transmitting over the network link. Like a two-way radio, a single station on the network may transmit and then must switch into a listen mode to receive a response. Managing this communications traffic without the loss of any data is the basic function of a media access control mechanism like CSMA/CD.
Full duplex Ethernet, on the other hand, functions more in the way that a telephone does, allowing both ends of a link segment to transmit and receive simultaneously, theoretically doubling the overall throughput of the link. For this reason, the entire Ethernet media access control system can be dispensed with when a full duplex link is established. In order to establish such a link, only two stations can be present in the collision domain. Like a party line telephone system, chaos would ensue if more than two parties were all speaking at the same time. Therefore, full duplex Ethernet is usually used to connect two packet-switched ports on remote hubs. The elimination of the media access protocol also removes the need for any concerns about signal propagation across the link, so very long distances may be spanned by 10BaseF or 100BaseFX links. The only limitation would be imposed by signal loss due to attenuation which, in fiber-optic connections, is minimal. Links of this type can span up to two kilometers or more and form an excellent means of connecting remote buildings on a campus network.
It must be noted that the full duplex Ethernet has not been standardized by the IEEE or by any other standards body. Individual hardware vendors are responsible for creating and marketing the concept. There may, therefore, be significant variations among different vendors in the rules for establishing such links. Compatibility of hardware made by different manufacturers is also not guaranteed. In addition, you will find that a full duplex link will generally not deliver the doubled throughput that theory dictates should result. This is because some of the higher layer network protocols in the OSI model also rely on what are essentially half duplex communication techniques. They cannot, therefore, make full use of the capabilities furnished by the data link layer, and the overall increase in throughput may be limited to somewhere between 25% and 50% over that of half duplex Ethernet. The cost of implementing full duplex into existing adapter and hub designs, however, is minimal, adding no more than 5% to the cost of the hardware. Therefore, even the moderate gain in throughput provided may be well worth the cost involved.
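Using the figures cited above, the arithmetic for a 100 Mbps full duplex link works out as follows; the 25% and 50% bounds are the rough, reported range, not guaranteed results:

```python
# Theoretical versus realized throughput for a full duplex 100 Mbps link,
# using the 25-50% gain over half duplex cited in the text.

half_duplex_mbps = 100
theoretical_full_duplex = 2 * half_duplex_mbps   # both directions at once
realized_low = half_duplex_mbps * 1.25           # lower end of reported gain
realized_high = half_duplex_mbps * 1.50          # upper end of reported gain

print(theoretical_full_duplex, realized_low, realized_high)  # 200 125.0 150.0
```

Even at the low end, a 25 Mbps gain for roughly 5% additional hardware cost is an attractive trade.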
Auto-Negotiation. As on 10BaseT networks, 100BaseT utilizes a link pulse to continually test the efficacy of each network connection, but the fast link pulse (FLP) signals generated by 100BaseT adapters are utilized for another function as well. Unlike the normal link pulse (NLP) signals generated by 10BaseT, which simply signal that a proper connection exists, FLP signals are used by 100BaseT stations to advertise their communications abilities.
At the very least, an indication of the greatest possible communications speed is furnished by the FLP, but additional information may be provided as well, such as the ability of the station to establish a full duplex Ethernet connection and other data useful for network management. This information can be used by the two stations at either end of a link segment to auto-negotiate the fastest possible link supported by both stations. Although auto-negotiation is an optional feature of the 100BaseT standard, it is a popular one, considering the large number of hubs and adapters coming to market that can support both 10 and 100 Mbps speeds. Several different approaches to the inclusion of additional functionality into the FLP exist, however. One that has received a good deal of attention is called NWay. Developed by National Semiconductor, NWay must reside in both the adapter and the hub for the full auto-negotiation capabilities to be utilized. Many vendors are considering it for inclusion in their products, but until a standard for this technology is realized, either by an official governing body or simply by vox populi, these must be considered to be proprietary techniques and evaluated as such.
Since auto-negotiation is optional, there is more control provided over the generation of the link pulse signals than with 10BaseT. Settings are usually made available at each device to allow the pulses to be generated automatically when the device is powered up, or it may be implemented manually. Fast link pulses are designed to coexist with the normal link pulses so that negotiation may take place with existing 10BaseT hardware as well. A traditional 10BaseT hub with no knowledge of auto-negotiation, when connected to an Ethernet adapter capable of operation at both 10 and 100 Mbps, will cause a link to be established at the slower speed and normal 10BaseT operations to continue without incompatibilities. This allows network managers to implement an upgrade program in any manner they choose. At this point in time, anyone with intentions of upgrading to 100BaseT should begin purchasing dual-speed Ethernet adapters for any new systems being installed. This way the replacement of a 10BaseT hub with a 100BaseT model can be performed at any future time desired, and the appropriately equipped workstations will shift to the higher-speed connection as soon as the new equipment is detected. As with 10BaseT systems, the pulses are only generated during network idle periods and have no effect on overall network traffic.
The auto-negotiation feature, when it is enabled, determines the highest common set of capabilities provided by both stations on a link segment, according to the following list of priorities, and then creates a connection using the highest priority protocol of which both sides are capable:
1. 100BaseTX full duplex
2. 100BaseT4
3. 100BaseTX
4. 10BaseT full duplex
5. 10BaseT
Notice that although 100BaseT4 and 100BaseTX are both capable of the same transmission speed, 100BaseT4 is given the higher priority. This is because it is capable of supporting a wider array of media types than 100BaseTX. A segment with hardware at both ends that supports both transmission types will default to 100BaseT4, rather than 100BaseTX, unless explicitly instructed otherwise.
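The resolution logic itself is simple: intersect the two advertised ability sets and take the highest-priority survivor. The following is a minimal sketch, assuming the priority ordering discussed above; the mode names are descriptive labels, not identifiers from the standard:

```python
# Minimal sketch of FLP-style auto-negotiation: each station advertises a
# set of abilities, and the link comes up at the highest-priority mode
# common to both. Mode names are illustrative labels.

PRIORITY = [                     # highest priority first
    "100BaseTX-full-duplex",
    "100BaseT4",
    "100BaseTX",
    "10BaseT-full-duplex",
    "10BaseT",
]

def negotiate(local_abilities, remote_abilities):
    """Return the highest-priority mode both stations support, else None."""
    common = set(local_abilities) & set(remote_abilities)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None                  # no common mode; no link established

# A dual-speed adapter meeting an old 10BaseT hub falls back to 10 Mbps.
print(negotiate({"100BaseTX", "10BaseT"}, {"10BaseT"}))   # 10BaseT
# Hardware supporting both 100 Mbps modes defaults to 100BaseT4.
print(negotiate({"100BaseT4", "100BaseTX"},
                {"100BaseT4", "100BaseTX"}))              # 100BaseT4
```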
When auto-negotiating hubs are used that are of the multiport repeater type, it must be noted, however, that since only a single signal is generated for use on all of the device's ports, the highest common speed of all of the devices connected to the hub will be used. In other words, a hub with ports connected to eleven DTEs with 100 Mbps network adapters and one DTE with a standard 10BaseT adapter will run all of the stations at 10 Mbps. A packet-switching hub is, of course, not subject to this limitation. Since each of its ports amounts to what is essentially a separate network, individual speed negotiations will take place for every port.
100VG AnyLAN. The primary source of competition to the 802.3u Fast Ethernet standard in the battle of the 100 Mbps networking specifications is known as 100 Voice Grade AnyLAN, as defined in the IEEE 802.12 standard. Championed by Hewlett Packard and AT&T, as well as several other companies, it is, as the name implies, a networking standard that, like 100BaseT, provides 100 Mbps throughput but is specifically designed to take advantage of the existing voice grade Category 3 wiring that is already installed at so many network sites. Like 100BaseT4, the lower grade of cabling requires the use of four wire pairs instead of two, but beyond this, 100VG AnyLAN is radically different from 100BaseT.
First of all, 100VG AnyLAN cannot, by any means, be called an Ethernet network. In fact, it is a new protocol that is unique to the networking world and this, if anything, is its greatest drawback. All of the investments in time and money made on Ethernet or token-ring training along with management and troubleshooting tools for these environments are lost when you convert to 100VG AnyLAN. In addition, the standard is based on the assumption that the greatest single investment made in a network is in the cable installation. The basic philosophy of 100VG AnyLAN is to use a network's existing cable plant, including the existing RJ-45 jacks and cross connectors, but all other components, including hubs and adapters, must be replaced.
The cutthroat competition over these competing 100 Mbps standards is the result of users' clamor for a convenient and economical upgrade strategy for their networks. After all, FDDI and CDDI networks providing the same throughput have been available for years, but the expense and labor involved in converting to a network that runs such technology to the desktop has remained the prohibitive factor preventing its widespread acceptance. Both 100BaseT and 100VG AnyLAN provide more reasonable upgrade capabilities than FDDI and CDDI, providing the means for a gradual conversion spread out over as long a period of time as desired. Individual workstations can be upgraded to 100VG AnyLAN as the user's need arises because, as with 100BaseT, there are combination adapters available that provide plugs for both 10BaseT and AnyLAN.
In addition, the same Ethernet packet format as 10BaseT is used by 100VG AnyLAN, allowing hubs for the two network types to coexist on the same network. Although the Ethernet frame type is being supported first, there are also plans for 100VG AnyLAN hubs supporting the 802.5 frame type used by token-ring networks to be made available, as well as units supporting both packet types. The signaling scheme and the media access control protocol used by 100VG AnyLAN, however, are different from those used by any other network.
As a general rule, the overall similarity of the 100BaseT hardware to its 10BaseT counterparts will allow compatible equipment to be developed and produced more quickly and less expensively than that for 100VG AnyLAN. A combination 10BaseT/100VG AnyLAN NIC, for example, actually amounts to the components of two separate adapters on one card, while a 10/100BaseT NIC can utilize some of the same components for both functions to keep costs down. The same holds true for 100BaseT hubs and bridges, which are little more complex than 10BaseT models with the same capabilities. Also, a great many more vendors are currently producing 100BaseT hardware than 100VG AnyLAN, and many more systems manufacturers have declared their preference for its use than have advocated the other standard, giving it an immediate price advantage in the marketplace and a superior collection of testimonials.
These are all very young products, however, and it is difficult to predict the direction the pendulum will swing. Some hardware manufacturers are planning to produce equipment for both network types, refusing to take a definitive stand for one over the other. Others are attempting to combine the functionality for both networks in single devices, allowing the administrator to choose one of the two network types depending on the needs of the individual user. I dare say, though, that one of these network types will prove to be the dominant interim solution, as network administrators everywhere lick their chops in anticipation of ATM, which nearly everyone agrees will eventually come to dominate the networking industry, at some point one, two, five, or ten years down the road, depending on whose opinion you believe.
On the other side of the argument, however, is the fact that, despite the unavailability of long-term, real-world performance data, early reports indicate that 100VG AnyLAN generally provides a greater increase in network throughput than 100BaseT does. This is primarily because 100BaseT is subject to the same latency problems and tendency towards diminished performance under high-traffic conditions that normal Ethernet is. The technology that 100VG AnyLAN is based on provides nearly the entire potential throughput of the segment to each transmission.
Obviously, choosing one of the two standards is a complex decision, which must balance the need for maximum throughput against the more solid, competitive, and economical market for the required hardware, and must also factor in the need for staff training to support a new protocol. The following section examines how 100VG AnyLAN achieves this allegedly superior level of performance, providing background information to aid in the decision between the two competing standards.
Quartet Signaling. A 10BaseT network utilizes two wire pairs for its communications: one to transmit and the other to receive, with simultaneous activity on both pairs signaling a collision. The 100BaseT4 standard uses four pairs of wires, with the extra two pairs usable for communications in either direction. 100VG AnyLAN also uses four wire pairs, but it utilizes a technique called quartet signaling that allows it to transmit over all four wire pairs simultaneously. The encoding scheme used, called 5B/6B NRZ, allows the number of bits transmitted per cycle to be two and a half times greater than that of 10BaseT networks. Multiplied by the four pairs of wires used to transmit, this results in a tenfold overall increase in transmission speed, using only a slightly higher frequency than 10BaseT, thus allowing the use of voice grade cabling.
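The arithmetic described above can be checked directly:

```python
# The quartet signaling arithmetic: 5B/6B NRZ carries 2.5 times as many
# bits per cycle as 10BaseT's encoding, over four pairs instead of one.

base_rate_mbps = 10            # 10BaseT, one transmitting pair
bits_per_cycle_factor = 2.5    # 5B/6B NRZ vs. the 10BaseT encoding
pairs = 4                      # all four pairs transmit simultaneously

per_pair_mbps = base_rate_mbps * bits_per_cycle_factor
total_mbps = per_pair_mbps * pairs
print(per_pair_mbps, total_mbps)   # 25.0 100.0
```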
Demand Priority. The basic reason why all four wire pairs can be used to transmit simultaneously is that 100VG AnyLAN eliminates the need for a collision detection mechanism such as that found on Ethernet networks. The media access control method utilized by 100VG AnyLAN is called demand priority, and while it is radically different from the CSMA/CD method used by Ethernet, it makes a good deal of sense for the environment that it's used in.
As we have seen, the 10BaseT network standard is an adaptation of a protocol that was originally designed for a bus topology composed primarily of mixing segments, on which multiple nodes must contend for the same bandwidth. Networks wired in a star topology, however, are composed primarily of link segments. While the 802.3 standard was ingeniously adapted to the star configuration by designating the link segments for connection of the hub to the node and the mixing segments for the interconnection of the hubs, the primary sources of possible media contention are the network workstations. When the workstations are connected to a hub using link segments, negotiation for media access need only be conducted between two different entities (while on a mixing segment, up to 30 entities can be contending for the same bandwidth). 100VG AnyLAN takes advantage of the star topology by having intelligence within the hub control access to the network medium.
Demand priority calls for individual network nodes to request permission from the hub to transmit a packet. If the network is not being used, the hub permits the transmission, receives the packet, and directs it to the proper outgoing destination port. Unlike Ethernet, where every packet is seen by all of the nodes within a given collision domain, only the transmitting and receiving stations, along with the intervening hubs, ever see a particular AnyLAN packet, thus providing an added measure of security unavailable from traditional Ethernet, token-ring, or FDDI networks.
Since arbitration is provided by the hub, priorities for certain data types can also be established, allowing particular applications to be allotted an uninterrupted flow of bandwidth, if desired. For real-time multimedia applications such as videoconferencing, where careful flow control is required, this can be a crucial factor to good performance. As with token ring, there are no collisions on a 100VG AnyLAN network that is running properly. There are no delays, therefore, caused by packet retransmissions and no cause for network performance to decrease as traffic increases.
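The hub-based arbitration just described can be sketched in a few lines. This is a simplified illustration of the demand-priority idea, with a two-level priority scheme and hypothetical names of my own choosing; it is not the actual 100VG AnyLAN protocol.

```python
# A minimal sketch of demand-priority arbitration: the hub collects transmit
# requests from its ports and grants all pending high-priority requests
# (e.g., real-time video) before any normal-priority ones. Simplified model.

from collections import deque

def grant_order(requests):
    """requests: list of (port, priority) tuples, priority 'high' or 'normal'.
    Returns the order in which the hub grants permission to transmit."""
    high = deque(port for port, pri in requests if pri == "high")
    normal = deque(port for port, pri in requests if pri == "normal")
    order = []
    while high or normal:
        queue = high if high else normal   # high-priority requests served first
        order.append(queue.popleft())
    return order

print(grant_order([(1, "normal"), (2, "high"), (3, "normal"), (4, "high")]))
# high-priority ports 2 and 4 are granted before ports 1 and 3
```

Because the hub sees every request, no station ever transmits into another station's traffic, which is why collisions simply cannot occur on a properly functioning 100VG AnyLAN network.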
Integration with 10BaseT. 100VG AnyLAN can also be integrated into a 10BaseT segment through the use of bridges that buffer the higher speed transmissions, feeding them to the slower medium at the proper rate. This technique can also be used to attach 100VG networks to an existing backbone. No packet translation of any kind is necessary, which avoids any delays that would normally be incurred by this process and allows the necessary bridging circuitry to be easily incorporated within the hub if desired.
Overall, 100VG AnyLAN requires a higher degree of commitment from the network administrator than 100BaseT does. The hardware is much less reliant on tried-and-true technology and the innovative nature of the standard implies a greater risk as the marketplace determines whether the concept will continue to be a viable one. For the many administrators who are considering these 100 Mbps technologies as interim solutions for their networks, it would be understandable for them to be reluctant to expend the time, effort, and expense to adapt to a new network type that would only be phased out within a few years. Current indications point to 100BaseT as being far more widely accepted by the industry than 100VG AnyLAN, but there are major industry players advocating both systems, and both or neither could come to dominate the high-speed networking world over the next few years.
The Workstation Bus. It should be noted that, for any network offering 100 Mbps performance levels, the ISA bus will generally be insufficient to support the needs of the network interface. 100BaseT network adapters are currently available only for EISA and PCI buses, and testing of various cards for both bus types made by the same manufacturers yields very little performance difference between the two. It should therefore not be necessary to upgrade from an EISA to a PCI machine simply to take full advantage of 100BaseT.
Adapters for the VESA Local Bus are not being produced, primarily because vendors have achieved the best performance levels from adapter designs using the bus mastering capabilities supported by the EISA and PCI buses, which prevent network data from having to be moved on and off of the card to be manipulated by the system processor. Avoiding any additional burden on the processor also helps to increase overall system performance. High-performance 100BaseT cards may also offer SRAM FIFO (high-speed static RAM, first in, first out) caching, coprocessors, and dedicated chips providing increased performance for the adapter's media access control functions.
While ISA cards for 100VG AnyLAN do exist, their generally poor performance levels seem to indicate that the system bus is probably the location of a greater bottleneck than any caused by the network. Obviously, the benefits and drawbacks of all of the available buses are as applicable to LAN adapters as they are to SCSI or video cards. Consult chapter 5, "The Server Platform," for in-depth coverage of the attributes of the different bus types.
Of course, as with any network-related upgrade, the question arises as to the real need for 100 Mbps to the desktop. Depending on the other hardware involved, the operating systems used by the servers and workstations, and the size and quantity of the files transmitted over the net, the overall increase in productivity provided by this type of network upgrade may prove to be negligible. This technology excels primarily in the sustained transfer of large files over the network medium. For applications that generate large amounts of network traffic, such as scientific, engineering, prepress, and software development environments, this may be a boon, and for networks that have been continually expanded in size and traffic levels without an increase in throughput, a significant performance bottleneck may be removed.
Most general-use business networks, however, carry relatively light traffic, and the thoughtful LAN administrator faced with a slow network must be careful to determine exactly what is causing the slowdown before committing to a costly upgrade program. From a practical standpoint, Ethernet traffic problems can probably be more efficiently and economically addressed with the addition of Ethernet switches and a proper evaluation and reorganization of the network plan. A wholesale replacement of all hubs and adapters is probably not necessary just to have a properly functioning business network environment.
As to the new multimedia data types that are threatening to overwhelm traditional networks, if an administrator were to honestly ask whether her users really had a productive need for full motion video to the desktop, the answer would probably be no. Just because a new technology becomes available doesn't mean that we should all go out and search for some way to put it to use. In fact, even full motion video can be adequately delivered to the desktop over the network, when 10 Mbps of dedicated bandwidth is supplied. When the networking industry marketing machine goes into a feeding frenzy over a new technology like Fast Ethernet, it can be difficult to find a clear path through the carnage in order to see whether the new product actually makes things better than they were before. This is a question that every LAN administrator must answer individually for every network that she is responsible for.
Barring the new networking technologies now gaining widespread attention in the marketplace, the traditional alternative to Ethernet has been the token-ring network. Originally developed by IBM, which still remains its primary champion, token-ring networks can deliver data at 16 Mbps using a media access control mechanism that is radically different from the CSMA/CD scheme used by Ethernet.
The IEEE 802.5 standard defines a token-ring network. The standard was deliberately developed to be an alternative to other 802.x media access control specifications, all of which utilize the same logical link control protocol defined in the 802.2 standard. Unlike the bus and star topologies utilized by Ethernet networks, token ring, as the name implies, organizes its connections in a logical ring topology so that packets can be passed from node to node in an endless rotation.
It is called a logical ring topology because the network is actually wired according to the same sort of star arrangement as a 10BaseT network. The ring exists only within the hubs to which all of the nodes are attached (see fig. 7.10). Usually known as multistation access units, MSAUs (sometimes improperly called MAUs), or simply wiring centers in the 802.5 document, token-ring concentrators provide more functionality than the multiport repeaters used for Ethernets. A token-ring MSAU monitors the existence of each node attached to its ports. A packet originating at any station is passed to the MSAU, which passes it in turn to the next station in the ring. That station returns it to the MSAU, which continues passing the packet around the ring until each attached node has received it. The packet is then removed from the ring by the node where it originated.
Figure 7.10
This is a basic token-ring network, showing both the physical star topology and the
logical ring topology.
For this system to work, the MSAU must be constantly aware of the operation of each attached node. Should a packet be sent to a non-functioning workstation, it will not be returned to the MSAU and cannot be sent along on its way. Therefore, MSAUs continuously monitor the activity of all of the attached workstations. A node that is switched off or malfunctions is immediately removed from the ring by the bypass relays in the MSAU, and no further packets are sent to it until it signals its readiness to continue.
The original token-ring MSAUs developed by IBM actually used a mechanical device to control access to each port. Before attaching the cable that was connected to a workstation, the network administrator had to initialize the port to be used with a keying device that entered that port into the ring. In addition, the actual network medium and the connectors used were all of proprietary IBM design. The cabling was quite thick (a good deal thicker than the 50-ohm coaxial used for thin Ethernet) and the connectors large and unwieldy. Cables were also sold only in prepackaged form and available in a limited assortment of lengths. Since there was no competition at the time, the prices of all of the hardware components were gratuitously inflated. The original token-ring networks were also designed to run at a maximum speed of 4 Mbps.
It should be clear that, if these conditions had not changed considerably, token ring would have gone the way of StarLAN and ARCnet on the slow road to oblivion. However, the modern token-ring network is considerably more advanced than this, and now can provide 16 Mbps of throughput in what is, arguably, a manner that is better suited to high traffic networks than Ethernet. First of all, the preferred network medium for token ring is now type 1 shielded twisted-pair cabling (STP) that is similar to the UTP used in 10BaseT networks, except for additional insulation surrounding the twisted strands, thus providing increased resistance to EMI as well as a higher cost for the medium and its installation. The connectors used may be of the familiar RJ-45 telephone type, but DB-9 connectors (such as are used for the serial ports on PCs) can also be used. At the MSAU, self-shorting IBM data connectors are usually used.
The 802.5 document states that up to 250 stations can operate on the same ring, but it defines no specifics for the type of cable to be used, and real-world performance figures depend highly on the type of cable and the round-trip lengths of the segments extending from the MSAU to the connected node (called the lobe length). Token ring can even be run over conventional UTP cable, although the number of attached nodes and the lobe lengths will be further limited by the lesser capabilities of the medium. The IBM Token Ring specifications allow up to 260 lobes on an STP segment and up to 72 on a UTP segment. The following list describes the cable types, as defined by IBM, which have come into general use in the network industry.
Generally speaking, the maximum allowable lobe length is 200 meters for cable types 1 and 2, 120 meters for type 3, and 45 meters for type 6, while type 5 can be up to 1,000 meters in length. The term lobe length is defined as the round-trip signaling distance between a workstation and an MSAU. A 100-meter cable connection therefore yields a lobe length of 200 meters. No more than three segments (joined by repeaters) can be connected in series, and there can be up to 33 MSAUs on a single network. If these last two statements seem contradictory, it is because when multiple token-ring MSAUs are interconnected to form a single ring, they do not comprise separate segments, as would the use of multiple hubs in an Ethernet network. Also, all of the nodes on a particular network must run at the same speed. Dual-speed token-ring NICs and MSAUs, which run at either 4 or 16 Mbps, are common. However, all of the ports on a single MSAU must run at the same speed and can be connected to another MSAU running at a different speed only by a bridge or a router so that two separate network domains are established.
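The round-trip nature of the lobe length measurement is an easy thing to trip over when planning cable runs, so the limits above can be expressed as a simple check. The figures follow the text; the function and its names are illustrative, not part of any standard.

```python
# The lobe-length limits from the text, as a lookup by IBM cable type.
# Remember that lobe length is the ROUND-TRIP signaling distance: a
# workstation 100 meters from the MSAU occupies a 200-meter lobe.

MAX_LOBE_METERS = {1: 200, 2: 200, 3: 120, 5: 1000, 6: 45}

def lobe_ok(cable_type, cable_run_meters):
    """cable_run_meters is the one-way distance from workstation to MSAU."""
    lobe_length = 2 * cable_run_meters          # round-trip signaling distance
    return lobe_length <= MAX_LOBE_METERS[cable_type]

print(lobe_ok(1, 100))   # a 100-meter run on type 1 cable (200-meter lobe): True
print(lobe_ok(3, 100))   # the same run on type 3 cable (limit 120 meters): False
```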
MSAUs are also considerably more advanced than they were in the early days of token ring, with models available that provide features similar to those of the higher-end Ethernet concentrators. In addition, the current move of the industry towards switching over bridging or repeating also applies to token ring. Complex switches are even available that allow the connection and routing of data over multiple network types from the same device. Token ring, Ethernet, Fast Ethernet, FDDI, and even ATM networks can all be connected to the same device, and a packet entering through any port is routed directly out the port where the destination address is located, after being translated to the proper signaling type for the destination network.
Multiple MSAUs can also be interconnected to form a single ring. In a token-ring environment, a ring is the equivalent of a single collision domain or network in Ethernet parlance. Since collisions are not a normal occurrence on token-ring networks, the term collision domain is not valid, but a ring consists of a group of DTEs interconnected so that the same network segment is shared by all. Every MSAU has special ports, labeled Ring In (RI) and Ring Out (RO), which are designated for connection to other MSAUs, allowing you to create large rings of up to 250 nodes.
As we have seen, the real key to the functionality of any network is how multiple stations can communicate using a shared network medium. Unlike Ethernet, in which each DTE essentially executes an independent instance of the accepted MAC protocol for its own use, the stations on a token ring arbitrate media access cooperatively, through the circulation of a token.
Essentially, media access in a token-ring network is controlled by the passing of a specialized packet, or token, from one node to the next around the ring. Only one token can be present on the ring at any one time, and that token contains a bit in its access control field that designates it as a "busy" or "free" token. Only a node in possession of a free token may transmit a frame.
When a workstation is ready to transmit a packet, it waits until a free token is sent to it by the preceding node in the ring. Once the token has been received, the transmitting node appends its packet to the token, marks the token as busy, and sends it on its way to the next node in the ring. Each node in turn then receives the packet and passes it along, thus functioning in normal repeat mode, the functional equivalent of a unidirectional repeater.
Whether the packet is destined for use by that workstation or not, the packet is passed to the next node. Having traversed the entire ring, the packet then arrives back at the node that originated it. This node then reads the packet and compares it with what it had previously transmitted, as a check for data corruption. After passing this test, the originating workstation then removes the packet from the ring, generates a new free token and sends it to the next node, where the process begins again.
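The basic token rotation described above can be sketched as a toy simulation. This is a deliberately simplified model (one frame per token capture, no priorities, no early token release), and the function names are my own, not anything from the 802.5 standard.

```python
# A toy simulation of basic token passing. One free token circulates; a node
# holding the free token may transmit one frame, which travels the whole ring
# and is stripped by its originator, who then releases a new free token.

def simulate(ring, wants_to_send):
    """ring: station names in ring order; wants_to_send: set of stations
    with a frame queued. Returns the order in which stations transmit."""
    transmitted = []
    pending = set(wants_to_send)
    position = 0
    while pending:
        station = ring[position % len(ring)]
        if station in pending:            # station seizes the free token,
            transmitted.append(station)   # sends its frame around the ring,
            pending.discard(station)      # strips it, and frees the token
        position += 1                     # token passes to the next station
    return transmitted

print(simulate(["A", "B", "C", "D"], {"A", "C", "D"}))  # ['A', 'C', 'D']
```

Notice that transmission order is determined entirely by ring position relative to the token, never by contention, which is the essential difference from CSMA/CD.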
Some token-ring networks also support a feature called early token release (ETR), in which the sending node generates a free token immediately after it finishes transmitting its packet. Thus, the packet is sent without a busy token included, but with a free token following immediately after. The next station in the ring receives the data packet and the token and may then pass on the first data packet, transmit a data packet of its own, and then transmit another free token. In this manner, more than one packet may be traveling around the ring at any one time, but there will still be only one token. This eliminates the waiting periods incurred as tokens and packets are passed from station to station.
Thus, in theory, a collision should never occur on a token-ring network. Packets may be transmitted at the maximum rate allowed by the MAC protocol with no degradation of network performance. This is why many people consider token ring to be a superior type of network for heavy traffic environments. As an Ethernet network becomes busier, a larger number of collisions occur, forcing a greater number of retransmissions, and therefore, delays. On a token-ring network, although there is a greater amount of overhead traffic generated by maintenance functions, the maximum possible delay incurred before a given station can transmit is the period that the node must wait for a free token to be passed to it. The greater the traffic on the network, the longer this delay will be, but there is no additional traffic generated by the retransmission of packets damaged by collision, allowing the network to utilize virtually its entire bandwidth for authentic non-redundant traffic.
Token-ring stations are also capable of utilizing different access priority levels so that specific stations can be configured to be more likely to receive a free token that they can transmit with. The later discussion of the 802.5 frame types in this chapter covers how these priorities are exercised. There are also automatic mechanisms in a token-ring network that provide the means for recognizing and localizing error conditions on the network.
When any station on a ring detects a problem, such as a break in the ring, it begins a process called beaconing that helps isolate the exact location of the problem. Beacon frames are sent out over the network, which define a failure domain. The failure domain consists of the station detecting the failure and its nearest active upstream neighbor (NAUN). If there are any stations located between these two, they must, by definition, be inactive and are designated as the locations of the failure. An auto-reconfiguration process then begins; active stations within the failure domain activate diagnostic routines in the hope of bypassing the offending nodes, allowing communications to continue. Depending on the cause of the problem, the network may ultimately be halted, or it may continue to operate by removing problem stations from the ring.
As a means of monitoring and maintaining the network, one node on the ring acts as an active monitor. This station functions as the instigator for most of the ring control and maintenance procedures conducted by the network. Since all stations are capable of generating a token, for example, there must be one station that generates the first token in order to start the process. This is one of the functions of the active monitor. It also initiates the neighbor notification process, through which each node on the network learns the identity of its nearest active upstream and downstream neighbors; provides timing services for the network; checks for packets circulating continuously around the ring; and performs other maintenance functions.
Any station may become the active monitor through a process called token claiming that is initiated whenever any station (or standby monitor, or SM) on the network fails to detect the existence of a frame or an active monitor (through the receipt of an active monitor present, or AMP MAC frame) within a designated amount of time. Token claiming consists of each SM sending out specialized frames based on address values. The first SM to receive three of its own frames back is designated the active monitor (AM). In this manner, the active monitor constantly checks the network, and the other nodes constantly check the active monitor to ensure that the network access mechanisms are always functioning properly.
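The outcome of the token-claiming contest can be illustrated with a small sketch. In practice, claim frames carrying a higher address suppress lower claimants, so the highest-addressed standby monitor wins; the function below models only that outcome and omits the frame exchange itself. Names and the level of detail are my own simplifications.

```python
# A sketch of the token-claiming contest: each standby monitor floods claim
# frames carrying its own address; a station that sees a claim frame with a
# higher address than its own stops claiming and merely repeats the frame.
# The surviving claimant becomes the active monitor.

def elect_active_monitor(claiming_addresses):
    """Returns the address that wins the token-claiming process."""
    winner = claiming_addresses[0]
    for addr in claiming_addresses[1:]:
        if addr > winner:        # a higher address suppresses lower claimants
            winner = addr
    return winner

print(elect_active_monitor([0x40, 0x10, 0x7F, 0x22]))  # 127 (0x7F) wins
```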
The network management protocol (NMP) defined in the 802.5 standard document specifies many other functions, some of which may be performed by the active monitor or by other stations on the network, which may or may not be wholly dedicated to such a purpose. These include the Ring Parameter Server (RPS), which monitors the addresses of all nodes entering and leaving the ring; the Ring Error Monitor (REM), which tracks the occurrence and frequency of errors occurring on the ring; the LAN Bridge Server (LBS), which monitors the activities of all bridges connected to the network; and the Configuration Report Server (CRS), which gathers performance and configuration information from other nodes on the network. All of the information generated by these functions can be sent to a node that has been specifically designated as the network management node by the running of software designed to compile, track, and analyze all of this data and adjust the network's performance characteristics accordingly.
The 802.5 Frame Format. Unlike Ethernet, which uses one basic frame type for all of its functions, the IEEE Token Ring standard defines three basic frame formats (see fig. 7.11): a data/command frame, a token frame, and an abort sequence frame. The data/command frame is a single frame type that can be used both for the transfer of LLC data to upper level protocols and for MAC information used to implement one of dozens of ring maintenance control procedures. Only the data frame contains information that is destined for use by protocols higher up in the OSI model. All of the other frame configurations are used solely for maintaining and controlling the ring.
Figure 7.11
The fields that comprise the IEEE 802.5 data/command frame.
A token frame, three bytes long, consists of only the Start Delimiter, Access Control, and End Delimiter fields. The abort sequence frame, used to clear the ring when a premature end to the transmission of a packet is detected, consists of only the Start and End Delimiter fields. These fields are defined just as they are in the data/command frame, and both of these frames are used only for control and maintenance of the 802.5 protocol.
The Downside to Token Ring. The primary drawback to a token-ring network is the additional expense incurred by the higher prices for virtually every hardware component required for its construction. Throughout its history, token ring has been dominated by IBM, which has functioned as the trendsetter for the technology far more than standards bodies like the IEEE have. It has usually been IBM that was first to release innovations in token-ring technology, such as the increased 16 Mbps transmission rate, only to have them assimilated into the published standards at a later time. Indeed, the 802.5 document is very brief (less than 100 pages) when compared with the 802.3 standard. There are also fewer vendors, and therefore less competition, in the token-ring hardware market than in that of Ethernet.
Token-ring adapters can cost two or three times more than Ethernet adapters, with similar markups applied to MSAUs and other ancillary hardware. Token ring also offers fewer convenient throughput upgrade paths than Ethernet does. Migration to a 100 Mbps technology will require the wholesale replacement of virtually the entire network, except for the cabling itself. For these reasons, token ring has remained second to Ethernet in popularity, with approximately 10 million nodes installed worldwide, but its proponents are earnest and quite vocal, and its capabilities as an efficient system for business networking are incontestable.
Although it is hardly ever used in new installations these days, the Attached Resource Computer Network (ARCnet) is another networking standard for the physical and data link layers of the OSI model. ARCnet was introduced by the Datapoint Corporation in 1977, and SMC has been its primary vendor since 1983. Running at 2.5 Mbps, ARCnet is the slowest network of those considered in this chapter. This is one of the primary causes of its unpopularity because, otherwise, ARCnet is capable of providing the same basic network services as Ethernet and token ring at far lower costs and with a great deal of physical layer flexibility.
ARCnet can be wired in a bus topology, using RG-62/U coaxial cable and BNC connectors (also known as high impedance ARCnet), or in a star topology, using UTP or IBM Type 1 cabling with RJ-45 or D-shell connectors (also known as low impedance ARCnet). Hybrid networks of mixed bus and star topologies (also known as a tree topology) can also be assembled, consisting of nodes daisy-chained with twisted-pair cable connected to a hub that connects to other hubs using coaxial cable. ARCnet is very forgiving in this respect. As with the other network types, care must be taken to properly terminate all segments, using a 93-ohm resistor pack for coaxial buses and a 105-ohm resistor for twisted pair. (Note that the 93-ohm resistor pack differs from the 50-ohm terminators used by thinnet, although they may be virtually identical in appearance). Even fiber-optic cable can be used with ARCnet.
The connectors used for ARCnet are standard BNC connectors for coaxial cable. Twisted pair can utilize RJ-45 connectors or the standard D-shell connectors used by the serial ports on PCs. Connection boxes called active links are used to connect high-impedance cable segments, and baluns are available for providing an interface between coaxial and twisted-pair cable types.
Three types of ARCnet hubs are available. Active hubs, containing anywhere from 8 to 64 ports, have a power supply and function as a repeater as well as a wiring nexus. Passive hubs, which have only four ports, use no power and function simply as signal splitters. Intelligent hubs are also available, which are capable of monitoring the status of their links.
High-impedance (coaxial) ARCnet must use only active hubs. Segments connecting two stations can span up to 305 meters, while segments connecting hubs can extend 610 meters. Up to eight nodes can be connected in series without an intervening hub, and there must be at least one meter of cable between nodes. Low-impedance ARCnet can use both active and passive hubs. A segment connecting an active hub and a node or two active hubs can span up to 610 meters, while a segment connecting a node or an active hub to a passive hub can be no more than 30 meters. Passive hubs can only be located between active hubs and nodes. Two passive hubs can never be directly connected to each other. High and low impedance network segments can also be mixed on the same network, provided that the limitations for each are observed. Up to 10 nodes can be connected in series when UTP cable is used.
The maximum limitations for any ARCnet network are 255 nodes (active hubs count as nodes) and a total cable length of 6,000 meters. Maximum segment lengths may vary depending on the type of cable used, but no more than 11 dB of signal attenuation over the entire segment at 5 MHz is allowable. Two connected nodes must also have a signal propagation delay of no more than 31 microseconds.
Unlike most network types, the node addresses of ARCnet networks must be manually set (from 1 to 255) on the NICs through the use of jumper switches. Address conflicts are, therefore, a distinct possibility, resolvable only by manual examination of all of the NICs. The adapter with the lowest numerical node address automatically becomes the network's controller, similar in basic function to the active monitor on a token-ring network.
Like token ring, ARCnet uses a token-based media access mechanism. A token is generated by the controller and sent to each station in turn, giving them the opportunity to grab the token and transmit. ARCnet, however, uses a far less efficient signaling scheme to arbitrate the token passing. Once a token is grabbed, a query and an acknowledgment must be exchanged by the sending and receiving stations before the actual data frame can be transmitted. It is not until the transmitted frame is received and acknowledged by the destination that the token can be released by the sender to the next station.
This is another major drawback that has contributed to the virtual disappearance of ARCnet in the business networking world. Its 2.5 Mbps transmission rate, which is slow enough already, is further reduced by the large amount of signaling overhead required for normal communications (three bits of overhead per byte transmitted). Other exchanges of control information between sources and destinations are also required that contain no data and provide additional overhead.
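The effect of that per-byte overhead on usable throughput is easy to quantify. This is a rough line-efficiency calculation based only on the three-bits-per-byte figure cited above; it ignores the query/acknowledgment exchanges, which reduce real throughput further.

```python
# ARCnet line efficiency: each 8-bit data byte is transmitted with three
# extra overhead bits, so 11 line bits carry 8 bits of data. A rough
# illustration only -- per-frame signaling exchanges are not counted.

LINE_RATE_MBPS = 2.5
DATA_BITS = 8
OVERHEAD_BITS = 3

efficiency = DATA_BITS / (DATA_BITS + OVERHEAD_BITS)   # 8/11, about 73%
effective_mbps = LINE_RATE_MBPS * efficiency
print(round(effective_mbps, 2))  # roughly 1.82 Mbps of actual data capacity
```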
There have also been known to be compatibility problems with some upper layer protocols, such as NetWare's IPX, due to the small frame size used by ARCnet. No more than 508 bytes of data can be included in an ARCnet frame, and the standard datagram size used by IPX is 576 bytes. An extra layer of translation, called the fragmentation layer, had to be devised to allow NetWare traffic to run on ARCnet. This extra layer breaks IPX packets into two smaller packets that are capable of being sent within the ARCnet frame's data field, and then reassembles them at the destination, adding still another level of overhead.
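The fragmentation idea can be sketched in a few lines: a datagram larger than the ARCnet frame's data field is split into pieces that fit, then reassembled at the destination. This is only an illustration of the concept; the field layout and mechanics of NetWare's actual fragmentation layer are not shown here.

```python
# A minimal sketch of fragmenting a 576-byte IPX datagram to fit ARCnet's
# 508-byte data field, and reassembling it at the destination. Illustrative
# only -- not the real NetWare fragmentation-layer format.

ARCNET_MAX_DATA = 508

def fragment(datagram):
    """Split a datagram into chunks that fit an ARCnet frame's data field."""
    return [datagram[i:i + ARCNET_MAX_DATA]
            for i in range(0, len(datagram), ARCNET_MAX_DATA)]

def reassemble(fragments):
    return b"".join(fragments)

ipx_packet = bytes(576)                  # a standard 576-byte IPX datagram
pieces = fragment(ipx_packet)
print(len(pieces))                       # 2 -- one full frame plus a 68-byte remainder
assert reassemble(pieces) == ipx_packet  # the destination rebuilds the original
```

Every fragmented datagram thus costs two frames (and two sets of ARCnet signaling exchanges) where one would otherwise do, which is the "still another level of overhead" referred to above.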
For a network that is used for absolutely nothing more than a minimal amount of file and printer sharing, ARCnet may be marginally suitable and would certainly be far less expensive than any of the other major network types. For use in any business that plans on being in operation more than two years down the road, however, ARCnet is a shortsighted solution that will probably disappear from use completely within a few years and is not recommended under any conditions.
Since its introduction in 1986, the Fiber Distributed Data Interface (FDDI) has come to be the accepted standard for high-speed network backbones and connections to high performance workstations. FDDI runs at 100 Mbps, but it remains to be seen how the new high-speed technologies will affect its use for these purposes. Both Fast Ethernet and 100VG AnyLAN offer the same speed with considerably lower upgrade and installation costs, even providing fiber-optic standards for connections over long distances and between buildings. If, once ATM becomes standardized, it proves to be half as popular as it seems that it will be, then FDDI's days could well be numbered.
The FDDI standard was created by the ANSI X3T9.5 committee. The document describes a network laid out in a dual ring topology, using the same token passing media access control mechanism that token ring uses, except that early token release is always used, instead of being optional. The dual ring provides two independent loops with traffic traveling in opposite directions to provide fault tolerance for the network (see fig. 7.12).
Figure 7.12
This is a basic FDDI double-ring network.
Under normal conditions, only one of the two rings actively carries traffic. When a break or other disturbance in the primary ring is detected, relay stations on either side of the break begin to redirect traffic onto the secondary ring. Stations connected to both rings have two transceivers and are designated as dual-attachment (DAS) or Class A stations; single-attachment (SAS) or Class B stations are connected to only one ring, have only one transceiver, and therefore cannot benefit from this fault tolerance.
An FDDI ring can contain up to 1,000 stations, with a cable length of no more than 200 kilometers. The use of Class A stations, however, effectively halves these limitations, as they count for two connections each and the dual rings double the length of the cable. No more than two kilometers of cable can be laid without an intervening station or repeater. Obviously, these so-called limitations provide for a larger and longer network than any of the other protocols considered thus far.
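The effect of Class A stations on those limits can be worked through directly. This is simple arithmetic on the figures given above, expressed as a hypothetical helper function.

```python
# The FDDI ring-size arithmetic from the text: each dual-attachment (Class A)
# station counts as two of the 1,000 allowed connections, and the dual ring
# consumes cable at twice the rate, so an all-Class-A ring tops out at
# 500 stations and 100 kilometers of cable.

MAX_CONNECTIONS = 1000
MAX_CABLE_KM = 200

def das_ring_limits():
    """Effective station and cable limits when every station is Class A."""
    return MAX_CONNECTIONS // 2, MAX_CABLE_KM // 2

print(das_ring_limits())  # (500, 100)
```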
The rings by which an FDDI network is organized may be actual ones, in that the stations are wired directly to one another, or a concentrator may be used, as in a token-ring network, to provide a logical ring over what is physically a star installation. A concentrator provides an easier mechanism for automatically removing a malfunctioning station from the ring but also provides a single point of failure for the entire network. The cable called for by the standard is graded index multimode fiber with a core diameter of 62.5 micrometers. Other types of single mode and multimode cable have been used successfully, however, as well as standard Type 1 STP and Category 5 UTP, although these are limited to a distance of 100 meters or less between connections.
In an FDDI network, the physical layer is divided into two sublayers, the physical medium dependent (PMD) sublayer and the physical (PHY) sublayer. The PMD defines the optical characteristics of the physical layer, including photodetectors and optical power sources, as well as the transceivers, medium interface connectors (MICs), and cabling, as with other network types. The power source must be able to send a signal of 25 microwatts, and the detector must be able to read a signal of as little as 2 microwatts. The MIC, or FDDI connector, was designed by ANSI especially for this standard, and it has come to be used for other fiber-optic media standards as well. It is designed to provide the best possible connection to avoid signal loss and is keyed to prevent incorrect component combinations from being connected together. Other, less expensive, connector types have also been used for some FDDI networks, although their use has not been standardized. Be sure to check on the type of connectors used by all FDDI hardware that you intend to purchase, as interoperability may be a problem with anything other than official FDDI connectors.
The PHY layer functions as the medium-independent intermediary between the PMD layer and the MAC layer above. As the first electronic layer, it is implemented (along with the MAC layer) by the chipset in the FDDI network adapter and is responsible for the encoding and decoding of data into the light pulses transmitted over the medium. The signaling scheme used by FDDI networks is quite different from, and more efficient than, the Manchester and Differential Manchester schemes used by Ethernet and token ring. Called NRZI 4B/5B encoding, this method provides a signaling efficiency of 80%, as opposed to the 50% of the other network types. This means that an Ethernet network pushed from 10 to 100 Mbps would have to utilize a 200 MHz signal, while FDDI needs only 125 MHz to provide the same throughput.
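The encoding described above can be illustrated with a short sketch. This is a simplified, hypothetical model of the signaling, not adapter firmware: each 4-bit nibble maps to one of the sixteen 5-bit data symbols defined by the standard, and the symbol stream is then NRZI-encoded, where a 1 bit toggles the line state and a 0 bit leaves it unchanged.

```python
# Simplified sketch of FDDI's NRZI 4B/5B signaling. The table holds the
# sixteen 5-bit data symbols, chosen so that the line never goes too long
# without a transition.

FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode(data: bytes) -> str:
    """Return the NRZI line states for a byte string (hypothetical helper)."""
    bits = ""
    for byte in data:
        bits += FOUR_B_FIVE_B[byte >> 4] + FOUR_B_FIVE_B[byte & 0x0F]
    level, line = 0, []
    for bit in bits:
        if bit == "1":      # NRZI: a 1 toggles the line state
            level ^= 1
        line.append(str(level))
    return "".join(line)

# Five line bits carry four data bits: 4/5 = 80% signaling efficiency,
# which is why 100 Mbps of data needs only a 125 MHz signal (100 / 0.8).
print(encode(b"\x5a"))
```

Because every data symbol contains at least two 1 bits, the NRZI output is guaranteed enough transitions for the receiver to stay synchronized, which is what lets FDDI dispense with the transition-per-bit overhead of Manchester encoding.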
FDDI also supports a more flexible system of assigning bandwidth according to priorities than token ring does. Available bandwidth can be split into synchronous and asynchronous traffic. Synchronous bandwidth is a section of the 100 Mbps that is designated for use by traffic that requires a continuous data stream, such as real-time voice or video. The remaining bandwidth is devoted to asynchronous traffic, which can be dynamically assigned, according to an eight-level system of priorities administered by the station management (SMT) protocol that is part of the FDDI specification.
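This division of bandwidth can be sketched in a few lines. The model below is an assumption for illustration only, not the SMT algorithm itself: synchronous traffic reserves portions of the 100 Mbps ring, and whatever remains becomes the asynchronous budget shared by the eight priority levels.

```python
# Simplified model of FDDI bandwidth division (illustrative only, not
# the actual SMT allocation protocol): synchronous streams reserve fixed
# shares of the 100 Mbps ring, and the remainder is left for
# asynchronous traffic.

RING_MBPS = 100

def asynchronous_budget(synchronous_reservations):
    """Bandwidth (in Mbps) left for asynchronous traffic once the
    synchronous reservations (a list of Mbps values) are honored."""
    reserved = sum(synchronous_reservations)
    if reserved > RING_MBPS:
        raise ValueError("synchronous reservations exceed ring capacity")
    return RING_MBPS - reserved

# Two real-time video streams reserve 20 Mbps each, leaving 60 Mbps
# to be assigned dynamically among the asynchronous priority levels.
print(asynchronous_budget([20, 20]))
```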
The original FDDI standard supported asynchronous communications and, while it did define a synchronous mode, it did not provide the degree of flow control needed for applications such as real-time video. Thus, the FDDI-II standard was created to define what is officially known as hybrid ring control (HRC) FDDI. The basic difference between the standards was the addition of a mechanism, called a hybrid multiplexer (HMUX), that allows both packet-switched data (from the original MAC layer) and circuit-switched data to be processed by the same PHY layer. The circuit-switched data, which can be defined as a real-time data stream such as voice or video, is provided by an isochronous media access control (IMAC) mechanism called a circuit-switching multiplexer (CMUX).
It is essentially the IMAC and the HMUX that make up the hybrid ring control element of the FDDI-II standard. Other changes made to the document at this time included the addition of alternative fiber media types, including single-mode fiber-optic cabling. The hybrid mode capabilities of FDDI-II are optional; the network can be run in a basic mode that differs little from the original standard.
FDDI utilizes a token passing MAC mechanism that is very similar to that used by standard token-ring networks. Two basic frame types, a token frame and a data/command frame, are defined, with the fields and their functions essentially similar to those defined in the 802.5 standard. FDDI even has a station management (SMT) protocol that is very similar in function to token ring's NMT, providing ring management and frame control to the network.
Fiber-optic cable is very difficult and expensive to install, and while users may be tempted to perform a coaxial or twisted-pair physical layer installation themselves, they should not even consider doing fiber without expert help. These factors are major contributors to the limited but stable market that FDDI has established for itself. It has found its niche in the networking world and fills it admirably, but fast-rising newcomers like Fast Ethernet and 100VG-AnyLAN are a distinct threat to its continued use. When the same speed and segment lengths can be achieved with lower expense, retraining, and maintenance costs, there is no persuasive reason for installing new FDDI backbones. Unless these newer network types fail utterly, which is doubtful, the FDDI standard may lapse from general use entirely before the end of the decade.
Installing network cabling of any type is not something that can be properly learned from a book. While it may be relatively easy and inexpensive to connect a handful of PCs into a workgroup network using prepackaged cables, creating the physical fabric of a large network with results that are both functional and businesslike in appearance is a task better left to professionals. Unfortunately, it can sometimes be difficult to distinguish the professionals from the untrained hacks who simply hang out a consultant's shingle and purport to be networking experts.
The physical layer is often treated separately from other types of network training. CNE certification may include some basic training concerning the different types of cabling and the guidelines for the various network types, but it does not in any way cover such tasks as crimping connectors onto bulk cable, which are among the most crucial parts of a physical layer installation. For 10BaseT installations, companies that install telephone systems certainly have the knowledge to properly pull and connect the cabling, but you should be certain that they are familiar with the special requirements of a data-grade installation.
It should be quite simple to find a contractor who is capable of installing the cable properly. The problem usually lies in finding one who will do it for a good price. In any case, you should require a contractor to furnish a complete diagram of the proposed cable layout, including the locations of all connectors involved, making sure that the distances are within the specifications for the network type. Depending on your estimate of the contractor's expertise, you may or may not have to inspect the work closely to be sure that it's done properly. Since most cabling jobs will be hidden within the walls and ceilings of the site, the time to find problems is while the work is being done and not afterwards.
The physical and data link layers are the fundamental building blocks of a modern LAN. None of the higher level functions will be able to proceed normally if the foundation that they are built on is unstable. An informed and intelligent decision as to the proper network type to use for the needs of a particular organization can set the standard for the way that the network is to be built and the way that it will be run. Technology and purchasing decisions made well at the outset can ensure that a network installed today can later be adapted to whatever needs may arise.
Together with the material covered in Chapter 5, "The Server Platform," and Chapter 6, "The Workstation Platform," this chapter discusses nearly all of the essential hardware needed to construct a basic LAN. Other sections of this book will cover the many different products and services that can be used on the network, but all of them are dependent on this basic infrastructure. If the foundation is not solid, then the tower cannot stand for long; time and effort expended on network fundamentals will never be wasted.
© Copyright, Macmillan Computer Publishing. All rights reserved.