INTRODUCTION TO COMPUTER NETWORKS

1.2.1 Local Area Networks
Local area networks, generally called LANs, are privately-owned networks within a single building or campus of up to a few kilometers in size. They are widely used to connect personal computers and workstations in company offices and factories to share resources (e.g., printers) and exchange information. LANs are distinguished from other kinds of networks by three characteristics: (1) their size, (2) their transmission technology, and (3) their topology.
LANs are restricted in size, which means that the worst-case transmission time is bounded and known in advance.
Knowing this bound makes it possible to use certain kinds of designs that would not otherwise be
possible. It also simplifies network management.
LANs may use a transmission technology consisting of a cable to which all the machines are attached, like the
telephone company party lines once used in rural areas. Traditional LANs run at speeds of 10 Mbps to 100
Mbps, have low delay (microseconds or nanoseconds), and make very few errors. Newer LANs operate at up to
10 Gbps. In this book, we will adhere to tradition and measure line speeds in megabits/sec (1 Mbps is 1,000,000
bits/sec) and gigabits/sec (1 Gbps is 1,000,000,000 bits/sec).
Various topologies are possible for broadcast LANs. Figure 1-7 shows two of them. In a bus (i.e., a linear cable)
network, at any instant at most one machine is the master and is allowed to transmit. All other machines are
required to refrain from sending. An arbitration mechanism is needed to resolve conflicts when two or more
machines want to transmit simultaneously. The arbitration mechanism may be centralized or distributed. IEEE
802.3, popularly called Ethernet, for example, is a bus-based broadcast network with decentralized control,
usually operating at 10 Mbps to 10 Gbps. Computers on an Ethernet can transmit whenever they want to; if two
or more packets collide, each computer just waits a random time and tries again later.
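As a concrete picture of this retry behavior, the sketch below (in Python) shows a randomized backoff loop. It is only an illustration of the idea, not the exact IEEE 802.3 binary exponential backoff, and the transmit and collided callables are assumed to be supplied by the caller.

    import random
    import time

    def send_with_backoff(transmit, collided, frame, max_attempts=16):
        """Sketch of collision handling: transmit, and on a collision wait a
        random number of slot times before trying again."""
        for attempt in range(max_attempts):
            transmit(frame)                  # caller-supplied: put the frame on the cable
            if not collided():               # caller-supplied: did another station transmit too?
                return True                  # the frame got through
            # Wait a random delay that grows with the attempt number.
            slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
            time.sleep(slots * 51.2e-6)      # 51.2 microseconds is the classic 10-Mbps slot time
        return False                         # give up after too many collisions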
Figure 1-7. Two broadcast networks. (a) Bus. (b) Ring.
A second type of broadcast system is the ring. In a ring, each bit propagates around on its own, not waiting for
the rest of the packet to which it belongs. Typically, each bit circumnavigates the entire ring in the time it takes to
transmit a few bits, often before the complete packet has even been transmitted. As with all other broadcast
systems, some rule is needed for arbitrating simultaneous accesses to the ring. Various methods, such as
having the machines take turns, are in use. IEEE 802.5 (the IBM token ring) is a ring-based LAN operating at 4
and 16 Mbps. FDDI is another example of a ring network.
Broadcast networks can be further divided into static and dynamic, depending on how the channel is allocated. A
typical static allocation would be to divide time into discrete intervals and use a round-robin algorithm, allowing
each machine to broadcast only when its time slot comes up. Static allocation wastes channel capacity when a
machine has nothing to say during its allocated slot, so most systems attempt to allocate the channel
dynamically (i.e., on demand).
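The waste inherent in static allocation is easy to see in code. The following Python sketch hands out fixed round-robin slots; the machines list and send function are assumptions made for the illustration.

    def static_round_robin(machines, send, n_rounds):
        """Static channel allocation: every machine owns a fixed slot in turn.
        Each machine is assumed to have a .queue of pending packets."""
        for _ in range(n_rounds):
            for m in machines:              # each machine gets its slot in turn...
                if m.queue:
                    send(m.queue.pop(0))    # ...and uses it if it has traffic
                # else: the slot goes unused -- the waste that motivates dynamic allocation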
Dynamic allocation methods for a common channel are either centralized or decentralized. In the centralized
channel allocation method, there is a single entity, for example, a bus arbitration unit, which determines who
goes next. It might do this by accepting requests and making a decision according to some internal algorithm. In
the decentralized channel allocation method, there is no central entity; each machine must decide for itself
whether to transmit. You might think that this always leads to chaos, but it does not. Later we will study many
algorithms designed to bring order out of the potential chaos.
1.2.2 Metropolitan Area Networks
A metropolitan area network, or MAN, covers a city. The best-known example of a MAN is the cable television
network available in many cities. This system grew from earlier community antenna systems used in areas with
poor over-the-air television reception. In these early systems, a large antenna was placed on top of a nearby hill
and the signal was then piped to the subscribers’ houses.
At first, these were locally-designed, ad hoc systems. Then companies began jumping into the business, getting
contracts from city governments to wire up an entire city. The next step was television programming and even
entire channels designed for cable only. Often these channels were highly specialized, such as all news, all
sports, all cooking, all gardening, and so on. But from their inception until the late 1990s, they were intended for
television reception only.
Starting when the Internet attracted a mass audience, the cable TV network operators began to realize that with
some changes to the system, they could provide two-way Internet service in unused parts of the spectrum. At
that point, the cable TV system began to morph from a way to distribute television to a metropolitan area
network. To a first approximation, a MAN might look something like the system shown in Fig. 1-8. In this figure
we see both television signals and Internet being fed into the centralized head end for subsequent distribution to
people’s homes. We will come back to this subject in detail in Chap. 2.
Figure 1-8. A metropolitan area network based on cable TV.
Cable television is not the only MAN. Recent developments in high-speed wireless Internet access resulted in
another MAN, which has been standardized as IEEE 802.16. We will look at this area in Chap. 2.
1.2.3 Wide Area Networks
A wide area network, or WAN, spans a large geographical area, often a country or continent. It contains a
collection of machines intended for running user (i.e., application) programs. We will follow traditional usage and
call these machines hosts. The hosts are connected by a communication subnet, or just subnet for short. The
hosts are owned by the customers (e.g., people’s personal computers), whereas the communication subnet is
typically owned and operated by a telephone company or Internet service provider. The job of the subnet is to
carry messages from host to host, just as the telephone system carries words from speaker to listener.
Separation of the pure communication aspects of the network (the subnet) from the application aspects (the hosts) greatly simplifies the complete network design.
In most wide area networks, the subnet consists of two distinct components: transmission lines and switching
elements. Transmission lines move bits between machines. They can be made of copper wire, optical fiber, or
even radio links. Switching elements are specialized computers that connect three or more transmission lines.
When data arrive on an incoming line, the switching element must choose an outgoing line on which to forward
them. These switching computers have been called by various names in the past; the name router is now most
commonly used. Unfortunately, some people pronounce it "rooter" and others have it rhyme with "doubter."
Determining the correct pronunciation will be left as an exercise for the reader. (Note: the perceived correct
answer may depend on where you live.)
In this model, shown in Fig. 1-9, each host is frequently connected to a LAN on which a router is present,
although in some cases a host can be connected directly to a router. The collection of communication lines and
routers (but not the hosts) form the subnet.
Figure 1-9. Relation between hosts on LANs and the subnet.
A short comment about the term "subnet" is in order here. Originally, its only meaning was the collection of
routers and communication lines that moved packets from the source host to the destination host. However,
some years later, it also acquired a second meaning in conjunction with network addressing (which we will
discuss in Chap. 5). Unfortunately, no widely-used alternative exists for its initial meaning, so with some
hesitation we will use it in both senses. From the context, it will always be clear which is meant.
In most WANs, the network contains numerous transmission lines, each one connecting a pair of routers. If two
routers that do not share a transmission line wish to communicate, they must do this indirectly, via other routers.
When a packet is sent from one router to another via one or more intermediate routers, the packet is received at
each intermediate router in its entirety, stored there until the required output line is free, and then forwarded. A
subnet organized according to this principle is called a store-and-forward or packet-switched subnet. Nearly all
wide area networks (except those using satellites) have store-and-forward subnets. When the packets are small
and all the same size, they are often called cells.
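The store-and-forward rule can be sketched as a router that queues each fully received packet on an output line until that line is free. This is only a schematic illustration; the packet, line, and routing-table objects are assumptions.

    from collections import deque

    class StoreAndForwardRouter:
        """Sketch of store-and-forward: receive a whole packet, store it in the
        queue of the chosen output line, and send it when the line is free."""
        def __init__(self, routing_table):
            self.routing_table = routing_table     # maps destination -> output line
            self.queues = {line: deque() for line in set(routing_table.values())}

        def receive(self, packet):
            line = self.routing_table[packet.destination]   # choose the outgoing line
            self.queues[line].append(packet)                # store until the line is free

        def service(self, line):
            if self.queues[line] and line.is_free():
                line.send(self.queues[line].popleft())      # forward the stored packet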
The principle of a packet-switched WAN is so important that it is worth devoting a few more words to it.
Generally, when a process on some host has a message to be sent to a process on some other host, the
sending host first cuts the message into packets, each one bearing its number in the sequence. These packets
are then injected into the network one at a time in quick succession. The packets are transported individually
over the network and deposited at the receiving host, where they are reassembled into the original message and
delivered to the receiving process. A stream of packets resulting from some initial message is illustrated in Fig.
1-10.
Figure 1-10. A stream of packets from sender to receiver.
In this figure, all the packets follow the route ACE, rather than ABDE or ACDE. In some networks all packets
from a given message must follow the same route; in others each packet is routed separately. Of course, if ACE
is the best route, all packets may be sent along it, even if each packet is individually routed.
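The cutting and reassembly described above amounts to numbering fixed-size slices of the message and sorting them back together at the far end. A minimal, self-contained Python sketch:

    def packetize(message, max_payload):
        """Cut a message into (sequence number, payload) packets."""
        return [(seq, message[i:i + max_payload])
                for seq, i in enumerate(range(0, len(message), max_payload))]

    def reassemble(packets):
        """Rebuild the original message even if the packets arrive out of order."""
        return b"".join(payload for _, payload in sorted(packets))

    # Example: a 10,000-byte message sent as 1,000-byte packets and reassembled.
    msg = b"x" * 10_000
    assert reassemble(reversed(packetize(msg, 1_000))) == msg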
Routing decisions are made locally. When a packet arrives at router A, it is up to A to decide if this packet should
be sent on the line to B or the line to C. How A makes that decision is called the routing algorithm. Many of them
exist. We will study some of them in detail in Chap. 5.
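In its simplest form, the result of a routing algorithm is just a forwarding table that the router consults for each packet. The table below is invented for illustration, loosely following the routers of Fig. 1-10.

    # A toy forwarding table for router A (contents invented for illustration).
    forwarding_table_at_A = {
        "B": "line to B",
        "C": "line to C",
        "D": "line to B",   # reach D via B
        "E": "line to C",   # reach E via C (the ACE route)
    }

    def choose_output_line(destination):
        """Router A's per-packet decision: look up the destination's next line."""
        return forwarding_table_at_A[destination]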
Not all WANs are packet switched. A second possibility for a WAN is a satellite system. Each router has an
antenna through which it can send and receive. All routers can hear the output from the satellite, and in some
cases they can also hear the upward transmissions of their fellow routers to the satellite as well. Sometimes the
routers are connected to a substantial point-to-point subnet, with only some of them having a satellite antenna.
Satellite networks are inherently broadcast and are most useful when the broadcast property is important.
1.1.2 Home Applications
In 1977, Ken Olsen was president of the Digital Equipment Corporation, then the number two computer vendor
in the world (after IBM). When asked why Digital was not going after the personal computer market in a big way,
he said: "There is no reason for any individual to have a computer in his home." History showed otherwise and
Digital no longer exists. Why do people buy computers for home use? Initially, for word processing and games,
but in recent years that picture has changed radically. Probably the biggest reason now is for Internet access.
Some of the more popular uses of the Internet for home users are as follows:
1. Access to remote information.
2. Person-to-person communication.
3. Interactive entertainment.
4. Electronic commerce.
Access to remote information comes in many forms. It can be surfing the World Wide Web for information or just
for fun. Information available includes the arts, business, cooking, government, health, history, hobbies,
recreation, science, sports, travel, and many others. Fun comes in too many ways to mention, plus some ways
that are better left unmentioned.
Many newspapers have gone on-line and can be personalized. For example, it is sometimes possible to tell a
newspaper that you want everything about corrupt politicians, big fires, scandals involving celebrities, and
epidemics, but no football, thank you. Sometimes it is even possible to have the selected articles downloaded to
your hard disk while you sleep or printed on your printer just before breakfast. As this trend continues, it will
cause massive unemployment among 12-year-old paperboys, but newspapers like it because distribution has
always been the weakest link in the whole production chain.
The next step beyond newspapers (plus magazines and scientific journals) is the on-line digital library. Many
professional organizations, such as the ACM (www.acm.org) and the IEEE Computer Society
(www.computer.org), already have many journals and conference proceedings on-line. Other groups are
following rapidly. Depending on the cost, size, and weight of book-sized notebook computers, printed books may
become obsolete. Skeptics should take note of the effect the printing press had on the medieval illuminated
manuscript.
All of the above applications involve interactions between a person and a remote database full of information.
The second broad category of network use is person-to-person communication, basically the 21st century’s
answer to the 19th century’s telephone. E-mail is already used on a daily basis by millions of people all over the
world and its use is growing rapidly. It already routinely contains audio and video as well as text and pictures.
Smell may take a while.
Any teenager worth his or her salt is addicted to instant messaging. This facility, derived from the UNIX talk
program in use since around 1970, allows two people to type messages at each other in real time. A multiperson
version of this idea is the chat room, in which a group of people can type messages for all to see.
Worldwide newsgroups, with discussions on every conceivable topic, are already commonplace among a select
group of people, and this phenomenon will grow to include the population at large. These discussions, in which
one person posts a message and all the other subscribers to the newsgroup can read it, run the gamut from
humorous to impassioned. Unlike chat rooms, newsgroups are not real time and messages are saved so that
when someone comes back from vacation, all messages that have been posted in the meanwhile are patiently
waiting for reading.
Another type of person-to-person communication often goes by the name of peer-to-peer communication, to
distinguish it from the client-server model (Parameswaran et al., 2001). In this form, individuals who form a loose
group can communicate with others in the group, as shown in Fig. 1-3. Every person can, in principle,
communicate with one or more other people; there is no fixed division into clients and servers.
Figure 1-3. In a peer-to-peer system there are no fixed clients and servers.
Peer-to-peer communication really hit the big time around 2000 with a service called Napster, which at its peak
had over 50 million music fans swapping music, in what was probably the biggest copyright infringement in all of
recorded history (Lam and Tan, 2001; and Macedonia, 2000). The idea was fairly simple. Members registered
the music they had on their hard disks in a central database maintained on the Napster server. If a member
wanted a song, he checked the database to see who had it and went directly there to get it. By not actually
keeping any music on its machines, Napster argued that it was not infringing anyone’s copyright. The courts did
not agree and shut it down.
However, the next generation of peer-to-peer systems eliminates the central database by having each user
maintain his own database locally, as well as providing a list of other nearby people who are members of the
system. A new user can then go to any existing member to see what he has and get a list of other members to
inspect for more music and more names. This lookup process can be repeated indefinitely to build up a large
local database of what is out there. It is an activity that would get tedious for people but is one at which
computers excel.
Legal applications for peer-to-peer communication also exist. For example, fans sharing public domain music or
sample tracks that new bands have released for publicity purposes, families sharing photos, movies, and
genealogical information, and teenagers playing multiperson on-line games. In fact, one of the most popular
Internet applications of all, e-mail, is inherently peer-to-peer. This form of communication is expected to grow
considerably in the future.
Electronic crime is not restricted to copyright law. Another hot area is electronic gambling. Computers have been
simulating things for decades. Why not simulate slot machines, roulette wheels, blackjack dealers, and more
gambling equipment? Well, because it is illegal in a lot of places. The trouble is, gambling is legal in a lot of other
places (England, for example) and casino owners there have grasped the potential for Internet gambling. What
happens if the gambler and the casino are in different countries, with conflicting laws? Good question.
Other communication-oriented applications include using the Internet to carry telephone calls, video phone, and
Internet radio, three rapidly growing areas. Another application is telelearning, meaning attending 8 A.M. classes
without the inconvenience of having to get out of bed first. In the long run, the use of networks to enhance
human-to-human communication may prove more important than any of the others.
Our third category is entertainment, which is a huge and growing industry. The killer application here (the one
that may drive all the rest) is video on demand. A decade or so hence, it may be possible to select any movie or
television program ever made, in any country, and have it displayed on your screen instantly. New films may
become interactive, where the user is occasionally prompted for the story direction (should Macbeth murder
Duncan or just bide his time?) with alternative scenarios provided for all cases. Live television may also become
interactive, with the audience participating in quiz shows, choosing among contestants, and so on.
On the other hand, maybe the killer application will not be video on demand. Maybe it will be game playing.
Already we have multiperson real-time simulation games, like hide-and-seek in a virtual dungeon, and flight
simulators with the players on one team trying to shoot down the players on the opposing team. If games are
played with goggles and three-dimensional real-time, photographic-quality moving images, we have a kind of
worldwide shared virtual reality.
Our fourth category is electronic commerce in the broadest sense of the term. Home shopping is already popular
and enables users to inspect the on-line catalogs of thousands of companies. Some of these catalogs will soon
provide the ability to get an instant video on any product by just clicking on the product’s name. After the
customer buys a product electronically but cannot figure out how to use it, on-line technical support may be
consulted.
Another area in which e-commerce is already happening is access to financial institutions. Many people already
pay their bills, manage their bank accounts, and handle their investments electronically. This will surely grow as
networks become more secure.
One area that virtually nobody foresaw is electronic flea markets (e-flea?). On-line auctions of second-hand
goods have become a massive industry. Unlike traditional e-commerce, which follows the client-server model,
on-line auctions are more of a peer-to-peer system, sort of consumer-to-consumer. Some of these forms of e-commerce have acquired cute little tags based on the fact that "to" and "2" are pronounced the same. The most
popular ones are listed in Fig. 1-4.
Figure 1-4. Some forms of e-commerce.
No doubt the range of uses of computer networks will grow rapidly in the future, and probably in ways no one
can now foresee. After all, how many people in 1990 predicted that teenagers tediously typing short text
messages on mobile phones while riding buses would be an immense money maker for telephone companies in
10 years? But short message service is very profitable.
Computer networks may become hugely important to people who are geographically challenged, giving them the
same access to services as people living in the middle of a big city. Telelearning may radically affect education;
universities may go national or international. Telemedicine is only now starting to catch on (e.g., remote patient
monitoring) but may become much more important. But the killer application may be something mundane, like
using the webcam in your refrigerator to see if you have to buy milk on the way home from work.
1.1.3 Mobile Users
Mobile computers, such as notebook computers and personal digital assistants (PDAs), are one of the fastest-growing
segments of the computer industry. Many owners of these computers have desktop machines back at
the office and want to be connected to their home base even when away from home or en route. Since having a
wired connection is impossible in cars and airplanes, there is a lot of interest in wireless networks. In this section
we will briefly look at some of the uses of wireless networks.
Why would anyone want one? A common reason is the portable office. People on the road often want to use
their portable electronic equipment to send and receive telephone calls, faxes, and electronic mail, surf the Web,
access remote files, and log on to remote machines. And they want to do this from anywhere on land, sea, or air.
For example, at computer conferences these days, the organizers often set up a wireless network in the
conference area. Anyone with a notebook computer and a wireless modem can just turn the computer on and be
connected to the Internet, as though the computer were plugged into a wired network. Similarly, some
universities have installed wireless networks on campus so students can sit under the trees and consult the
library’s card catalog or read their e-mail.
Wireless networks are of great value to fleets of trucks, taxis, delivery vehicles, and repairpersons for keeping in
contact with home. For example, in many cities, taxi drivers are independent businessmen, rather than being
employees of a taxi company. In some of these cities, the taxis have a display the driver can see. When a
customer calls up, a central dispatcher types in the pickup and destination points. This information is displayed
on the drivers’ displays and a beep sounds. The first driver to hit a button on the display gets the call.
Wireless networks are also important to the military. If you have to be able to fight a war anywhere on earth on
short notice, counting on using the local networking infrastructure is probably not a good idea. It is better to bring
your own.
Although wireless networking and mobile computing are often related, they are not identical, as Fig. 1-5 shows.
Here we see a distinction between fixed wireless and mobile wireless. Even notebook computers are sometimes
wired. For example, if a traveler plugs a notebook computer into the telephone jack in a hotel room, he has
mobility without a wireless network.
Figure 1-5. Combinations of wireless networks and mobile computing.
On the other hand, some wireless computers are not mobile. An important example is a company that owns an
older building lacking network cabling, and which wants to connect its computers. Installing a wireless network
may require little more than buying a small box with some electronics, unpacking it, and plugging it in. This
solution may be far cheaper than having workmen put in cable ducts to wire the building.
But of course, there are also the true mobile, wireless applications, ranging from the portable office to people
walking around a store with a PDA doing inventory. At many busy airports, car rental return clerks work in the
parking lot with wireless portable computers. They type in the license plate number of returning cars, and their
portable, which has a built-in printer, calls the main computer,
1.4 Reference Models
Now that we have discussed layered networks in the abstract, it is time to look at some examples. In the next
two sections we will discuss two important network architectures, the OSI reference model and the TCP/IP
reference model. Although the protocols associated with the OSI model are rarely used any more, the model
itself is actually quite general and still valid, and the features discussed at each layer are still very important. The
TCP/IP model has the opposite properties: the model itself is not of much use but the protocols are widely used.
For this reason we will look at both of them in detail. Also, sometimes you can learn more from failures than from
successes.
1.4.1 The OSI Reference Model
The OSI model (minus the physical medium) is shown in Fig. 1-20. This model is based on a proposal developed
by the International Organization for Standardization (ISO) as a first step toward international standardization of the
protocols used in the various layers (Day and Zimmermann, 1983). It was revised in 1995 (Day, 1995). The
model is called the ISO OSI (Open Systems Interconnection) Reference Model because it deals with connecting
open systems, that is, systems that are open for communication with other systems. We will just call it the OSI
model for short.
Figure 1-20. The OSI reference model.
The OSI model has seven layers. The principles that were applied to arrive at the seven layers can be briefly
summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally standardized
protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown together in the
same layer out of necessity and small enough that the architecture does not become unwieldy.
Below we will discuss each layer of the model in turn, starting at the bottom layer. Note that the OSI model itself
is not a network architecture because it does not specify the exact services and protocols to be used in each
layer. It just tells what each layer should do. However, ISO has also produced standards for all the layers,
although these are not part of the reference model itself. Each one has been published as a separate
international standard.
The Physical Layer
The physical layer is concerned with transmitting raw bits over a communication channel. The design issues
have to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit, not as a
0 bit. Typical questions here are how many volts should be used to represent a 1 and how many for a 0, how
many nanoseconds a bit lasts, whether transmission may proceed simultaneously in both directions, how the
initial connection is established and how it is torn down when both sides are finished, and how many pins the
network connector has and what each pin is used for. The design issues here largely deal with mechanical,
electrical, and timing interfaces, and the physical transmission medium, which lies below the physical layer.
The Data Link Layer
The main task of the data link layer is to transform a raw transmission facility into a line that appears free of
undetected transmission errors to the network layer. It accomplishes this task by having the sender break up the
input data into data frames (typically a few hundred or a few thousand bytes) and transmit the frames
sequentially. If the service is reliable, the receiver confirms correct receipt of each frame by sending back an
acknowledgement frame.
Another issue that arises in the data link layer (and most of the higher layers as well) is how to keep a fast
transmitter from drowning a slow receiver in data. Some traffic regulation mechanism is often needed to let the
transmitter know how much buffer space the receiver has at the moment. Frequently, this flow regulation and the
error handling are integrated.
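In its most basic form, the framing, acknowledgement, and flow-control machinery described above collapses into a stop-and-wait sender: cut the data into frames, send one, and do not send the next until the previous one is acknowledged. The send_frame and wait_for_ack primitives below are assumed to be supplied by the surrounding system; this is a sketch, not a complete data link protocol.

    def stop_and_wait_send(data, send_frame, wait_for_ack, frame_size=1500):
        """Sketch of the simplest data link discipline: one frame outstanding
        at a time, resent until the receiver acknowledges it."""
        frames = [data[i:i + frame_size] for i in range(0, len(data), frame_size)]
        for seq, frame in enumerate(frames):
            while True:
                send_frame(seq, frame)                 # hand the numbered frame downward
                if wait_for_ack(seq, timeout=1.0):     # receiver confirms correct receipt
                    break                              # a slow receiver thereby paces the sender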
Broadcast networks have an additional issue in the data link layer: how to control access to the shared channel.
A special sublayer of the data link layer, the medium access control sublayer, deals with this problem.
The Network Layer
The network layer controls the operation of the subnet. A key design issue is determining how packets are
routed from source to destination. Routes can be based on static tables that are "wired into" the network and
rarely changed. They can also be determined at the start of each conversation, for example, a terminal session
(e.g., a login to a remote machine). Finally, they can be highly dynamic, being determined anew for each packet,
to reflect the current network load.
If too many packets are present in the subnet at the same time, they will get in one another’s way, forming
bottlenecks. The control of such congestion also belongs to the network layer. More generally, the quality of
service provided (delay, transit time, jitter, etc.) is also a network layer issue.
When a packet has to travel from one network to another to get to its destination, many problems can arise. The
addressing used by the second network may be different from the first one. The second one may not accept the
packet at all because it is too large. The protocols may differ, and so on. It is up to the network layer to overcome
all these problems to allow heterogeneous networks to be interconnected.
In broadcast networks, the routing problem is simple, so the network layer is often thin or even nonexistent.
The Transport Layer
The basic function of the transport layer is to accept data from above, split it up into smaller units if need be,
pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. Furthermore, all
this must be done efficiently and in a way that isolates the upper layers from the inevitable changes in the
hardware technology.
The transport layer also determines what type of service to provide to the session layer, and, ultimately, to the
users of the network. The most popular type of transport connection is an error-free point-to-point channel that
delivers messages or bytes in the order in which they were sent. However, other possible kinds of transport
service are the transporting of isolated messages, with no guarantee about the order of delivery, and the
broadcasting of messages to multiple destinations. The type of service is determined when the connection is
established. (As an aside, an error-free channel is impossible to achieve; what people really mean by this term is
that the error rate is low enough to ignore in practice.)
The transport layer is a true end-to-end layer, all the way from the source to the destination. In other words, a
program on the source machine carries on a conversation with a similar program on the destination machine,
using the message headers and control messages. In the lower layers, the protocols are between each machine
and its immediate neighbors, and not between the ultimate source and destination machines, which may be
separated by many routers. The difference between layers 1 through 3, which are chained, and layers 4 through
7, which are end-to-end, is illustrated in Fig. 1-20.
The Session Layer
The session layer allows users on different machines to establish sessions between them. Sessions offer
various services, including dialog control (keeping track of whose turn it is to transmit), token management
(preventing two parties from attempting the same critical operation at the same time), and synchronization
(checkpointing long transmissions to allow them to continue from where they were after a crash).
The Presentation Layer
Unlike lower layers, which are mostly concerned with moving bits around, the presentation layer is concerned
with the syntax and semantics of the information transmitted. In order to make it possible for computers with
different data representations to communicate, the data structures to be exchanged can be defined in an
abstract way, along with a standard encoding to be used "on the wire." The presentation layer manages these
abstract data structures and allows higher-level data structures (e.g., banking records), to be defined and
exchanged.
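To make the idea concrete, the sketch below defines a "banking record" abstractly as a Python dictionary and agrees on one standard encoding for the wire. JSON is used here purely for illustration; the OSI presentation layer actually relies on notations such as ASN.1.

    import json

    # An abstract record plus an agreed-upon wire encoding, so machines with
    # different internal data representations can still exchange it.
    record = {"account": 31415926, "owner": "A. Customer", "balance_cents": 102500}

    wire_bytes = json.dumps(record, sort_keys=True).encode("utf-8")   # standard form "on the wire"
    received = json.loads(wire_bytes.decode("utf-8"))                 # decoded by the other machine
    assert received == record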
The Application Layer
The application layer contains a variety of protocols that are commonly needed by users. One widely-used
application protocol is HTTP (HyperText Transfer Protocol), which is the basis for the World Wide Web. When a
browser wants a Web page, it sends the name of the page it wants to the server using HTTP. The server then
sends the page back. Other application protocols are used for file transfer, electronic mail, and network news.
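The browser-server exchange described above can be reproduced in a few lines with Python's standard library. The host name is only an example, and a real browser sends many more headers; this is a minimal sketch of the request-response pattern.

    import http.client

    # Ask a server for a page by name and read the page it sends back.
    conn = http.client.HTTPConnection("www.example.com", 80)    # example host
    conn.request("GET", "/index.html")                          # "send the name of the page"
    response = conn.getresponse()
    page = response.read()                                      # "the server sends the page back"
    conn.close()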
1.4.2 The TCP/IP Reference Model
Let us now turn from the OSI reference model to the reference model used in the grandparent of all wide area
computer networks, the ARPANET, and its successor, the worldwide Internet. Although we will give a brief
history of the ARPANET later, it is useful to mention a few key aspects of it now. The ARPANET was a research
network sponsored by the DoD (U.S. Department of Defense). It eventually connected hundreds of universities
and government installations, using leased telephone lines. When satellite and radio networks were added later,
the existing protocols had trouble interworking with them, so a new reference architecture was needed. Thus,
the ability to connect multiple networks in a seamless way was one of the major design goals from the very
beginning. This architecture later became known as the TCP/IP Reference Model, after its two primary protocols.
It was first defined in (Cerf and Kahn, 1974). A later perspective is given in (Leiner et al., 1985). The design
philosophy behind the model is discussed in (Clark, 1988).
Given the DoD’s worry that some of its precious hosts, routers, and internetwork gateways might get blown to
pieces at a moment’s notice, another major goal was that the network be able to survive loss of subnet
hardware, with existing conversations not being broken off. In other words, DoD wanted connections to remain
intact as long as the source and destination machines were functioning, even if some of the machines or
transmission lines in between were suddenly put out of operation. Furthermore, a flexible architecture was
needed since applications with divergent requirements were envisioned, ranging from transferring files to real-time speech transmission.
The Internet Layer
All these requirements led to the choice of a packet-switching network based on a connectionless internetwork
layer. This layer, called the internet layer, is the linchpin that holds the whole architecture together. Its job is to
permit hosts to inject packets into any network and have them travel independently to the destination (potentially
on a different network). They may even arrive in a different order than they were sent, in which case it is the job
of higher layers to rearrange them, if in-order delivery is desired. Note that "internet" is used here in a generic
sense, even though this layer is present in the Internet.
The analogy here is with the (snail) mail system. A person can drop a sequence of international letters into a
mail box in one country, and with a little luck, most of them will be delivered to the correct address in the
destination country. Probably the letters will travel through one or more international mail gateways along the
way, but this is transparent to the users. Furthermore, that each country (i.e., each network) has its own stamps,
preferred envelope sizes, and delivery rules is hidden from the users.
The internet layer defines an official packet format and protocol called IP (Internet Protocol). The job of the
internet layer is to deliver IP packets where they are supposed to go. Packet routing is clearly the major issue
here, as is avoiding congestion. For these reasons, it is reasonable to say that the TCP/IP internet layer is
similar in functionality to the OSI network layer. Figure 1-21 shows this correspondence.
Figure 1-21. The TCP/IP reference model.
The Transport Layer
The layer above the internet layer in the TCP/IP model is now usually called the transport layer. It is designed to
allow peer entities on the source and destination hosts to carry on a conversation, just as in the OSI transport
layer. Two end-to-end transport protocols have been defined here. The first one, TCP (Transmission Control
Protocol), is a reliable connection-oriented protocol that allows a byte stream originating on one machine to be
delivered without error on any other machine in the internet. It fragments the incoming byte stream into discrete
messages and passes each one on to the internet layer. At the destination, the receiving TCP process
reassembles the received messages into the output stream. TCP also handles flow control to make sure a fast
sender cannot swamp a slow receiver with more messages than it can handle.
The second protocol in this layer, UDP (User Datagram Protocol), is an unreliable, connectionless protocol for
applications that do not want TCP’s sequencing or flow control and wish to provide their own. It is also widely
used for one-shot, client-server-type request-reply queries and applications in which prompt delivery is more
important than accurate delivery, such as transmitting speech or video. The relation of IP, TCP, and UDP is
shown in Fig. 1-22. Since the model was developed, IP has been implemented on many other networks.
Figure 1-22. Protocols and networks in the TCP/IP model initially.
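The practical difference between the two transport protocols shows up directly in the calls an application makes. A minimal sketch using Python sockets (the address and port are placeholders):

    import socket

    # TCP: connection-oriented, reliable, ordered byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("192.0.2.1", 7))           # placeholder address; sets up a connection first
    tcp.sendall(b"hello over TCP")          # delivered in order, retransmitted if lost
    tcp.close()

    # UDP: connectionless datagrams -- no setup, no delivery or ordering guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello over UDP", ("192.0.2.1", 7))
    udp.close()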
The Application Layer
The TCP/IP model does not have session or presentation layers. No need for them was perceived, so they were
not included. Experience with the OSI model has proven this view correct: they are of little use to most
applications.
On top of the transport layer is the application layer. It contains all the higher-level protocols. The early ones
included virtual terminal (TELNET), file transfer (FTP), and electronic mail (SMTP), as shown in Fig. 1-22. The
virtual terminal protocol allows a user on one machine to log onto a distant machine and work there. The file
transfer protocol provides a way to move data efficiently from one machine to another. Electronic mail was
originally just a kind of file transfer, but later a specialized protocol (SMTP) was developed for it. Many other
protocols have been added to these over the years: the Domain Name System (DNS) for mapping host names
onto their network addresses, NNTP, the protocol for moving USENET news articles around, and HTTP, the
protocol for fetching pages on the World Wide Web, and many others.
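The DNS mapping from host names to addresses can be exercised directly from any networked program; here is a short query using Python's standard library (the host name is just an example).

    import socket

    # Ask DNS for the addresses registered for a host name.
    addresses = {info[4][0] for info in socket.getaddrinfo("www.example.com", None)}
    print(addresses)   # e.g., a set of IPv4/IPv6 addresses for the name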
The Host-to-Network Layer
Below the internet layer is a great void. The TCP/IP reference model does not really say much about what
happens here, except to point out that the host has to connect to the network using some protocol so it can send
IP packets to it. This protocol is not defined and varies from host to host and network to network. Books and
papers about the TCP/IP model rarely discuss it.
1.4.3 A Comparison of the OSI and TCP/IP Reference Models
The OSI and TCP/IP reference models have much in common. Both are based on the concept of a stack of
independent protocols. Also, the functionality of the layers is roughly similar. For example, in both models the
layers up through and including the transport layer are there to provide an end-to-end, network-independent
transport service to processes wishing to communicate. These layers form the transport provider. Again in both
models, the layers above transport are application-oriented users of the transport service.
Despite these fundamental similarities, the two models also have many differences. In this section we will focus
on the key differences between the two reference models. It is important to note that we are comparing the
reference models here, not the corresponding protocol stacks. The protocols themselves will be discussed later.
For an entire book comparing and contrasting TCP/IP and OSI, see (Piscitello and Chapin, 1993).
Three concepts are central to the OSI model:
1. Services.
2. Interfaces.
3. Protocols.
Probably the biggest contribution of the OSI model is to make the distinction between these three concepts
explicit. Each layer performs some services for the layer above it. The service definition tells what the layer does,
not how entities above it access it or how the layer works. It defines the layer’s semantics.
A layer’s interface tells the processes above it how to access it. It specifies what the parameters are and what
results to expect. It, too, says nothing about how the layer works inside.
Finally, the peer protocols used in a layer are the layer’s own business. It can use any protocols it wants to, as
long as it gets the job done (i.e., provides the offered services). It can also change them at will without affecting
software in higher layers.
These ideas fit very nicely with modern ideas about object-oriented programming. An object, like a layer, has a
set of methods (operations) that processes outside the object can invoke. The semantics of these methods
define the set of services that the object offers. The methods’ parameters and results form the object’s interface.
The code internal to the object is its protocol and is not visible or of any concern outside the object.
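The analogy can be pushed one step further in code. In the illustrative class below, the public method is the interface, what it promises is the service, and the private helpers play the role of the protocol, which can be changed without affecting callers. The class is hypothetical, not an API from any real protocol stack.

    class TransportLayer:
        """Illustration of the service/interface/protocol split."""

        def send(self, destination, data):          # interface: what higher layers may call
            # Service: deliver 'data' to the peer at 'destination'.
            for segment in self._segment(data):     # protocol: private machinery, replaceable
                self._transmit(destination, segment)

        # --- internal "protocol"; invisible to, and replaceable without affecting, higher layers ---
        def _segment(self, data, size=1024):
            return [data[i:i + size] for i in range(0, len(data), size)]

        def _transmit(self, destination, segment):
            pass   # hand the segment to the layer below (omitted in this sketch)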
The TCP/IP model did not originally clearly distinguish between service, interface, and protocol, although people
have tried to retrofit it after the fact to make it more OSI-like. For example, the only real services offered by the
internet layer are SEND IP PACKET and RECEIVE IP PACKET.
As a consequence, the protocols in the OSI model are better hidden than in the TCP/IP model and can be
replaced relatively easily as the technology changes. Being able to make such changes is one of the main
purposes of having layered protocols in the first place.
The OSI reference model was devised before the corresponding protocols were invented. This ordering means
that the model was not biased toward one particular set of protocols, a fact that made it quite general. The
downside of this ordering is that the designers did not have much experience with the subject and did not have a
good idea of which functionality to put in which layer.
For example, the data link layer originally dealt only with point-to-point networks. When broadcast networks
came around, a new sublayer had to be hacked into the model. When people started to build real networks using
the OSI model and existing protocols, it was discovered that these networks did not match the required service
specifications (wonder of wonders), so convergence sublayers had to be grafted onto the model to provide a
place for papering over the differences. Finally, the committee originally expected that each country would have
one network, run by the government and using the OSI protocols, so no thought was given to internetworking.
To make a long story short, things did not turn out that way.
With TCP/IP the reverse was true: the protocols came first, and the model was really just a description of the
existing protocols. There was no problem with the protocols fitting the model. They fit perfectly. The only trouble
was that the model did not fit any other protocol stacks. Consequently, it was not especially useful for describing
other, non-TCP/IP networks.
Turning from philosophical matters to more specific ones, an obvious difference between the two models is the
number of layers: the OSI model has seven layers and the TCP/IP model has four. Both have (inter)network,
transport, and application layers, but the other layers are different.
Another difference is in the area of connectionless versus connection-oriented communication. The OSI model
supports both connectionless and connection-oriented communication in the network layer, but only connection-oriented communication in the transport layer, where it counts (because the transport service is visible to the
users). The TCP/IP model has only one mode in the network layer (connectionless) but supports both modes in
the transport layer, giving the users a choice. This choice is especially important for simple request-response
protocols.
1.5 Example Networks
The subject of computer networking covers many different kinds of networks, large and small, well known and
less well known. They have different goals, scales, and technologies. In the following sections, we will look at
some examples, to get an idea of the variety one finds in the area of computer networking.
We will start with the Internet, probably the best known network, and look at its history, evolution, and
technology. Then we will consider ATM, which is often used within the core of large (telephone) networks.
Technically, it is quite different from the Internet, contrasting nicely with it. Next we will introduce Ethernet, the
dominant local area network. Finally, we will look at IEEE 802.11, the standard for wireless LANs.
1.5.1 The Internet
The Internet is not a network at all, but a vast collection of different networks that use certain common protocols
and provide certain common services. It is an unusual system in that it was not planned by anyone and is not
controlled by anyone. To better understand it, let us start from the beginning and see how it has developed and
why. For a wonderful history of the Internet, John Naughton’s (2000) book is highly recommended. It is one of
those rare books that is not only fun to read, but also has 20 pages of ibid.’s and op. cit.’s for the serious
historian. Some of the material below is based on this book.
Of course, countless technical books have been written about the Internet and its protocols as well. For more
information, see, for example, (Maufer, 1999).
The ARPANET
The story begins in the late 1950s. At the height of the Cold War, the DoD wanted a command-and-control
network that could survive a nuclear war. At that time, all military communications used the public telephone
network, which was considered vulnerable. The reason for this belief can be gleaned from Fig. 1-25(a). Here the
black dots represent telephone switching offices, each of which was connected to thousands of telephones.
These switching offices were, in turn, connected to higher-level switching offices (toll offices), to form a national
hierarchy with only a small amount of redundancy. The vulnerability of the system was that the destruction of a
few key toll offices could fragment the system into many isolated islands.
Figure 1-25. (a) Structure of the telephone system. (b) Baran’s proposed distributed switching system.
Around 1960, the DoD awarded a contract to the RAND Corporation to find a solution. One of its employees,
Paul Baran, came up with the highly distributed and fault-tolerant design of Fig. 1-25(b). Since the paths
between any two switching offices were now much longer than analog signals could travel without distortion,
Baran proposed using digital packet-switching technology throughout the system. Baran wrote several reports
for the DoD describing his ideas in detail. Officials at the Pentagon liked the concept and asked AT&T, then the
U.S. national telephone monopoly, to build a prototype. AT&T dismissed Baran’s ideas out of hand. The biggest
and richest corporation in the world was not about to let some young whippersnapper tell it how to build a
telephone system. They said Baran’s network could not be built and the idea was killed.
Several years went by and still the DoD did not have a better command-and-control system. To understand what
happened next, we have to go back to October 1957, when the Soviet Union beat the U.S. into space with the
launch of the first artificial satellite, Sputnik. When President Eisenhower tried to find out who was asleep at the
switch, he was appalled to find the Army, Navy, and Air Force squabbling over the Pentagon’s research budget.
His immediate response was to create a single defense research organization, ARPA, the Advanced Research
Projects Agency. ARPA had no scientists or laboratories; in fact, it had nothing more than an office and a small
(by Pentagon standards) budget. It did its work by issuing grants and contracts to universities and companies
whose ideas looked promising to it.
For the first few years, ARPA tried to figure out what its mission should be, but in 1967, the attention of ARPA’s
then director, Larry Roberts, turned to networking. He contacted various experts to decide what to do. One of
them, Wesley Clark, suggested building a packet-switched subnet, giving each host its own router, as illustrated
in Fig. 1-10.
After some initial skepticism, Roberts bought the idea and presented a somewhat vague paper about it at the
ACM SIGOPS Symposium on Operating System Principles held in Gatlinburg, Tennessee in late 1967 (Roberts,
1967). Much to Roberts’ surprise, another paper at the conference described a similar system that had not only
been designed but actually implemented under the direction of Donald Davies at the National Physical
Laboratory in England. The NPL system was not a national system (it just connected several computers on the
NPL campus), but it demonstrated that packet switching could be made to work. Furthermore, it cited Baran’s
now discarded earlier work. Roberts came away from Gatlinburg determined to build what later became known
as the ARPANET.
The subnet would consist of minicomputers called IMPs (Interface Message Processors) connected by 56-kbps
transmission lines. For high reliability, each IMP would be connected to at least two other IMPs. The subnet was
to be a datagram subnet, so if some lines and IMPs were destroyed, messages could be automatically rerouted
along alternative paths.
Each node of the network was to consist of an IMP and a host, in the same room, connected by a short wire. A
host could send messages of up to 8063 bits to its IMP, which would then break these up into packets of at most
1008 bits and forward them independently toward the destination. Each packet was received in its entirety
before being forwarded, so the subnet was the first electronic store-and-forward packet-switching network.
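The numbers quoted above make a small worked example: a maximum-size message of 8063 bits cut into packets of at most 1008 bits needs ceil(8063 / 1008) = 8 packets.

    import math

    MAX_MESSAGE_BITS = 8063   # largest message a host could hand to its IMP
    MAX_PACKET_BITS = 1008    # largest packet an IMP would forward

    def packets_needed(message_bits):
        """How many packets an IMP cut a message of the given size into."""
        return math.ceil(message_bits / MAX_PACKET_BITS)

    print(packets_needed(MAX_MESSAGE_BITS))   # -> 8 for a maximum-size message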
ARPA then put out a tender for building the subnet. Twelve companies bid for it. After evaluating all the
proposals, ARPA selected BBN, a consulting firm in Cambridge, Massachusetts, and in December 1968,
awarded it a contract to build the subnet and write the subnet software. BBN chose to use specially modified
Honeywell DDP-316 minicomputers with 12K 16-bit words of core memory as the IMPs. The IMPs did not have
disks, since moving parts were considered unreliable. The IMPs were interconnected by 56-kbps lines leased
from telephone companies. Although 56 kbps is now the choice of teenagers who cannot afford ADSL or cable,
it was then the best money could buy.
The software was split into two parts: subnet and host. The subnet software consisted of the IMP end of the
host-IMP connection, the IMP-IMP protocol, and a source IMP to destination IMP protocol designed to improve
reliability. The original ARPANET design is shown in Fig. 1-26.
Figure 1-26. The original ARPANET design.
Outside the subnet, software was also needed, namely, the host end of the host-IMP connection, the host-host
protocol, and the application software. It soon became clear that BBN felt that when it had accepted a message
on a host-IMP wire and placed it on the host-IMP wire at the destination, its job was done.
Roberts had a problem: the hosts needed software too. To deal with it, he convened a meeting of network
researchers, mostly graduate students, at Snowbird, Utah, in the summer of 1969. The graduate students
expected some network expert to explain the grand design of the network and its software to them and then to
assign each of them the job of writing part of it. They were astounded when there was no network expert and no
grand design. They had to figure out what to do on their own.
Nevertheless, somehow an experimental network went on the air in December 1969 with four nodes: at UCLA,
UCSB, SRI, and the University of Utah. These four were chosen because all had a large number of ARPA
contracts, and all had different and completely incompatible host computers (just to make it more fun). The
network grew quickly as more IMPs were delivered and installed; it soon spanned the United States. Figure 1-27
shows how rapidly the ARPANET grew in the first 3 years.
Figure 1-27. Growth of the ARPANET. (a) December 1969. (b) July 1970. (c) March 1971. (d) April 1972. (e)
September 1972.
In addition to helping the fledgling ARPANET grow, ARPA also funded research on the use of satellite networks
and mobile packet radio networks. In one now famous demonstration, a truck driving around in California used
the packet radio network to send messages to SRI, which were then forwarded over the ARPANET to the East
Coast, where they were shipped to University College in London over the satellite network. This allowed a
researcher in the truck to use a computer in London while driving around in California.
This experiment also demonstrated that the existing ARPANET protocols were not suitable for running over
multiple networks. This observation led to more research on protocols, culminating with the invention of the
TCP/IP model and protocols (Cerf and Kahn, 1974). TCP/IP was specifically designed to handle communication
over internetworks, something becoming increasingly important as more and more networks were being hooked
up to the ARPANET.
To encourage adoption of these new protocols, ARPA awarded several contracts to BBN and the University of
California at Berkeley to integrate them into Berkeley UNIX. Researchers at Berkeley developed a convenient
program interface to the network (sockets) and wrote many application, utility, and management programs to
make networking easier.
The timing was perfect. Many universities had just acquired a second or third VAX computer and a LAN to
connect them, but they had no networking software. When 4.2BSD came along, with TCP/IP, sockets, and many
network utilities, the complete package was adopted immediately. Furthermore, with TCP/IP, it was easy for the
LANs to connect to the ARPANET, and many did.
During the 1980s, additional networks, especially LANs, were connected to the ARPANET. As the scale
increased, finding hosts became increasingly expensive, so DNS (Domain Name System) was created to
organize machines into domains and map host names onto IP addresses. Since then, DNS has become a
generalized, distributed database system for storing a variety of information related to naming. We will study it in
detail in Chap. 7.
NSFNET
By the late 1970s, NSF (the U.S. National Science Foundation) saw the enormous impact the ARPANET was
having on university research, allowing scientists across the country to share data and collaborate on research
projects. However, to get on the ARPANET, a university had to have a research contract with the DoD, which
many did not have. NSF’s response was to design a successor to the ARPANET that would be open to all
university research groups. To have something concrete to start with, NSF decided to build a backbone network
to connect its six supercomputer centers, in San Diego, Boulder, Champaign, Pittsburgh, Ithaca, and Princeton.
Each supercomputer was given a little brother, consisting of an LSI-11 microcomputer called a fuzzball. The
fuzzballs were connected with 56-kbps leased lines and formed the subnet, the same hardware technology as
the ARPANET used. The software technology was different, however: the fuzzballs spoke TCP/IP right from the
start, making it the first TCP/IP WAN.
NSF also funded some (eventually about 20) regional networks that connected to the backbone to allow users at
thousands of universities, research labs, libraries, and museums to access any of the supercomputers and to
communicate with one another. The complete network, including the backbone and the regional networks, was
called NSFNET. It connected to the ARPANET through a link between an IMP and a fuzzball in the Carnegie-
Mellon machine room. The first NSFNET backbone is illustrated in Fig. 1-28.
Figure 1-28. The NSFNET backbone in 1988.
NSFNET was an instantaneous success and was overloaded from the word go. NSF immediately began
planning its successor and awarded a contract to the Michigan-based MERIT consortium to run it. Fiber optic
channels at 448 kbps were leased from MCI (since merged with WorldCom) to provide the version 2 backbone.
IBM PC-RTs were used as routers. This, too, was soon overwhelmed, and by 1990, the second backbone was
upgraded to 1.5 Mbps.
As growth continued, NSF realized that the government could not continue financing networking forever.
Furthermore, commercial organizations wanted to join but were forbidden by NSF’s charter from using networks
NSF paid for. Consequently, NSF encouraged MERIT, MCI, and IBM to form a nonprofit corporation, ANS
(Advanced Networks and Services), as the first step along the road to commercialization. In 1990, ANS took
over NSFNET and upgraded the 1.5-Mbps links to 45 Mbps to form ANSNET. This network operated for 5 years
and was then sold to America Online. But by then, various companies were offering commercial IP service and it
was clear the government should now get out of the networking business.
To ease the transition and make sure every regional network could communicate with every other regional
network, NSF awarded contracts to four different network operators to establish a NAP (Network Access Point).
These operators were PacBell (San Francisco), Ameritech (Chicago), MFS (Washington, D.C.), and Sprint (New
York City, where for NAP purposes, Pennsauken, New Jersey counts as New York City). Every network operator
that wanted to provide backbone service to the NSF regional networks had to connect to all the NAPs.
This arrangement meant that a packet originating on any regional network had a choice of backbone carriers to
get from its NAP to the destination’s NAP. Consequently, the backbone carriers were forced to compete for the
regional networks’ business on the basis of service and price, which was the idea, of course. As a result, the
concept of a single default backbone was replaced by a commercially-driven competitive infrastructure. Many
people like to criticize the Federal Government for not being innovative, but in the area of networking, it was DoD
and NSF that created the infrastructure that formed the basis for the Internet and then handed it over to industry
to operate.
During the 1990s, many other countries and regions also built national research networks, often patterned on the
ARPANET and NSFNET. These included EuropaNET and EBONE in Europe, which started out with 2-Mbps
lines and then upgraded to 34-Mbps lines. Eventually, the network infrastructure in Europe was handed over to
industry as well.
Internet Usage
The number of networks, machines, and users connected to the ARPANET grew rapidly after TCP/IP became
the only official protocol on January 1, 1983. When NSFNET and the ARPANET were interconnected, the growth
became exponential. Many regional networks joined up, and connections were made to networks in Canada,
Europe, and the Pacific.
Sometime in the mid-1980s, people began viewing the collection of networks as an internet, and later as the
Internet, although there was no official dedication with some politician breaking a bottle of champagne over a
fuzzball.
The glue that holds the Internet together is the TCP/IP reference model and TCP/IP protocol stack. TCP/IP
makes universal service possible and can be compared to the adoption of standard gauge by the railroads in the
19th century or the adoption of common signaling protocols by all the telephone companies.
What does it actually mean to be on the Internet? Our definition is that a machine is on the Internet if it runs the
TCP/IP protocol stack, has an IP address, and can send IP packets to all the other machines on the Internet.
The mere ability to send and receive electronic mail is not enough, since e-mail is gatewayed to many networks
outside the Internet. However, the issue is clouded somewhat by the fact that millions of personal computers can
call up an Internet service provider using a modem, be assigned a temporary IP address, and send IP packets to
other Internet hosts. It makes sense to regard such machines as being on the Internet for as long as they are
connected to the service provider’s router.
Traditionally (meaning 1970 to about 1990), the Internet and its predecessors had four main applications:
1. E-mail. The ability to compose, send, and receive electronic mail has been around since the early days
of the ARPANET and is enormously popular. Many people get dozens of messages a day and consider
it their primary way of interacting with the outside world, far outdistancing the telephone and snail mail.
E-mail programs are available on virtually every kind of computer these days.
2. News. Newsgroups are specialized forums in which users with a common interest can exchange
messages. Thousands of newsgroups exist, devoted to technical and nontechnical topics, including
computers, science, recreation, and politics. Each newsgroup has its own etiquette, style, and customs,
and woe betide anyone violating them.
3. Remote login. Using the telnet, rlogin, or ssh programs, users anywhere on the Internet can log on to
any other machine on which they have an account.
4. File transfer. Using the FTP program, users can copy files from one machine on the Internet to another.
Vast numbers of articles, databases, and other information are available this way; a small example follows this list.
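As a present-day sketch of the file transfer application in item 4, the fragment below fetches a file by anonymous FTP using ftplib from Python's standard library. The server name, directory, and file name are placeholders.

from ftplib import FTP

# Anonymous FTP download; server, directory, and file names are placeholders.
with FTP("ftp.example.com") as ftp:
    ftp.login()                                  # anonymous login
    ftp.cwd("pub")
    with open("article.txt", "wb") as out:
        ftp.retrbinary("RETR article.txt", out.write)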
Up until the early 1990s, the Internet was largely populated by academic, government, and industrial
researchers. One new application, the WWW (World Wide Web) changed all that and brought millions of new,
nonacademic users to the net. This application, invented by CERN physicist Tim Berners-Lee, did not change
any of the underlying facilities but made them easier to use. Together with the Mosaic browser, written by Marc
Andreessen at the National Center for Supercomputing Applications in Urbana, Illinois, the WWW made it
possible for a site to set up a number of pages of information containing text, pictures, sound, and even video,
with embedded links to other pages. By clicking on a link, the user is suddenly transported to the page pointed to
by that link. For example, many companies have a home page with entries pointing to other pages for product
information, price lists, sales, technical support, communication with employees, stockholder information, and
more.
Numerous other kinds of pages have come into existence in a very short time, including maps, stock market
tables, library card catalogs, recorded radio programs, and even a page pointing to the complete text of many
books whose copyrights have expired (Mark Twain, Charles Dickens, etc.). Many people also have personal
pages (home pages).
Much of this growth during the 1990s was fueled by companies called ISPs (Internet Service Providers). These
are companies that offer individual users at home the ability to call up one of their machines and connect to the
Internet, thus gaining access to e-mail, the WWW, and other Internet services. These companies signed up tens
of millions of new users a year during the late 1990s, completely changing the character of the network from an
academic and military playground to a public utility, much like the telephone system. The number of Internet
users now is unknown, but is certainly hundreds of millions worldwide and will probably hit 1 billion fairly soon.
Architecture of the Internet
In this section we will attempt to give a brief overview of the Internet today. Due to the many mergers between
telephone companies (telcos) and ISPs, the waters have become muddied and it is often hard to tell who is
doing what. Consequently, this description will of necessity be somewhat simpler than reality. The big picture is
shown in Fig. 1-29. Let us examine this figure piece by piece now.
Figure 1-29. Overview of the Internet.
A good place to start is with a client at home. Let us assume our client calls his or her ISP over a dial-up
telephone line, as shown in Fig. 1-29. The modem is a card within the PC that converts the digital signals the
computer produces to analog signals that can pass unhindered over the telephone system. These signals are
transferred to the ISP’s POP (Point of Presence), where they are removed from the telephone system and
injected into the ISP’s regional network. From this point on, the system is fully digital and packet switched. If the
ISP is the local telco, the POP will probably be located in the telephone switching office where the telephone
wire from the client terminates. If the ISP is not the local telco, the POP may be a few switching offices down the
road.
The ISP’s regional network consists of interconnected routers in the various cities the ISP serves. If the packet is
destined for a host served directly by the ISP, the packet is delivered to the host. Otherwise, it is handed over to
the ISP’s backbone operator.
At the top of the food chain are the major backbone operators, companies like AT&T and Sprint. They operate
large international backbone networks, with thousands of routers connected by high-bandwidth fiber optics.
Large corporations and hosting services that run server farms (machines that can serve thousands of Web
pages per second) often connect directly to the backbone. Backbone operators encourage this direct connection
by renting space in what are called carrier hotels, basically equipment racks in the same room as the router to
allow short, fast connections between server farms and the backbone.
If a packet given to the backbone is destined for an ISP or company served by the backbone, it is sent to the
closest router and handed off there. However, many backbones, of varying sizes, exist in the world, so a packet
may have to go to a competing backbone. To allow packets to hop between backbones, all the major backbones
connect at the NAPs discussed earlier. Basically, a NAP is a room full of routers, at least one per backbone. A
LAN in the room connects all the routers, so packets can be forwarded from any backbone to any other
backbone. In addition to being interconnected at NAPs, the larger backbones have numerous direct connections
between their routers, a technique known as private peering. One of the many paradoxes of the Internet is that
ISPs who publicly compete with one another for customers often privately cooperate to do private peering (Metz,
2001).
This ends our quick tour of the Internet. We will have a great deal to say about the individual components and
their design, algorithms, and protocols in subsequent chapters. Also worth mentioning in passing is that some
companies have interconnected all their existing internal networks, often using the same technology as the
Internet. These intranets are typically accessible only within the company but otherwise work the same way as
the Internet.
1.5.2 Connection-Oriented Networks: X.25, Frame Relay, and ATM
Since the beginning of networking, a war has been going on between the people who support connectionless
(i.e., datagram) subnets and the people who support connection-oriented subnets. The main proponents of the
connectionless subnets come from the ARPANET/Internet community. Remember that DoD’s original desire in
funding and building the ARPANET was to have a network that would continue functioning even after multiple
direct hits by nuclear weapons wiped out numerous routers and transmission lines. Thus, fault tolerance was
high on their priority list; billing customers was not. This approach led to a connectionless design in which every
packet is routed independently of every other packet. As a consequence, if some routers go down during a
session, no harm is done as long as the system can reconfigure itself dynamically so that subsequent packets
can find some route to the destination, even if it is different from that which previous packets used.
The connection-oriented camp comes from the world of telephone companies. In the telephone system, a caller
must dial the called party’s number and wait for a connection before talking or sending data. This connection
setup establishes a route through the telephone system that is maintained until the call is terminated. All words
or packets follow the same route. If a line or switch on the path goes down, the call is aborted. This property is
precisely what the DoD did not like about it.
Why do the telephone companies like it then? There are two reasons:
1. Quality of service.
2. Billing.
By setting up a connection in advance, the subnet can reserve resources such as buffer space and router CPU
capacity. If an attempt is made to set up a call and insufficient resources are available, the call is rejected and
the caller gets a kind of busy signal. In this way, once a connection has been set up, the connection will get good
service. With a connectionless network, if too many packets arrive at the same router at the same moment, the
router will choke and probably lose packets. The sender will eventually notice this and resend them, but the
quality of service will be jerky and unsuitable for audio or video unless the network is very lightly loaded.
Needless to say, providing adequate audio quality is something telephone companies care about very much,
hence their preference for connections.
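The congestion argument can be made concrete with a toy model: a router with a finite buffer forwards one packet per time step and silently drops any arrival that finds the buffer full. The buffer size and the bursty arrival pattern below are invented purely for illustration.

import random

# Toy connectionless router: finite buffer, one packet served per time
# step, arrivals that find the buffer full are dropped.
BUFFER_SLOTS = 8
queue = dropped = forwarded = 0

random.seed(1)
for _ in range(10_000):
    for _ in range(random.choice([0, 1, 1, 2, 3])):   # bursty offered load
        if queue < BUFFER_SLOTS:
            queue += 1
        else:
            dropped += 1                              # the router chokes
    if queue:                                         # serve one packet
        queue -= 1
        forwarded += 1

print(f"forwarded={forwarded} dropped={dropped}")

Because the average offered load here exceeds the service rate, a substantial fraction of the packets is lost, which is exactly the behavior that makes uncontrolled datagram service awkward for audio and video.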
The second reason the telephone companies like connection-oriented service is that they are accustomed to
charging for connect time. When you make a long distance call (or even a local call outside North America) you
are charged by the minute. When networks came around, they just automatically gravitated toward a model in
which charging by the minute was easy to do. If you have to set up a connection before sending data, that is
when the billing clock starts running. If there is no connection, they cannot charge for it.
Ironically, maintaining billing records is very expensive. If a telephone company were to adopt a flat monthly rate
with unlimited calling and no billing or record keeping, it would probably save a huge amount of money, despite
the increased calling this policy would generate. Political, regulatory, and other factors weigh against doing this,
however. Interestingly enough, flat rate service exists in other sectors. For example, cable TV is billed at a flat
rate per month, no matter how many programs you watch. It could have been designed with pay-per-view as the
basic concept, but it was not, due in part to the expense of billing (and given the quality of most television, the
embarrassment factor cannot be totally discounted either). Also, many theme parks charge a daily admission fee
for unlimited rides, in contrast to traveling carnivals, which charge by the ride.
That said, it should come as no surprise that all networks designed by the telephone industry have had
connection-oriented subnets. What is perhaps surprising is that the Internet is also drifting in that direction, in
order to provide a better quality of service for audio and video, a subject we will return to in Chap. 5. But now let
us examine some connection-oriented networks.
X.25 and Frame Relay
Our first example of a connection-oriented network is X.25, which was the first public data network. It was
deployed in the 1970s at a time when telephone service was a monopoly everywhere and the telephone
company in each country expected there to be one data network per country: theirs. To use X.25, a computer
first established a connection to the remote computer, that is, placed a telephone call. This connection was given
a connection number to be used in data transfer packets (because multiple connections could be open at the
same time). Data packets were very simple, consisting of a 3-byte header and up to 128 bytes of data. The
header consisted of a 12-bit connection number, a packet sequence number, an acknowledgement number, and
a few miscellaneous bits. X.25 networks operated for about a decade with mixed success.
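A 12-bit connection number plus two small sequence fields fits neatly into 24 bits. The fragment below sketches one plausible packing of such a 3-byte header, assuming 3-bit sequence and acknowledgement numbers and six miscellaneous bits; it is not the exact X.25 bit layout.

def pack_header(conn, seq, ack, misc=0):
    # 12-bit connection number, 3-bit sequence, 3-bit acknowledgement,
    # 6 miscellaneous bits: 24 bits = 3 bytes.  A sketch, not the real
    # X.25 layout.
    assert conn < 4096 and seq < 8 and ack < 8 and misc < 64
    word = (conn << 12) | (seq << 9) | (ack << 6) | misc
    return word.to_bytes(3, "big")

header = pack_header(conn=42, seq=5, ack=3)
print(header.hex())      # 3 header bytes, followed by up to 128 data bytes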
In the 1980s, the X.25 networks were largely replaced by a new kind of network called frame relay. The essence
of frame relay is that it is a connection-oriented network with no error control and no flow control. Because it was
connection-oriented, packets were delivered in order (if they were delivered at all). The properties of in-order
delivery, no error control, and no flow control make frame relay akin to a wide area LAN. Its most important
application is interconnecting LANs at multiple company offices. Frame relay enjoyed a modest success and is
still in use in places today.
Asynchronous Transfer Mode
Yet another, and far more important, connection-oriented network is ATM (Asynchronous Transfer Mode). The
reason for the somewhat strange name is that in the telephone system, most transmission is synchronous
(closely tied to a clock), and ATM is not.
ATM was designed in the early 1990s and launched amid truly incredible hype (Ginsburg, 1996; Goralski, 1995;
Ibe, 1997; Kim et al., 1994; and Stallings, 2000). ATM was going to solve all the world’s networking and
telecommunications problems by merging voice, data, cable television, telex, telegraph, carrier pigeon, tin cans
connected by strings, tom-toms, smoke signals, and everything else into a single integrated system that could do
everything for everyone. It did not happen. In large part, the problems were similar to those we described earlier
concerning OSI, that is, bad timing, technology, implementation, and politics. Having just beaten back the
telephone companies in round 1, many in the Internet community saw ATM as Internet versus the Telcos: the
Sequel. But it really was not, and this time around even diehard datagram fanatics were aware that the Internet’s
quality of service left a lot to be desired. To make a long story short, ATM was much more successful than OSI,
and it is now widely used deep within the telephone system, often for moving IP packets. Because it is now
mostly used by carriers for internal transport, users are often unaware of its existence, but it is definitely alive
and well.
ATM Virtual Circuits
Since ATM networks are connection-oriented, sending data requires first sending a packet to set up the
connection. As the setup packet wends its way through the subnet, all the routers on the path make an entry in
their internal tables noting the existence of the connection and reserving whatever resources are needed for it.
Connections are often called virtual circuits, in analogy with the physical circuits used within the telephone
system. Most ATM networks also support permanent virtual circuits, which are permanent connections between
two (distant) hosts. They are similar to leased lines in the telephone world. Each connection, temporary or
permanent, has a unique connection identifier. A virtual circuit is illustrated in Fig. 1-30.
Figure 1-30. A virtual circuit.
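One way to picture what the setup packet leaves behind is a per-router table that maps an incoming (line, connection identifier) pair onto an outgoing one. The fragment below is a schematic sketch; the line numbers and identifiers are invented.

# Schematic virtual-circuit table for one router.  The setup packet
# installs a mapping from (incoming line, VC id) to (outgoing line, VC id);
# releasing the connection would simply delete the entry.
vc_table = {}

def setup(in_line, in_vc, out_line, out_vc):
    vc_table[(in_line, in_vc)] = (out_line, out_vc)

def forward(in_line, in_vc):
    return vc_table[(in_line, in_vc)]     # where the next cell on this VC goes

setup(in_line=1, in_vc=5, out_line=3, out_vc=11)
print(forward(1, 5))                      # -> (3, 11)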
Once a connection has been established, either side can begin transmitting data. The basic idea behind ATM is
to transmit all information in small, fixed-size packets called cells. The cells are 53 bytes long, of which 5 bytes
are header and 48 bytes are payload, as shown in Fig. 1-31. Part of the header is the connection identifier, so
the sending and receiving hosts and all the intermediate routers can tell which cells belong to which connections.
This information allows each router to know how to route each incoming cell. Cell routing is done in hardware, at
high speed. In fact, the main argument for having fixed-size cells is that it is easy to build hardware routers to
handle short, fixed-length cells. Variable-length IP packets have to be routed by software, which is a slower
process. Another plus of ATM is that the hardware can be set up to copy one incoming cell to multiple output
lines, a property that is required for handling a television program that is being broadcast to many receivers.
Finally, small cells do not block any line for very long, which makes guaranteeing quality of service easier.
Figure 1-31. An ATM cell.
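As a rough sketch of the fixed format, the fragment below assembles a 53-byte cell from a connection identifier and up to 48 bytes of payload. The header layout is deliberately simplified; a real ATM header splits the identifier into VPI and VCI fields and adds payload type, cell loss priority, and header checksum bits.

def make_cell(conn_id, payload):
    # 5-byte header + 48-byte payload = 53 bytes.  Simplified header:
    # 4 bytes of connection identifier plus one spare byte.
    assert len(payload) <= 48
    header = conn_id.to_bytes(4, "big") + b"\x00"
    return header + payload.ljust(48, b"\x00")        # pad payload to 48 bytes

cell = make_cell(conn_id=7, payload=b"hello")
print(len(cell))         # 53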
All cells follow the same route to the destination. Cell delivery is not guaranteed, but their order is. If cells 1 and 2
are sent in that order, then if both arrive, they will arrive in that order, never first 2 then 1. But either or both of
them can be lost along the way. It is up to higher protocol levels to recover from lost cells. Note that although this
guarantee is not perfect, it is better than what the Internet provides. There, packets can not only be lost but also delivered out of order. ATM, in contrast, guarantees never to deliver cells out of order.
ATM networks are organized like traditional WANs, with lines and switches (routers). The most common speeds
for ATM networks are 155 Mbps and 622 Mbps, although higher speeds are also supported. The 155-Mbps
speed was chosen because this is about what is needed to transmit high definition television. The exact choice
of 155.52 Mbps was made for compatibility with AT&T’s SONET transmission system, something we will study in
Chap. 2. The 622 Mbps speed was chosen so that four 155-Mbps channels could be sent over it.
The ATM Reference Model
ATM has its own reference model, different from the OSI model and also different from the TCP/IP model. This
model is shown in Fig. 1-32. It consists of three layers, the physical, ATM, and ATM adaptation layers, plus
whatever users want to put on top of that.
Figure 1-32. The ATM reference model.
The physical layer deals with the physical medium: voltages, bit timing, and various other issues. ATM does not
prescribe a particular set of rules but instead says that ATM cells can be sent on a wire or fiber by themselves,
but they can also be packaged inside the payload of other carrier systems. In other words, ATM has been
designed to be independent of the transmission medium.
The ATM layer deals with cells and cell transport. It defines the layout of a cell and tells what the header fields
mean. It also deals with establishment and release of virtual circuits. Congestion control is also located here.
Because most applications do not want to work directly with cells (although some may), a layer above the ATM
layer has been defined to allow users to send packets larger than a cell. The ATM interface segments these
packets, transmits the cells individually, and reassembles them at the other end. This layer is the AAL (ATM
Adaptation Layer).
Unlike the earlier two-dimensional reference models, the ATM model is defined as being three-dimensional, as
shown in Fig. 1-32. The user plane deals with data transport, flow control, error correction, and other user
functions. In contrast, the control plane is concerned with connection management. The layer and plane
management functions relate to resource management and interlayer coordination.
The physical and AAL layers are each divided into two sublayers, one at the bottom that does the work and a
convergence sublayer on top that provides the proper interface to the layer above it. The functions of the layers
and sublayers are given in Fig. 1-33.
Figure 1-33. The ATM layers and sublayers, and their functions.
The PMD (Physical Medium Dependent) sublayer interfaces to the actual cable. It moves the bits on and off and
handles the bit timing. For different carriers and cables, this layer will be different.
The other sublayer of the physical layer is the TC (Transmission Convergence) sublayer. When cells are
transmitted, the TC layer sends them as a string of bits to the PMD layer. Doing this is easy. At the other end,
the TC sublayer gets a pure incoming bit stream from the PMD sublayer. Its job is to convert this bit stream into
a cell stream for the ATM layer. It handles all the issues related to telling where cells begin and end in the bit
stream. In the ATM model, this functionality is in the physical layer. In the OSI model and in pretty much all other
networks, the job of framing, that is, turning a raw bit stream into a sequence of frames or cells, is the data link
layer’s task.
As we mentioned earlier, the ATM layer manages cells, including their generation and transport. Most of the
interesting aspects of ATM are located here. It is a mixture of the OSI data link and network layers; it is not split
into sublayers.
The AAL layer is split into a SAR (Segmentation And Reassembly) sublayer and a CS (Convergence Sublayer).
The lower sublayer breaks up packets into cells on the transmission side and puts them back together again at
the destination. The upper sublayer makes it possible to have ATM systems offer different kinds of services to
different applications (e.g., file transfer and video on demand have different requirements concerning error
handling, timing, etc.).
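The segmentation and reassembly performed by the SAR sublayer amounts to slicing a packet into 48-byte pieces and gluing them back together at the far end. The fragment below shows only that slicing; the padding, length, and checksum fields of the real AAL protocols are omitted.

def segment(packet, cell_payload=48):
    # Slice a packet into 48-byte pieces, one per cell.
    return [packet[i:i + cell_payload]
            for i in range(0, len(packet), cell_payload)]

def reassemble(pieces):
    return b"".join(pieces)

pieces = segment(b"x" * 130)              # payloads of 48, 48, and 34 bytes
assert reassemble(pieces) == b"x" * 130
print([len(p) for p in pieces])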
As it is probably mostly downhill for ATM from now on, we will not discuss it further in this book. Nevertheless,
since it has a substantial installed base, it will probably be around for at least a few more years. For more
information about ATM, see (Dobrowski and Grise, 2001; and Gadecki and Heckart, 1997).
1.5.3 Ethernet
Both the Internet and ATM were designed for wide area networking. However, many companies, universities,
and other organizations have large numbers of computers that must be connected. This need gave rise to the
local area network. In this section we will say a little bit about the most popular LAN, Ethernet.
The story starts out in pristine Hawaii in the early 1970s. In this case, "pristine" can be interpreted as "not having a working telephone system." While not being interrupted by the phone all day long makes life more pleasant for
vacationers, it did not make life more pleasant for researcher Norman Abramson and his colleagues at the
University of Hawaii who were trying to connect users on remote islands to the main computer in Honolulu.
Stringing their own cables under the Pacific Ocean was not in the cards, so they looked for a different solution.
The one they found was short-range radios. Each user terminal was equipped with a small radio having two
frequencies: upstream (to the central computer) and downstream (from the central computer). When the user
wanted to contact the computer, it just transmitted a packet containing the data in the upstream channel. If no
one else was transmitting at that instant, the packet probably got through and was acknowledged on the
downstream channel. If there was contention for the upstream channel, the terminal noticed the lack of
acknowledgement and tried again. Since there was only one sender on the downstream channel (the central
computer), there were never collisions there. This system, called ALOHANET, worked fairly well under
conditions of low traffic but bogged down badly when the upstream traffic was heavy.
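The behavior of an ALOHANET terminal can be sketched as a simple retry loop: transmit, wait for an acknowledgement, and on silence wait a random time and try again. In the fragment below, transmit() and got_ack() are placeholders standing in for the radio hardware, and the timing constants are invented.

import random, time

def aloha_send(transmit, got_ack, max_tries=10):
    # Pure-ALOHA style retry loop: send the packet, and if no
    # acknowledgement arrives, wait a random time and try again.
    for _ in range(max_tries):
        transmit()
        if got_ack():
            return True
        time.sleep(random.uniform(0.0, 0.5))    # random wait before retrying
    return False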
About the same time, a student named Bob Metcalfe got his bachelor’s degree at M.I.T. and then moved up the
river to get his Ph.D. at Harvard. During his studies, he was exposed to Abramson’s work. He became so
interested in it that after graduating from Harvard, he decided to spend the summer in Hawaii working with
Abramson before starting work at Xerox PARC (Palo Alto Research Center). When he got to PARC, he saw that
the researchers there had designed and built what would later be called personal computers. But the machines
were isolated. Using his knowledge of Abramson’s work, he, together with his colleague David Boggs, designed
and implemented the first local area network (Metcalfe and Boggs, 1976).
They called the system Ethernet after the luminiferous ether, through which electromagnetic radiation was once
thought to propagate. (When the 19th century British physicist James Clerk Maxwell discovered that
electromagnetic radiation could be described by a wave equation, scientists assumed that space must be filled
with some ethereal medium in which the radiation was propagating. Only after the famous Michelson-Morley
experiment in 1887 did physicists discover that electromagnetic radiation could propagate in a vacuum.)
The transmission medium here was not a vacuum, but a thick coaxial cable (the ether) up to 2.5 km long (with
repeaters every 500 meters). Up to 256 machines could be attached to the system via transceivers screwed onto
the cable. A cable with multiple machines attached to it in parallel is called a multidrop cable. The system ran at
2.94 Mbps. A sketch of its architecture is given in Fig. 1-34. Ethernet had a major improvement over
ALOHANET: before transmitting, a computer first listened to the cable to see if someone else was already
transmitting. If so, the computer held back until the current transmission finished. Doing so avoided interfering
with existing transmissions, giving a much higher efficiency. ALOHANET did not work like this because it was
impossible for a terminal on one island to sense the transmission of a terminal on a distant island. With a single
cable, this problem does not exist.
Figure 1-34. Architecture of the original Ethernet.
Despite the computer listening before transmitting, a problem still arises: what happens if two or more computers
all wait until the current transmission completes and then all start at once? The solution is to have each
computer listen during its own transmission and, if it detects interference, jam the ether to alert all senders, then back off and wait a random time before retrying. If a second collision happens, the random waiting time is doubled, and so on, to spread out the competing transmissions and give one of them a chance to go first.
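The rule described in the last two paragraphs is, in essence, carrier sensing with collision detection and binary exponential backoff. The fragment below sketches the control flow; the three callbacks stand in for the hardware, and the slot time, backoff cap, and attempt limit are illustrative values rather than anything specified in the text.

import random, time

SLOT_TIME = 51.2e-6       # illustrative slot time, in seconds

def ethernet_send(channel_idle, transmit, collision_detected, max_attempts=16):
    # Listen before transmitting; on a collision, back off a random
    # number of slots, doubling the range after each successive collision.
    for attempt in range(1, max_attempts + 1):
        while not channel_idle():               # defer to the current sender
            pass
        transmit()
        if not collision_detected():
            return True                         # the frame got through
        k = min(attempt, 10)                    # cap the doubling
        time.sleep(random.randrange(2 ** k) * SLOT_TIME)
    return False                                # give up after repeated collisions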
The Xerox Ethernet was so successful that DEC, Intel, and Xerox drew up a standard in 1978 for a 10-Mbps
Ethernet, called the DIX standard. With two minor changes, the DIX standard became the IEEE 802.3 standard
in 1983.
Unfortunately for Xerox, it already had a history of making seminal inventions (such as the personal computer)
and then failing to commercialize on them, a story told in Fumbling the Future (Smith and Alexander, 1988).
When Xerox showed little interest in doing anything with Ethernet other than helping standardize it, Metcalfe
formed his own company, 3Com, to sell Ethernet adapters for PCs. It has sold over 100 million of them.
Ethernet continued to develop and is still developing. New versions at 100 Mbps, 1000 Mbps, and still higher
have come out. Also the cabling has improved, and switching and other features have been added. We will
discuss Ethernet in detail in Chap. 4.
In passing, it is worth mentioning that Ethernet (IEEE 802.3) is not the only LAN standard. The committee also
standardized a token bus (802.4) and a token ring (802.5). The need for three more-or-less incompatible
standards has little to do with technology and everything to do with politics. At the time of standardization,
General Motors was pushing a LAN in which the topology was the same as Ethernet (a linear cable) but
computers took turns in transmitting by passing a short packet called a token from computer to computer. A
computer could only send if it possessed the token, thus avoiding collisions. General Motors announced that this
scheme was essential for manufacturing cars and was not prepared to budge from this position. This
announcement notwithstanding, 802.4 has basically vanished from sight.
Similarly, IBM had its own favorite: its proprietary token ring. The token was passed around the ring and
whichever computer held the token was allowed to transmit before putting the token back on the ring. Unlike
802.4, this scheme, standardized as 802.5, is still in use at some IBM sites, but virtually nowhere outside of IBM
sites. Work is progressing on a gigabit version (802.5v), but it seems unlikely that it will ever catch up
with Ethernet. In short, there was a war between Ethernet, token bus, and token ring, and Ethernet won, mostly
because it was there first and the challengers were not as good.
1.5.4 Wireless LANs: 802.11
Almost as soon as notebook computers appeared, many people had a dream of walking into an office and
magically having their notebook computer be connected to the Internet. Consequently, various groups began
working on ways to accomplish this goal. The most practical approach is to equip both the office and the
notebook computers with short-range radio transmitters and receivers to allow them to communicate. This work
rapidly led to wireless LANs being marketed by a variety of companies.
The trouble was that no two of them were compatible. This proliferation of standards meant that a computer
equipped with a brand X radio would not work in a room equipped with a brand Y base station. Finally, the
industry decided that a wireless LAN standard might be a good idea, so the IEEE committee that standardized
the wired LANs was given the task of drawing up a wireless LAN standard. The standard it came up with was
named 802.11. A common slang name for it is WiFi. It is an important standard and deserves respect, so we will
call it by its proper name, 802.11.
The proposed standard had to work in two modes:
1. In the presence of a base station.
2. In the absence of a base station.
In the former case, all communication was to go through the base station, called an access point in 802.11
terminology. In the latter case, the computers would just send to one another directly. This mode is now
sometimes called ad hoc networking. A typical example is two or more people sitting down together in a room
not equipped with a wireless LAN and having their computers just communicate directly. The two modes are
illustrated in Fig. 1-35.
Figure 1-35. (a) Wireless networking with a base station. (b) Ad hoc networking.
The first decision was the easiest: what to call it. All the other LAN standards had numbers like 802.1, 802.2,
802.3, up to 802.10, so the wireless LAN standard was dubbed 802.11. The rest was harder.
In particular, some of the many challenges that had to be met were: finding a suitable frequency band that was
available, preferably worldwide; dealing with the fact that radio signals have a finite range; ensuring that users’
privacy was maintained; taking limited battery life into account; worrying about human safety (do radio waves
cause cancer?); understanding the implications of computer mobility; and finally, building a system with enough
bandwidth to be economically viable.
At the time the standardization process started (mid-1990s), Ethernet had already come to dominate local area
networking, so the committee decided to make 802.11 compatible with Ethernet above the data link layer. In
particular, it should be possible to send an IP packet over the wireless LAN the same way a wired computer sent
an IP packet over Ethernet. Nevertheless, in the physical and data link layers, several inherent differences with
Ethernet exist and had to be dealt with by the standard.
First, a computer on Ethernet always listens to the ether before transmitting. Only if the ether is idle does the
computer begin transmitting. With wireless LANs, that idea does not work so well. To see why, examine Fig. 1-
36. Suppose that computer A is transmitting to computer B, but the radio range of A’s transmitter is too short to
reach computer C. If C wants to transmit to B it can listen to the ether before starting, but the fact that it does not
hear anything does not mean that its transmission will succeed. The 802.11 standard had to solve this problem.
Figure 1-36. The range of a single radio may not cover the entire system.
The second problem that had to be solved is that a radio signal can be reflected off solid objects, so it may be
received multiple times (along multiple paths). This interference results in what is called multipath fading.
The third problem is that a great deal of software is not aware of mobility. For example, many word processors
have a list of printers that users can choose from to print a file. When the computer on which the word processor
runs is taken into a new environment, the built-in list of printers becomes invalid.
The fourth problem is that if a notebook computer is moved away from the ceiling-mounted base station it is
using and into the range of a different base station, some way of handing it off is needed. Although this problem
occurs with cellular telephones, it does not occur with Ethernet and needed to be solved. In particular, the
network envisioned consists of multiple cells, each with its own base station, but with the base stations
connected by Ethernet, as shown in Fig. 1-37. From the outside, the entire system should look like a single
Ethernet. The connection between the 802.11 system and the outside world is called a portal.
Figure 1-37. A multicell 802.11 network.
After some work, the committee came up with a standard in 1997 that addressed these and other concerns. The
wireless LAN it described ran at either 1 Mbps or 2 Mbps. Almost immediately, people complained that it was too
slow, so work began on faster standards. A split developed within the committee, resulting in two new standards
in 1999. The 802.11a standard uses a wider frequency band and runs at speeds up to 54 Mbps. The 802.11b
standard uses the same frequency band as 802.11, but uses a different modulation technique to achieve 11
Mbps. Some people see this as psychologically important since 11 Mbps is faster than the original wired
Ethernet. It is likely that the original 1-Mbps 802.11 will die off quickly, but it is not yet clear which of the new
standards will win out.
To make matters even more complicated than they already were, the 802 committee has come up with yet
another variant, 802.11g, which uses the modulation technique of 802.11a but the frequency band of 802.11b.
We will come back to 802.11 in detail in Chap. 4.
