- “Noise” damages (corrupts) the messages; we would like to be able to communicate reliably in the presence of noise
- Establishing and maintaining physical communication lines is costly; we would like to be able to connect arbitrary senders and receivers while keeping the economic cost of network resources to a minimum
- Time is always an issue in information systems, as it is in life generally; we would like to be able to provide expedited delivery, particularly for messages with short deadlines
Figure 1-1: The customer cares about the visible network properties that can be controlled
by adjusting the network parameters.
Figure 1-1 illustrates what the customer usually cares about and what the network engineer can
do about it. The visible network variables (“symptoms”), easily understood by a non-technical
person, include:
Delivery: The network must deliver data to the correct destination(s). Data must be received only
by the intended recipients and not by others.
Correctness: Data must be delivered accurately, because distorted data is generally unusable.
Timeliness: Data must be delivered before they need to be put to use; otherwise, they are useless.
Fault tolerance and cost effectiveness are important characteristics of networks. For some of these
parameters, the acceptable value is a matter of degree, judged subjectively. Our focus will be on
network performance (objectively measurable characteristics) and quality of service
(psychological determinants).
Limited resources can become overbooked, resulting in message loss. A network should be able
to deliver messages even if some links experience outages.
The tunable parameters (or “knobs”) for a network include: network topology, communication
protocols, architecture, components, and the physical medium (connection lines) over which the
signal is transmitted.
- Connection topology: a completely connected graph versus shared links with multiplexing and demultiplexing. In 1964, Paul Baran studied the theoretically best architecture for survivability of data networks (Figure 1-2). He considered only the network graph topology and assigned no qualities to its nodes and links. He found that the distributed-topology network, which resembles a fisherman’s net, Figure 1-2(c), has the greatest resilience to element (node or link) failures. Figure 1-3 shows the actual topology of the entire Internet (in 1999). This topology evolved over several decades through incremental contributions from many independent organizations, without a “grand plan” to guide the overall design. In a sense, one could say that the Internet topology evolved in a “self-organizing” manner. Interestingly, it resembles the decentralized-topology network with many hubs (Figure 1-2(b)) more than the distributed topology (Figure 1-2(c)).
- Network architecture: what part of the network is a fixed infrastructure as opposed to being ad hoc built for a temporary need.
- Component characteristics: reliability and performance of individual hardware components (nodes and links). Faster and more reliable components are also more costly. When a network node (called a switch or router) relays messages from a faster to a slower link, congestion and a waiting-queue buildup may occur under heavy traffic. In practice, all queues have a limited-capacity “waiting room,” so loss occurs when messages arrive at a full queue.
- Performance metrics: success rate of transmitted packets (or, conversely, the packet loss rate), average delay of packet delivery, and delay variability (also known as jitter).
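Baran’s survivability comparison can be sketched in a few lines of code. The snippet below is a toy model, not Baran’s original analysis: a square grid stands in for the distributed “fishnet” topology of Figure 1-2(c) and a star (one hub) for the centralized topology of Figure 1-2(a); nodes fail independently, and survivability is measured as the fraction of surviving nodes that remain mutually reachable.

```python
import random

def grid_graph(n):
    """Distributed ("fishnet") topology: an n-by-n grid of nodes."""
    nodes = {(r, c) for r in range(n) for c in range(n)}
    edges = set()
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                edges.add(((r, c), (r, c + 1)))
            if r + 1 < n:
                edges.add(((r, c), (r + 1, c)))
    return nodes, edges

def star_graph(n):
    """Centralized topology: a single hub connected to n leaf nodes."""
    nodes = {"hub"} | set(range(n))
    edges = {("hub", i) for i in range(n)}
    return nodes, edges

def largest_component(nodes, edges):
    """Size of the largest connected component among the given nodes."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            stack.extend(w for w in adj[v] if w not in seen and not seen.add(w))
        best = max(best, size)
    return best

def survivability(nodes, edges, fail_fraction, trials=200, seed=0):
    """Average fraction of surviving nodes still in the largest connected piece."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        survivors = {v for v in nodes if rng.random() > fail_fraction}
        if survivors:
            total += largest_component(survivors, edges) / len(survivors)
    return total / trials
```

Comparing `survivability` for the two graphs at a moderate failure rate illustrates Baran’s point: a single hub failure disconnects every pair of leaves in the star, while the grid offers many alternate paths around failed elements.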
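The queue buildup and the performance metrics above can be tied together in a small discrete-time simulation. This is an illustrative sketch with made-up parameters: packets arrive on a fast input link with probability `arrival_prob` per step, the slower output link completes a transmission with probability `service_prob` per step, and the buffer drops arrivals when full. Jitter is taken here simply as the standard deviation of delivery delays (other definitions exist, e.g. the smoothed delay-difference estimator of RTP).

```python
from collections import deque
import random
import statistics

def simulate_router(arrival_prob, service_prob, buffer_size, steps, seed=0):
    """Toy router queue between a fast input link and a slower output link.

    Returns the list of per-packet delivery delays (in time steps)
    and the packet loss rate.
    """
    rng = random.Random(seed)
    queue = deque()            # holds arrival times of waiting packets
    arrived = dropped = 0
    delays = []
    for t in range(steps):
        # One arrival attempt per step (fast input link).
        if rng.random() < arrival_prob:
            arrived += 1
            if len(queue) < buffer_size:
                queue.append(t)
            else:
                dropped += 1   # buffer ("waiting room") full: packet lost
        # One departure attempt per step (slower output link).
        if queue and rng.random() < service_prob:
            delays.append(t - queue.popleft())
    loss_rate = dropped / arrived if arrived else 0.0
    return delays, loss_rate

delays, loss_rate = simulate_router(arrival_prob=0.9, service_prob=0.5,
                                    buffer_size=8, steps=10_000)
avg_delay = statistics.mean(delays)
jitter = statistics.stdev(delays)  # one common proxy for delay variability
```

Because packets arrive faster than they can leave, the queue stays near capacity, and the run exhibits all three symptoms at once: lost packets, long average delay, and nonzero jitter. Enlarging the buffer trades loss for delay rather than eliminating the problem.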
- Different applications (data/voice/multimedia) have different requirements: sensitive to loss vs. sensitive to delay/jitter.
- Heterogeneity: Diverse software and hardware of network components need to coexist and interoperate. The diversity results from different user needs and economic capabilities, and because installed infrastructure tends to live long enough to become mixed with several newer generations of technologies.
- Autonomy: Different parts of the Internet are controlled by independent organizations. Even a sub-network controlled by a single multinational organization, such as IBM or Coca-Cola, may cross many state borders. These independent organizations are generally in competition with each other and do not necessarily provide one another the most accurate information about their own networks. The implication is that the network engineer can effectively control only a small part of the global network. As for the rest, the engineer will be able to receive only limited information about the characteristics of others’ autonomous sub-networks. Any local solutions must be developed based on that limited information about the rest of the global network.
- Scalability: Although a global network like the Internet consists of many autonomous domains, there is a need for standards that prevent the network from becoming fragmented into many noninteroperable pieces (“islands”). Solutions are needed that will ensure smooth growth of the network as many new devices and autonomous domains are added. Again, information about available network resources is either impossible to obtain in real time, or may be proprietary to the domain operator.