
Understanding Communication Protocols and IP Addressing in Computer Networks

An overview of communication protocols and IP addressing in computer networks. It covers the concepts of computing and communication protocols, the need for communication protocols, and the differences between connection-oriented and connectionless protocols. The document also discusses basic packet transfer protocols like IP, TCP, and UDP, as well as file transfer and remote login application protocols like FTP and Telnet. Additionally, it touches on convergent cell switching in ATM networks and fast routing technique using labels: MPLS. IP addresses and their versions (IPv4 and IPv6) are also discussed.

UNIT 9 COMMUNICATION PROTOCOLS
AND NETWORK ADDRESSING
Structure
9.0 Objectives
9.1 Introduction
9.2 What are Protocols?
9.3 Computing Protocols
9.4 Communication Protocols: General Concepts
9.5 Common Communication Protocols
9.6 Basic Communication Protocols: IP, UDP, TCP
9.6.1 Internet Protocol (IP)
9.6.2 User Datagram Protocol (UDP)
9.6.3 Transmission Control Protocol (TCP)
9.7 Client-Server Architecture
9.8 Application Level Communication Protocols: FTP, Telnet
9.8.1 File Transfer Protocol (FTP)
9.8.2 Remote Login (Telnet)
9.9 Switching Level Convergence Protocol: ATM
9.10 Multi Protocol Label Switching: MPLS
9.11 Telephone and Mobile Numbering
9.11.1 Landline Telephone Numbering
9.11.2 Mobile Phone Numbering
9.12 Number Portability
9.13 IP Addressing: IPv4, IPv6
9.14 Web Communication Protocols: HTTP, WAP, LTP
9.15 Summary
9.16 Answers to Self-check Exercises
9.17 Keywords
9.18 References and Further Reading
9.0 OBJECTIVES
After going through this Unit, you will be able to understand and appreciate:
- What are protocols;
- Difference between computing and communication protocols;
- Need for communication protocols;


- Difference between connection-oriented and connectionless protocols;
- Basic packet transfer protocols like IP, TCP and UDP;
- The most widely used network computing architecture: Client-Server;
- File transfer and remote login application protocols like FTP and Telnet;
- Convergent cell switching in ATM networks;
- Fast routing technique using labels: MPLS;
- Numbering schemes used for landline and mobile phones;
- How networked computers are addressed the world over;
- Details of the IPv4 addressing scheme;
- IPv6 features, briefly; and
- Web access protocols for both wired and wireless networks.

9.1 INTRODUCTION

The quest for new knowledge is the central theme of human existence. All of us, whether we realise it or not, are in the process of acquiring new knowledge all the time. When we ask a question, we are seeking knowledge. When we answer a query, we give information to the person posing the question. When a person assimilates the given information, we say that the person has acquired knowledge. Knowledge is spread via information that is communicated from one person to another in some form: oral, written, etc. Thus, knowledge, information and information communication are three entities that are closely inter-related.

It is often said that we are in the information age. Over the last six decades or so, information in the world has been growing at an exponential rate, i.e. doubling every 10 years. Information Communication Technology (ICT) has grown by leaps and bounds in the last 30-40 years. Instant transfer of information from one part of the world to any other part is a reality today. Underlying this development is the convergence of computer and communication technologies. This convergence process started in the late 1960s and has led to the development of the worldwide computer network that is now popularly known as the Internet.

A large number of home and office local area networks (LANs) and innumerable personal computers all over the world have been interconnected to form the Internet. Hence, it is aptly said that the Internet is a network of networks. Information travels in the form of data packets on the Internet and hence it is also called a data network. Data packets are of a fixed length, say 2048 bytes, i.e. 2^11 bytes. Long messages are broken into as many packets as required before transmission. Because of packet-based transmission, the Internet also carries the nomenclature Packet Data Network (PDN). Since the Internet is an open public network, another related nomenclature that is sometimes used is Packet Switched Public Data Network (PSPDN).

The Internet is not limited in its presence to the land; it is also in ships at sea and in planes in the air. The United Nations today has 192 countries of the world as its members. Almost all these countries have Internet connections in place. About 200,000 LANs are connected to the Internet. Over 1.5 billion people, i.e. a quarter of the world's population, have access to the Internet. With the evolution of the Internet, our life-style is changing. A number of our day-to-day activities are being carried out on the Internet. Clearly, society is evolving towards a networked community with electronic information as the central commodity.

In Section 9.3, we briefly discuss computing protocols and study communication protocols in greater detail in later sections.

9.3 COMPUTING PROTOCOLS

Computing and communication protocols together define sets of rules and procedures that govern all the information management functions. With electronic information being the central commodity in NEIS, information management functions become the core of technological capability in networks. There are seven functions of electronic information management that are important:

  1. Generation
  2. Acquisition
  3. Storage
  4. Retrieval
  5. Processing
  6. Transmission
  7. Distribution

Computing and communication protocols govern all these functions. In general, information is generated by human thought processes, human acts and happenings in nature. Human intellectual activity is creative and intuitive and hence may not be amenable to protocols. Whether technology generates information is a debatable point. When data is processed in a computer, the output is considered information. In that sense, it may be said that computers generate information. But the basic data comes from nature or human activity. However, machine generation of information can be governed by protocols.

Among the other functions, storage, retrieval and processing fall in the realm of computing protocols. The remaining functions, viz. acquisition, transmission and distribution, fall in the class of communication protocols. Transmission and distribution functions may be collectively called information dissemination. Transmission refers to bulk transfer between two main points. Distribution refers to transfer to end points like user computers or terminals.

Computing protocols are a relatively recent development. As you are aware, information processing, storage and retrieval are functions performed by application processes. You are familiar with applications like word processing, spreadsheets, PowerPoint presentations and database management. Computing protocols deal with information storage, retrieval and exchange among these applications. For example, how do we import information from word-processor files into spreadsheets, or vice versa? Or how do we import information from word-processor files into presentation slides, and vice versa? Computing protocols are being evolved to make such imports fairly easy. Some of the well-known computing protocol functions include message passing, process synchronisation and process switching, simple object access and object communication, and data portability.

The idea of computing protocols is to encourage what is known as open systems design. Open systems follow industry standards and are capable of running on a variety of platforms. For example, OpenOffice is an innovation in computing protocols. Many Java products use open computing protocols.

Microsoft has recently announced a number of open computing protocols and has made them available in the public domain. Open computing protocols offer greater opportunity and choice for freelance developers as they conform to industry standards. In contrast, closed protocols are proprietary in nature and are vendor-specific.

Interoperability in computer systems is the main goal of computing protocols. By interoperability we mean the ability of different applications to interwork with each other using common data. The user does not have to reformat and copy data from one application to another. Interoperability principles include:
- Ensuring open connections
- Promoting data portability
- Enhancing support for industry standards
- Driving an open approach across competitors.

Although the open approach is currently limited to application packages from the same vendor, computing protocols are increasingly addressing issues of interoperability across different vendor products and platforms. The interoperability concept is also applicable to networked computers.

Self-Check Exercise
Note: i) Write your answers in the space given below.
      ii) Check your answers with the answers given at the end of this Unit.

  1. Differentiate between computing and communication protocols.

9.4 COMMUNICATION PROTOCOLS: GENERAL

CONCEPTS

Communication protocols deal with all aspects of the communication functions that are required for information exchange among computers in a network or across networks. They are designed especially in the context of the Internet. We have already discussed in Unit 8 the protocols that are used for information exchange in LANs. On the Internet, communication-related functions include:
- Breaking up messages into packets
- Packet sequencing and reassembly (a small sketch of these first two functions follows this list)
- Synchronisation or handshaking for information exchange
- Signalling: start and end of messages
- Switching: routing or forwarding of messages towards their respective destinations
- Connectionless and connection-oriented transfers
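The first two functions above can be illustrated with a minimal sketch in Python. The packet size and the (sequence number, payload) layout below are assumptions chosen for readability, not the format of any real protocol.

```python
# Minimal sketch of packetisation and reassembly. The payload size and the
# (sequence number, payload) packet layout are illustrative assumptions,
# not the formats used by any real protocol.
import random

PAYLOAD_SIZE = 8  # bytes of payload per packet (real networks use far larger sizes)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Break a long message into numbered packets."""
    return [
        (seq, message[pos:pos + PAYLOAD_SIZE])
        for seq, pos in enumerate(range(0, len(message), PAYLOAD_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the message even if packets arrive out of sequence."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Long messages are broken into packets before transmission."
packets = packetize(message)
random.shuffle(packets)                 # simulate out-of-order arrival
assert reassemble(packets) == message
```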

At the receiving end, the envelope (encapsulated packet) is discarded and the original information obtained. This is termed de-capsulation.

Format conversion: Sometimes, when moving packets between incompatible networks, packet formats may have to be changed. An example is moving packets between Ethernet and Token Ring LANs, which calls for format conversion.

Error handling: Errors occur in data transmission. These have to be detected and corrective action taken. Error-detecting codes are used to detect errors. There are two basic techniques available for error recovery. In the first, when an error is detected in a packet, the packet is discarded and the sending station is requested to retransmit it. This technique is called automatic repeat request (ARQ). A handshake mechanism is used to request retransmission of the packet. The other is to use forward error correction (FEC) codes that are capable of both detecting and correcting errors at the receiving end.

Sessions: A variety of tasks are performed on networks by establishing sessions between a server and a client computer. Online search of databases, remote job entry, remote login to a time-sharing system and file transfer between two systems are examples of different types of sessions. Different sessions have different requirements. For example, a dialogue may be two-way simultaneous or one-way alternating. A large file transfer session may call for establishing roll-back points to recover from connection failures. Session protocols perform the functions required to establish, successfully execute and properly terminate different types of sessions.

Packet loss: It is not unusual to experience unexpected loss of connections in networks. You might have had this experience while accessing the Internet. Some Internet browsers, including Microsoft's Internet Explorer, have provision to resume a session that was terminated unexpectedly, say due to a power failure. Many communication protocols have features to recover from unexpected connection failures. This is particularly so in session-related protocols.

In this section, we have studied the general features that are required in communication protocols. In the next section, we look at the details of some of the commonly used communication protocols.
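To make the ARQ idea concrete, here is a toy sketch in Python. The one-byte additive checksum, the transmit function and the corruption flag are illustrative assumptions standing in for a real error-detecting code and a real transmission channel.

```python
# Toy sketch of automatic repeat request (ARQ): the receiver checks a
# checksum and, on mismatch, discards the packet and asks for retransmission.
def checksum(payload: bytes) -> int:
    """A simple additive checksum; real protocols use stronger codes (e.g. CRCs)."""
    return sum(payload) % 256

def transmit(payload: bytes, corrupt: bool = False) -> tuple[bytes, int]:
    """Deliver the payload and its checksum; optionally simulate a bit error."""
    frame = bytearray(payload)
    if corrupt:
        frame[0] ^= 0xFF            # flip the bits of the first byte in transit
    return bytes(frame), checksum(payload)

def receive_with_arq(payload: bytes) -> bytes:
    data, check = transmit(payload, corrupt=True)   # first attempt is damaged
    while checksum(data) != check:                  # error detected: discard and
        data, check = transmit(payload)             # request a retransmission
    return data

assert receive_with_arq(b"hello") == b"hello"
```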

Self-Check Exercise
Note: i) Write your answers in the space given below.
      ii) Check your answers with the answers given at the end of this Unit.

  1. We use signalling as a matter of course in our daily life. Give any four examples of such signalling.
  2. Is SMS a connectionless or a connection-oriented service?
  3. When you are typing on a computer terminal, you make a mistake and then correct it. Which one of the techniques, ARQ or FEC, are you using? Give reasons.
  4. Many word processing packages have auto-correct features. Which one of the techniques, ARQ or FEC, is used there? Give reasons.


9.5 COMMON COMMUNICATION PROTOCOLS

The field of ICT is replete with protocols. Hundreds of protocols have been defined for various purposes. Many are very specialised, some are rarely used and some are defunct. You are already familiar with computing and communication protocols. There are other classes of protocols, such as data (bits and bytes) transmission protocols, routing protocols, access protocols, service protocols and application protocols. As a user of networks, you need to be concerned with only about a dozen protocols. This is much like a language dictionary having over 100,000 words while the average vocabulary of a person is about 4,000 words. Extensively used communication protocols include:
- Internet Protocol (IP)
- User Datagram Protocol (UDP)
- Transmission Control Protocol (TCP)
- File Transfer Protocol (FTP)
- Remote Login Protocol (Telnet)
- Internet Control Message Protocol (ICMP)
- Dynamic Host Configuration Protocol (DHCP)
- Post Office Protocol 3 (POP3)
- Simple Mail Transfer Protocol (SMTP)
- Internet Message Access Protocol (IMAP)
- Cell Switching Protocols (ATM)
- Multi Protocol Label Switching (MPLS)
- HyperText Transfer Protocol (HTTP)
- Wireless Application Protocol (WAP)
- Lightweight Transport Protocol (LTP)
- General Packet Radio Service (GPRS)
- Simple Network Management Protocol (SNMP)

Of the above, the first three protocols, viz. IP, UDP and TCP, are basic protocols used by a variety of Internet services and applications. We discuss them in Section 9.6. FTP and Telnet are the most extensively used service or application level Internet protocols. A large number of applications on the Internet use what is known as Client-Server architecture. FTP, Telnet and web browsers also use this architecture. We present this architecture in Section 9.7. We discuss FTP and Telnet in Section 9.8.

Routers use ICMP to report any abnormal event on the network. An example of an abnormal event that a router may discover is the outage of the network in some segment. Such an event may be reported to all other routers on the network as well as to the network management centre. ICMP is also used to monitor the functioning of the Internet. ICMP, however, is not discussed in this course. DHCP is used for managing IP address allocation in local networks. This is an advanced protocol meant for network administrators.

The datagram header has a mandatory fixed-length part and an optional variable-length part, as shown in Fig. 9.1(b). The fixed part is 20 bytes long and the variable part can be up to 40 bytes, making the maximum size of the header 60 bytes. The different fields of the fixed part are illustrated in Fig. 9.1(c), where each row is 32 bits or 4 bytes long. The source and destination addresses are 32 bits each, corresponding to the IPv4 address format.

IP addresses have two versions: Version 4 and Version 6, abbreviated as IPv4 and IPv6 respectively. IPv4 uses 32-bit addresses and IPv6 uses 128-bit addresses. IPv4 has been in use for a very long time, over 30 years, and most of the computers on the Internet have IPv4 addresses as of now. IPv6 has been introduced recently and is expected, over the years, to replace IPv4 addresses. IP addresses are discussed in detail in Section 9.13. The 'version' field in the header specifies the version to which the header belongs. Version information in each datagram permits the coexistence of different versions and a smooth transition from one version to another.

Fig. 9.1: IPv4 datagram formats. (a) A generalised packet format: header followed by payload. (b) The datagram header: a mandatory fixed-length part and an optional variable-length part. (c) The mandatory fields of the IPv4 datagram header: version, header length, service type, datagram length, datagram identifier, fragment identifier, time to live, upper layer protocol, header error control, source address, destination address and optional fields of up to 10 32-bit words.

The maximum size of an IP datagram can be up to 64 kbytes, including the header and the text part, but such a big size is rarely used. Different networks are allowed to set their own limit for the maximum size of the datagram, well below the theoretical limit of 64 k. This maximum size set by a network is called the maximum transfer unit (MTU) of that network. This provision further complicates the processing of datagrams. If a datagram is delivered to a network with a size greater than the MTU of that network, then the datagram needs to be fragmented for transportation within that network and reassembled at its exit point. In such a case, we need a provision to identify the datagram and its fragments. The IP header fields 'datagram identifier' and 'fragment identifier' in Fig. 9.1(c) are provided for this purpose. While reassembling the fragments, IP must know the original protocol from which the fragments came. This is specified in the field 'upper layer protocol'. The one-bit 'M' field, when set to '1', implies 'more fragments to come'. This bit is set to '1' in all but the last fragment; the last fragment has it set to '0'. There may be certain applications where fragmentation is not acceptable. The one-bit 'D' field, if set to '1', means 'do not fragment'. In such cases, the route will be so chosen that no fragmentation occurs.

You may recall that sometimes packets may wander indefinitely without getting delivered to the destination due to routing errors. The field 'time to live' is used to exercise control over such malfunctioning. The field 'service type' addresses issues like priority. Other fields in the header are self-explanatory.
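As a hedged illustration of the field layout just described, the sketch below packs and unpacks the 20-byte mandatory part of an IPv4 header using Python's standard struct and socket modules. The addresses, TTL and protocol values are made-up examples, and the header checksum is left at zero for brevity.

```python
# Minimal sketch: building and reading the 20-byte mandatory IPv4 header.
# The checksum is left as 0 here; a real sender must compute it.
import socket
import struct

HEADER_FORMAT = "!BBHHHBBH4s4s"   # network byte order, 20 bytes in total

def build_ipv4_header(src: str, dst: str, payload_len: int,
                      ttl: int = 64, protocol: int = 17) -> bytes:  # 17 = UDP
    version_ihl = (4 << 4) | 5    # version 4, header length of 5 32-bit words
    return struct.pack(HEADER_FORMAT,
                       version_ihl, 0,              # version/IHL, service type
                       20 + payload_len,            # total datagram length
                       0, 0,                        # identifier, flags/fragment offset
                       ttl, protocol, 0,            # time to live, protocol, checksum
                       socket.inet_aton(src), socket.inet_aton(dst))

header = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=100)
fields = struct.unpack(HEADER_FORMAT, header)
print("version:", fields[0] >> 4, "TTL:", fields[5],
      "destination:", socket.inet_ntoa(fields[9]))
```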

9.6.2 User Datagram Protocol (UDP)

UDP provides connectionless service at the user level. It uses IP for this purpose. In that sense, UDP is a higher-level protocol when compared to IP. Here, a user submits his/her entire message to UDP with a request for transfer to the specified destination. The user message is a payload for UDP. In turn, UDP encapsulates this with its own header and passes the same to IP as payload. User datagrams are different from IP datagrams. User datagrams do not conform to the IP standard; they are just chunks of information of any size. UDP encapsulates the user datagram with its own header to form a UDP datagram. UDP may split a user datagram into multiple UDP datagrams conforming to IP standards. The UDP datagram is shown in Fig. 9.2.

Now let us see why UDP adds its own header to the user data. Many application processes or users on a computer may use UDP simultaneously. Hence, UDP needs to maintain the identity of each individual process and its corresponding destination process. This information is kept in its header in the form of source and destination port numbers, so that the datagram may be delivered to the correct destination process along with the source identification. In Fig. 9.2, each row is 4 bytes long. With two rows, the UDP header is 8 bytes long. The port fields in the header identify the source and destination processes or applications.

Fig. 9.2: UDP datagram structure
  Row 1 (4 bytes): Source Port | Destination Port
  Row 2 (4 bytes): UDP Length | UDP Error Control
  Followed by: UDP Data or payload

Each port field is 16 bits long. The destination port value is used to deliver the user datagram to the correct application. The destination application may use the source port value for sending a response to the source application. The port address feature is the one that distinguishes UDP from IP; otherwise, the functional capability of UDP is the same as that of IP. As in the case of IP, UDP messages may be lost, duplicated or delivered out of sequence.

The value of the UDP length field specifies the total length of the datagram, including the data part and the header. The use of the UDP error control field is optional. This field is used only for the header portion of the PDU, i.e. error control is done only for the header. UDP does not perform error control at the datagram level; this must be taken care of at the application level. The payload supplied by the user or an application program follows the header. The entire UDP datagram with its header and user data becomes the payload for IP.

UDP, being a connectionless service, functions on a best-effort basis. There is no delivery acknowledgement in UDP and no guarantee of delivery. But it is used extensively, like the postal system. The protocol is simple, efficient and fast. There are a large number of applications where occasional non-delivery is acceptable. If the underlying network is reliable, UDP is very effective.
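The following short sketch shows connectionless UDP exchange in Python over the loopback interface, illustrating how the destination and source port numbers are used. The port number 9999 and the message contents are arbitrary choices for the example.

```python
# Minimal UDP sketch on the loopback interface: the receiver is reached via a
# known destination port, and the source port carried in the datagram lets it
# send a reply back.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))            # destination port the sender targets

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"best-effort datagram", ("127.0.0.1", 9999))

data, (src_host, src_port) = receiver.recvfrom(2048)
print(data, "arrived from source port", src_port)
receiver.sendto(b"reply", (src_host, src_port))   # reply using the source port
print(sender.recvfrom(2048)[0])

sender.close()
receiver.close()
```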

9.7 CLIENT-SERVER ARCHITECTURE

As mentioned earlier, FTP, Telnet and web browsers are based on client-server architecture. Client-server architecture is the most widely used form of computation on data networks. It has evolved from the interactive computing model that was prevalent in the 1960s and 1970s. In interactive computing, a user interacts with a mainframe computer via a terminal that may be dumb or smart. The interaction model follows a master-slave approach: the mainframe computer acts as the master and the terminal as the slave. The slave terminal is under the complete control of the master computer.

With the advent of personal computers and data networks, the master-slave model of interaction has given way to the peer-to-peer interaction model. Peer-to-peer interaction permits arbitrary communication among computers on the network. No distinction is made among the computers. A PC may contact another PC or a large mainframe equally easily. Similarly, a mainframe computer can contact another mainframe or a PC. Distributed computing has become the norm. Distributed computing means any form of computation between two or more computers communicating over a network.

Computers that provide different types of services, called servers, are on the network. The services are accessible to other computers that are treated as clients of the service-providing computers. This model of interaction is known as the client-server architecture. A computer on the network may act both as a server and as a client: when it provides a service, it is a server, and when it accesses the services of another computer, it is a client. We may thus say that the client-server architecture is a form of distributed computing with peer-to-peer interaction. The client-server configuration is depicted in Fig. 9.3. There are two machines and a network in the configuration: a server machine, a client machine and a data network.

Fig. 9.3: Client-Server configuration

The server and the client interact via the data network. The server provides a set of information and computational services that are availed by remote clients. As shown in Fig. 9.3, usually many clients access one server simultaneously. It must be noted that the server and client machines do not actually interact; it is the server program and the client program that interact, although we normally speak of server and client interaction. Support for multiple clients is possible only because of program-to-program interaction. A server creates as many processes of the same program as there are clients logged on to it. Use of the multiprogramming and time-sharing features of the server operating system makes this possible. The server machine is one, but the instances of a server application program are many. This is how many users access one web site simultaneously. Client-server interaction may take place at one of the following three levels:


  1. Human - Server Program
  2. Human - Human
  3. Client Program - Server Program

The first case is the most popular one, with a human client and a machine server. A typical example is that of a user accessing information from a server, say searching a database. An example of the second case is a student-teacher chat session: in online learning, a student is tutored by a teacher; the student is the client and the teacher is the server. Timed periodic file transfer or email transfer between two or more machines is an example of the third case.

In all client-server interactions, it is the client that always initiates a session. The server is ready and waiting without doing anything. When a client request comes, the server program responds. This is like a shopkeeper who is ready to sell, with his shop open, but the actual transaction takes place only when a customer arrives. The server service must be available 24 x 7 (24 hours a day, 7 days a week). Server systems are generally more powerful than client systems. They fall into one of the following categories:
- PC servers
- Workstation servers
- Mainframe servers

PC servers typically use standard 32-bit microprocessors. They have large RAM and hard disk capacity. They are ruggedised for continuous uninterrupted running, with backup power systems and cooling systems where required. PC servers must have an operating system that can handle multiple users, as many client PCs may connect to the server at a time. Such operating systems are known as network operating systems (NOS). Some of the popular NOS are MS Windows NT, Windows 2003, Novell Netware and Linux. All the servers are designed to support simultaneous access from many clients.

Workstation servers use high-power or custom-designed microprocessors. They are generally 64-bit or 128-bit microprocessor based systems. Workstation servers run under Unix-like operating systems that have a rich set of tools for supporting a wide variety of applications. Unix is a more reliable and secure operating system when compared to Windows. Linux is a recent addition to the world of operating systems and is considered a suitable substitute for both Unix and Windows. Linux is available in the open software domain. Some predict that in future both PCs and workstations may run Linux instead of Windows or Unix; however, experience so far has not shown this to be true.

Mainframe computer based servers are even more reliable and powerful than Unix workstation servers. Mainframe based servers are often called enterprise servers to convey the fact that they are more powerful than PC servers or workstation servers.

Client systems are of two types:
- Desktop personal computers
- Mobile stations

The most popularly used desktop systems are Intel microprocessor based computers running Microsoft's Windows operating system. Such systems are sometimes called 'Wintel' systems.
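To tie the pieces of this section together, here is a minimal, hedged sketch of client-server interaction in Python over the loopback interface: the server program binds, listens and waits, and the client program always initiates the session. The port number 8888 and the messages are arbitrary example values.

```python
# Minimal client-server sketch over TCP: the server waits for a request and
# the client initiates the session, mirroring the interaction described above.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8888))
server.listen()                                   # server is ready and waiting

def serve_one_client() -> None:
    conn, _ = server.accept()                     # blocks until a client arrives
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"service response to: " + request)

threading.Thread(target=serve_one_client, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 8888))           # the client always initiates
    client.sendall(b"request from client")
    print(client.recv(1024))

server.close()
```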

The user begins by invoking the 'open' command of FTP. The FTP client then invokes TCP to establish a connection with the remote computer. Once the connection is established, the FTP server is activated at the remote computer. At the next step, the user is authorised to access the remote computer by inputting a valid user name and password; a valid user account is required for this purpose. When the authorisation is successful, the user may examine and select a file on the remote computer by using the list command 'ls'. He/she then uses the FTP 'get' command to transfer the file to his/her own computer. The FTP client allows a user to transfer a file to the remote computer from the local computer as well. For this purpose, the user invokes the 'send' command of FTP. The FTP client application is closed by the 'bye' command.

FTP recognises only two types of files: text and binary. Any non-text file is treated as a binary file. Examples include audio, computer programs, spreadsheets and graphics data. Text files have to be strictly according to one of the standard character encoding schemes like ASCII or EBCDIC. If in doubt about the nature of the file, it is best to specify binary format. Binary format will successfully transfer a text file as well. However, transfer of text files is more efficient and faster, so where known, it is a good idea to specify 'text' as the file type. If an incorrect type is specified, the resulting file may be malformed.

There are server systems on the Internet which make files available to the general public. Examples include servers providing government circulars or legal judgements. Such public files can be accessed without the user having an account on the server. The FTP client makes this possible by providing an account called 'anonymous' with the password 'guest'. Since the FTP application runs on the client-server model, the FTP server must run under a multiprogramming and time-sharing operating system to enable multiple clients to access the server simultaneously.
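As a hedged illustration, the sketch below runs the command sequence just described (open, login, list, get, bye) using Python's standard ftplib module instead of a command-line client. The host name and file name are placeholders, not a real server or document.

```python
# Sketch of an anonymous FTP session: connect, log in, list the directory,
# download one file in binary mode, then quit. Host and file names are
# placeholders for illustration only.
from ftplib import FTP

with FTP("ftp.example.org") as ftp:                       # like the 'open' command
    ftp.login("anonymous", "guest")                       # public 'anonymous' account
    ftp.retrlines("LIST")                                 # like the 'ls' command
    with open("unit9.pdf", "wb") as local_file:
        ftp.retrbinary("RETR unit9.pdf", local_file.write)  # like the 'get' command
# leaving the 'with' block closes the session, like the 'bye' command
```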

9.8.2 Remote Login (Telnet)

Telnet allows an Internet user to log into a remote time-sharing computer and access and execute programs on the remote machine. For this purpose, the user invokes a Telnet client on his/her machine and specifies the identity of the remote machine. The Telnet client makes a connection to the remote computer using TCP. Once the connection is established, the remote computer (Telnet server program) takes over the user's display and issues a login command. The user follows the regular login procedure by giving his/her account name and password. From then on, the user's computer behaves exactly like a terminal on the remote system. When the user logs out, the remote computer breaks the Internet connection and the Telnet client on the local machine exits automatically.

Remote login is a general access feature, and this generality makes it a powerful tool on the Internet. It makes the programs on the remote computer accessible without having to make any changes to the programs themselves. The installation of the Telnet server on the time-sharing system is all that is required. The Telnet client and server together make the user's computer appear as a standard terminal on the remote system. Hence, no changes are required as far as the application on the remote system is concerned. In view of this generality, arbitrary brands of computers can be connected to the remote system. In effect, any computer on the Internet can become a Telnet client to any Telnet server on the Internet. Unlike FTP or e-mail, Telnet allows the user to interact dynamically with the remote system. Due to this, the Telnet service is very popular.

Telnet sessions may run into occasional problems.

The application program on the remote computer may malfunction or freeze, and the local computer then hangs. We need a mechanism to come out of this situation. Remember that during a Telnet session, two programs are running: the program on the remote computer and the Telnet client on the local machine. Telnet makes a provision to switch between these two programs. Once a Telnet session is established, every keystroke by the user is passed on to the remote computer. A special combination keystroke, like Ctrl+], is reserved to revert to the local program. The Telnet client examines every keystroke of the user before passing it on to the remote machine. If the special combination key is pressed, it stops communication with the remote machine and allows communication with the local client program. The user can then terminate the connection with the remote computer, close the Telnet client and resume local operations.
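The escape-key mechanism can be sketched without a network at all. In the toy Python example below, the escape byte (0x1D, which is what Ctrl+] sends) and the forward_to_remote function are illustrative assumptions standing in for the real Telnet client's network connection.

```python
# Toy sketch of the Telnet escape-key logic: every keystroke is forwarded to
# the remote machine unless it is the reserved escape combination, in which
# case control returns to the local Telnet client program.
ESCAPE_KEY = b"\x1d"          # the byte a terminal sends for Ctrl+]

def forward_to_remote(key: bytes) -> None:
    """Stand-in for writing the keystroke to the network connection."""
    print("sent to remote:", key)

def handle_keystrokes(keystrokes: list[bytes]) -> None:
    for key in keystrokes:
        if key == ESCAPE_KEY:
            print("escape pressed: switching to the local Telnet client")
            return                          # stop talking to the remote machine
        forward_to_remote(key)

handle_keystrokes([b"l", b"s", b"\n", ESCAPE_KEY, b"q"])
```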

Self-Check Exercise
Note: i) Write your answers in the space given below.
      ii) Check your answers with the answers given at the end of this Unit.

  1. Let the IGNOU LIS course modules be available on a fictitious FTP server called "cm.lis@ignou.ac.in". Write down the FTP commands and responses to download this unit to your computer.

9.9 SWITCHING LEVEL CONVERGENCE

PROTOCOL: ATM

There are three major forms of switching techniques used in telecommunication and networks:

  1. Circuit Switching
  2. Packet Switching
  3. Cell Switching

Circuit switching is the oldest technique; it is used in telephone networks and has been in existence for over 120 years. Packet switching is about 50 years old and is used in data networks like the Internet. Cell switching is the most recent, evolved during the mid-1990s and used in new telecommunication infrastructure.

Before we proceed to discuss these techniques, definitions of two terms are in order: channel and circuit. A channel is defined as an information pipe with some specified characteristics like bandwidth, capacity, level of attenuation and noise immunity. A channel is a one-way link. A circuit is a two-way link and comprises two channels that enable two-way information flow between two entities. The two channels of a circuit need not have the same characteristics. If they do, the circuit is said to be symmetric; otherwise, the circuit is said to be asymmetric. Some authors tend to use the term channel to mean a physical medium. This is incorrect: a physical medium like optical fibre may carry several thousand information channels in a multiplexed mode.

Cell switching was conceived with two aims:
- To redefine the packet as a cell that is very small in size
- To leap forward in the speeds of virtual circuit switching.

Cell switching is designed to cause minimal network delay while at the same time ensuring efficient utilisation of network resources. You are aware of the MTU and the associated problem of possible segmentation and reassembly. This problem is completely avoided in cell switching. The entire infrastructure uses a standard cell size of 53 bytes: the cell has 48 bytes of payload and 5 bytes of header.

Now let us understand the merits of cell switching. Cell switching is built on a very reliable and ultra-fast network infrastructure. The reliable technology almost rules out cell losses. Even if a cell or two is lost very rarely, the effect is unnoticeable in real-time services like voice and video; the small size of the cell makes the loss imperceptible to hearing or viewing. In data services, of course, recovery is required. Cell switching uses the virtual circuit principle, so the cells are guaranteed to be delivered in sequence. The virtual circuit reduces switching overheads significantly and makes switching extremely fast. For this reason, cell switching is sometimes called fast packet switching.

The networks that use cell switching are called Asynchronous Transfer Mode (ATM) networks. The reason for this is that the cells of a particular message are not switched in a fixed time frame, say every millisecond. They are switched as they arrive. The arrivals are a mixed bag of cells from different messages or services, and they are switched in the order in which they arrive. Consecutive cells do not necessarily belong to the same message or service. In other words, the cells of a message or service are not continuous or synchronous in time; hence the term asynchronous transfer is used. Asynchronous transfer ensures effective utilisation of the network resources, since the resources are not dedicated to one service. In contrast, in conventional circuit-switched networks the information transfer is continuous and synchronous: information pieces arrive in a fixed time frame, say one byte every microsecond. In ATM, cell arrival is not time-synchronous. The time gap between the arrivals of two consecutive cells of the same message is not fixed but variable. However, the variability is very small because of the high-speed switching of ATM. For all practical purposes, the services perceive synchronous arrival.

ATM is a technique that marks the convergence of both circuit and packet switching. Hence, ATM protocols are often referred to as convergence protocols.
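The fixed 53-byte cell (48 bytes of payload plus a 5-byte header) can be illustrated with a small sketch. The 5-byte header layout used below (a circuit identifier, a sequence number and a last-cell flag) is a simplification invented for the example, not the actual ATM header format.

```python
# Illustrative sketch of fixed-size cells: split a message into 48-byte
# payloads and prefix each with a 5-byte header, giving 53-byte cells.
import struct

CELL_PAYLOAD = 48
HEADER_FORMAT = "!HHB"            # 5 bytes: circuit id, sequence number, last flag

def to_cells(message: bytes, circuit_id: int) -> list[bytes]:
    chunks = [message[i:i + CELL_PAYLOAD]
              for i in range(0, len(message), CELL_PAYLOAD)]
    cells = []
    for seq, chunk in enumerate(chunks):
        last = 1 if seq == len(chunks) - 1 else 0
        payload = chunk.ljust(CELL_PAYLOAD, b"\x00")    # cells are always full size
        cells.append(struct.pack(HEADER_FORMAT, circuit_id, seq, last) + payload)
    return cells

cells = to_cells(b"A" * 100, circuit_id=7)
print(len(cells), "cells of", len(cells[0]), "bytes each")   # 3 cells of 53 bytes
```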

Self-Check Exercise
Note: i) Write your answers in the space given below.
      ii) Check your answers with the answers given at the end of this Unit.

  1. Why is cell switching superior to circuit and packet switching?

9.10 MULTI PROTOCOL LABEL SWITCHING:

MPLS

As you know, a virtual circuit makes routing more efficient and reduces the header overhead by using a VCN (virtual circuit number). The VCN uniquely defines the source and the destination, the message and the route. Use of the VCN reduces the header size and the transmission overhead.

Routing is made simpler as the VCN is used to index a table to find out the outgoing link. Multi Protocol Label Switching (MPLS) is an attempt to bring the VCN concept to IP packets. Here, an IP packet is assigned a label that uniquely identifies the destination; in fact, the IP packet is encapsulated with the label header. The label is then used to index into a table to find out the outgoing link to be used for forwarding the packet. There is no examination of the destination address every time. This greatly simplifies the routing overhead and makes IP packets move faster through the network. This is particularly useful where large-volume data transfers are involved, as in the case of the FTP service.

MPLS is a router-based solution to improve router efficiency. It is not a protocol that runs on any user machine; user machines run only the conventional communication protocols like TCP and FTP. We need MPLS-capable routers to implement MPLS. Only MPLS-capable routers can assign labels and handle MPLS packets.

There are two ways in which labels are assigned to IP packets: data-driven and control-driven assignment. In data-driven assignment, when a packet enters an MPLS-capable router, it contacts the next MPLS-capable router and asks for a label for the destination address. The next MPLS-capable router in turn contacts the next one, and the process continues until the destination router is reached. Thus a fixed route is formed for all packets to the same destination. The first router now encapsulates the IP packet with the label supplied by the next router and forwards the packet. From then on, the label is used for routing. The name multi protocol signifies the fact that MPLS-capable routers can forward IP packets from a variety of protocols like TCP and FTP.

In control-driven assignment, a destination router creates labels for all its host computers and passes them to its neighbours. The neighbours in turn create labels and contact other neighbours. The process continues until all the routers acquire the path. Thereafter, the label is used for routing.
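The forwarding step itself can be sketched in a few lines. In the toy Python example below, the label table, link names and label values are invented for illustration; the point is simply that a single table lookup on the label replaces examining the destination IP address at every hop.

```python
# Toy sketch of label-based forwarding: the incoming label indexes a small
# table that gives the outgoing link and the label to use on the next hop.
label_table = {
    101: ("link-2", 205),    # incoming label -> (outgoing link, outgoing label)
    102: ("link-3", 317),
}

def forward(labelled_packet: tuple[int, bytes]) -> tuple[str, tuple[int, bytes]]:
    label, ip_packet = labelled_packet
    out_link, out_label = label_table[label]      # one lookup, no address parsing
    return out_link, (out_label, ip_packet)       # swap the label and forward

link, packet = forward((101, b"<encapsulated IP packet>"))
print("forward on", link, "with label", packet[0])
```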

9.11 TELEPHONE AND MOBILE NUMBERING

Every entity in any network needs to be uniquely identified; otherwise, the entity cannot be accessed. In telephone and mobile networks the entity is a phone instrument, and it is the number that uniquely identifies it. On the Internet, the entity is a computer, and it is the IP address that uniquely identifies it. Although it is called an address, an IP address is also a number. At the user level, addresses on the Internet are specified by strings of characters (e.g. ignou.com). In a sense, similar character addressing is also available in telephone networks by way of directories, where one looks up the number corresponding to a name. The addressing or numbering scheme follows a structure. We discuss the telephone and mobile addressing schemes in this section and IP addresses in Section 9.13.

9.11.1 Landline Telephone Numbering

Telephone numbering worldwide follows an international standard set by the International Telecommunication Union (ITU). The details are specified in the standards E.160 - E.163 of the ITU. In ITU parlance, the numbering scheme is called a numbering plan. As per the plan, the world is divided into 9 zones, with each zone being identified by a zone code as indicated in Table 9.1. The zone names in Table 9.1 are representative; for exact delineation, one is advised to refer to the standards. Europe is given two codes, as there are many countries there.