
Internet Protocol Suite and Applications

(The Internet Protocol Stack - The World Wide Web Consortium)


- The Conceptual Data Flow of The Internet Protocol Suite (TCP/IP)

The diagram below describes the conceptual data flow in a simple network topology of two hosts (A and B) connected by a link between their respective routers. The application on each host executes read and write operations as if the processes were directly connected to each other by some kind of data pipe. Once this pipe is established, most details of the communication are hidden from each process, because the underlying mechanics of communication are implemented in the lower protocol layers. By analogy, at the transport layer the communication appears as host-to-host, without knowledge of the application data structures or of the connecting routers, while at the internetworking layer individual network boundaries are traversed at each router.

Operation of the Internet Protocol suite (TCP/IP) between two Internet hosts (A and B) connected via two routers (top) and the corresponding layers of the Internet Protocol suite in use at each hop (bottom). Every node uses the Internet Layer to route packets to the next hop based solely on the IP address, while only the end hosts need the upper layers to send or receive application data. At the Transport and Application Layers the hosts can be said to have virtual end-to-end connections between them (if using a connection-oriented transport protocol), while at the lower layers each node knows only about its next-hop neighbor.

The Internet's network-layer protocol is called IP, short for Internet Protocol. IP provides logical communication between hosts. The IP service model is a best-effort delivery service: IP makes its "best effort" to deliver datagrams between communicating hosts, but it makes no guarantees.

IP is the most important network protocol. It is a network layer protocol that supervises the transmission of packets from a source machine to a destination. Data is broken down into packets, or datagrams, of up to 64 KB (65,535 bytes) before it is transmitted. To reach their destination, the packets are free to take any transmission path and to arrive in any order. Because the routers along the way do not guarantee that every IP packet will be delivered, IP is said to be unreliable.
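The datagram's 64 KB limit comes from the 16-bit total-length field in the IPv4 header. As a minimal sketch (not a full implementation), the fixed 20-byte IPv4 header defined in RFC 791 can be read like this; the sample addresses and values are illustrative:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    # Fixed 20-byte IPv4 header, fields in network (big-endian) byte order.
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,       # 4 for IPv4
        "total_length": total_len,     # header + payload; 16 bits, so max 65,535
        "ttl": ttl,                    # decremented at each router hop
        "protocol": proto,             # 6 = TCP, 17 = UDP
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }

# Hand-built sample header: IPv4, TTL 64, protocol TCP, 10.0.0.1 -> 10.0.0.2.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(parse_ipv4_header(sample))
```

Because the total-length field is 16 bits wide, no datagram can describe more than 65,535 bytes, which is where the 64 KB ceiling above comes from.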


- Transport Layer Protocols: TCP and UDP

There are two transport layer protocols above IP: UDP and TCP. These transport protocols provide delivery services. UDP is a connectionless transport protocol used for message-based traffic where sessions are unnecessary. TCP is a connection-oriented protocol that employs sessions for ongoing data exchange. The File Transfer Protocol (FTP) and Telnet are examples of applications that use TCP sessions for their transport. TCP also provides reliability by having all packets acknowledged and sequenced. If data is dropped or arrives out of sequence, the stack's TCP layer will retransmit and resequence.
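The contrast can be seen directly with the operating system's socket API. This loopback sketch is illustrative (addresses, ports, and payloads are arbitrary): TCP requires a session to be established before any data flows, while UDP simply fires off self-contained datagrams.

```python
import socket

# TCP: connection-oriented -- a session must exist before data moves.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())           # three-way handshake happens here
conn, _ = srv.accept()
cli.sendall(b"over TCP")
tcp_data = conn.recv(1024)
print(tcp_data)                          # b'over TCP'

# UDP: connectionless -- each datagram is sent and received on its own.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"over UDP", rx.getsockname()) # no session establishment at all
udp_data, _ = rx.recvfrom(1024)
print(udp_data)                          # b'over UDP'
```

Note that the UDP sender never connects; if the receiver were absent, the datagram would simply be lost, which is exactly the "unreliable" behavior described below.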

UDP is an unreliable service and has no such provisions. Applications such as the Simple Mail Transfer Protocol (SMTP) and the Hypertext Transfer Protocol (HTTP) use transport protocols to encapsulate their information and/or connections. To enable similar applications to talk to one another, TCP/IP has what are called “well-known port numbers.” These ports are used as sub-addresses within packets to identify exactly which service or protocol a packet is destined for on a particular host.
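The mapping from service names to well-known ports lives in the operating system's services database (`/etc/services` on Unix-like systems), which Python exposes through `socket.getservbyname`. Which entries exist depends on the local database, but the classic assignments look like this:

```python
import socket

# Look up the well-known TCP port for a few familiar services.
# The results come from the OS services database; the standard
# assignments are ftp=21, telnet=23, smtp=25, http=80.
for service in ("ftp", "telnet", "smtp", "http"):
    print(service, socket.getservbyname(service, "tcp"))
```

A packet arriving at a host with destination port 80, for example, is handed to whatever process is listening for HTTP on that machine.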

TCP/IP serves as a conduit to and from devices, enabling the sharing, monitoring, or control of those devices. A TCP/IP stack can have a tremendous effect on a device's memory resources and CPU utilization. Interactions with other parts of the system may be highly undesirable and unpredictable, and problems in a TCP/IP stack can render a system inoperable.


- Data Transmission on The Internet

A very basic rule of data (files, e-mails, web pages, etc.) transmission across the Internet, and in fact a distinctive feature of the TCP/IP protocols used to move data, is that data is never transmitted “as such”. Instead, it is subdivided into so-called “packets” before transmission. The number of packets depends on the size of the data: the bigger the file, the more packets will be needed to “represent” it.

We could summarize the journey of a file such as an e-mail message or a web page, from computer A to computer B, as follows:


File in computer A –> Subdivided in packets by TCP/IP –> Packets travel, individually, to destination –> 
TCP/IP “remounts” the packets to re-create the original file in computer B –> File in computer B
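The journey above can be sketched in a few lines. This is a toy model, not a real stack: the 1,500-byte packet size mirrors a typical Ethernet payload, and the shuffle stands in for packets arriving in any order.

```python
import random

def packetize(data: bytes, size: int = 1500):
    # Tag each fixed-size chunk with a sequence number so it can be reordered.
    return [(seq, data[pos:pos + size])
            for seq, pos in enumerate(range(0, len(data), size))]

def reassemble(packets):
    # Sort by sequence number and glue the chunks back together.
    return b"".join(chunk for _, chunk in sorted(packets))

original = b"An e-mail message " * 400   # 7,200 bytes, so several packets
packets = packetize(original)
random.shuffle(packets)                  # packets may arrive in any order
reassembled = reassemble(packets)
assert reassembled == original
print(len(packets), "packets")           # 5 packets
```

The sequence numbers are what let the destination "remount" the file regardless of the order in which the packets arrived.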


The crucial thing to understand here is that at any given time (we're talking about milliseconds), the best route between two computers may change. Routers determine, at the moment of sending a particular packet, the best route at that instant. When sending the next packet, the best route may be different. Therefore, each packet from the same file could take a different route to reach the intended destination.

While files are being transferred between two computers, a dialogue goes on between the TCP/IP software of the sending computer and the TCP/IP software of the receiving computer, aimed at ensuring that the file transfer will be successful. If, for instance, a packet is missing on the receiving side, the receiving computer's TCP/IP will send a message to the sender's TCP/IP asking it to re-send a particular packet (this is specifically true of the TCP protocol – other protocols such as UDP work differently). The dialogue ends when all the packets have reached the destination.
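The receiver's side of that dialogue can be sketched as follows. The `(seq, chunk)` packet format is illustrative; real TCP tracks byte ranges rather than whole packets, but the idea is the same: work out which sequence numbers are missing and ask for them again until none are.

```python
def missing_sequences(received, total):
    # Compare the sequence numbers that arrived against the full expected set.
    return sorted(set(range(total)) - {seq for seq, _ in received})

sent = [(0, b"aa"), (1, b"bb"), (2, b"cc"), (3, b"dd")]
arrived = [sent[0], sent[3]]                 # packets 1 and 2 were lost in transit

while missing_sequences(arrived, len(sent)):
    for seq in missing_sequences(arrived, len(sent)):
        arrived.append(sent[seq])            # "please re-send packet <seq>"

complete = sorted(arrived) == sent
print(complete)                              # True: the dialogue is finished
```

The dialogue terminates exactly when `missing_sequences` returns an empty list, mirroring the sentence above: it ends when all the packets have reached the destination.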


(IP Stack Connection - Wikipedia)

- How Data Travels Across The Internet

The Internet works by chopping data into chunks called packets. The data is first broken into small packets of information, as it cannot be sent whole. Each packet can carry a maximum of 1,500 bytes. Around these packets is a wrapper with a header and a footer. The information contained in the wrapper tells computers what kind of data is in the packet, how it fits together with other data, where the data came from and the data's final destination. 
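A toy version of that wrapper might look like this. The field layout is illustrative, not a real wire format: a header records which piece this is and how many pieces there are, and a footer carries a CRC-32 integrity check over everything before it.

```python
import struct
import zlib

def wrap(payload: bytes, seq: int, total: int) -> bytes:
    header = struct.pack("!HH", seq, total)                 # which piece, of how many
    footer = struct.pack("!I", zlib.crc32(header + payload))  # integrity check
    return header + payload + footer

def unwrap(packet: bytes):
    seq, total = struct.unpack("!HH", packet[:4])
    (crc,) = struct.unpack("!I", packet[-4:])
    if zlib.crc32(packet[:-4]) != crc:
        raise ValueError("corrupted packet")                # footer check failed
    return seq, total, packet[4:-4]

pkt = wrap(b"some data", seq=2, total=7)
print(unwrap(pkt))                                          # (2, 7, b'some data')
```

The header answers "how does this packet fit together with the others", and the footer lets the receiver detect corruption in transit, the two jobs the wrapper performs in the description above.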

Each packet then moves through the network in a series of hops. Each packet first hops to a local Internet service provider (ISP), a company that offers access to the network - usually for a fee. The next hop delivers the packet to a long-haul provider, one of the backbone providers that quickly carry data across the world. These providers use the Border Gateway Protocol (BGP) to find a route across the many individual networks that together form the Internet. This journey often takes several more hops, which are plotted out one by one as the data packet moves across the Internet. For the system to work properly, the BGP information shared among routers cannot contain lies or errors that might cause a packet to go off track - or get lost altogether.

The data then travels through a physical route to its destination, commonly consisting of some mix of coaxial cable, twisted-pair cable, and fibre optics. This physical route of cables is called the transmission medium. Each transmission medium is restricted to a fixed capacity known as its bandwidth. To transmit data successfully, a minimum bandwidth is required, which depends on the data being transported. A stream of data at 300 Mb/s, for instance, requires at least a Gigabit (1,000 Mb/s) Ethernet link, since slower links cannot keep up.
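A back-of-the-envelope check of that bandwidth claim (the link rates are the standard Fast Ethernet and Gigabit Ethernet figures; the helper function is just illustrative arithmetic):

```python
def transfer_time_seconds(size_bytes: int, link_mbps: float) -> float:
    # 8 bits per byte; link rate given in megabits per second.
    return size_bytes * 8 / (link_mbps * 1_000_000)

# A stream produced at 300 Mb/s generates 37.5 MB of data every second...
one_second_of_stream = 300_000_000 // 8      # bytes

# ...which 100 Mb/s Fast Ethernet needs 3 s to move (it falls behind),
# while 1,000 Mb/s Gigabit Ethernet moves it in 0.3 s (it keeps up).
print(transfer_time_seconds(one_second_of_stream, 100))    # 3.0
print(transfer_time_seconds(one_second_of_stream, 1000))   # 0.3
```

Any link whose bandwidth is below the stream's data rate takes longer than a second to move a second's worth of data, so the backlog grows without bound; that is what "minimum bandwidth" means here.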

For example, when you send an e-mail to someone, the message breaks up into packets that travel across the network. Different packets from the same message don't have to follow the same path. That's part of what makes the Internet so robust and fast. Packets travel from one machine to another until they reach their destination. The final hop delivers each packet to the recipient's machine, which reassembles all of the packets into a coherent message. A separate message goes back through the network confirming successful delivery.


Best-Effort Delivery

The Internet generates massive amounts of computer-to-computer traffic, and ensuring that all of that traffic can be delivered anywhere in the world requires the aggregation of a vast array of high-speed networks collectively known as the Internet backbone. In computer networking, a backbone is a central conduit designed to transfer network traffic at high speeds. Backbones connect local area networks (LANs) and wide area networks (WANs) together. Network backbones are designed to maximize the reliability and performance of large-scale, long-distance data communications. The best-known network backbones are those used on the Internet. The Internet backbone is made up of many large networks that interconnect with each other.

Best-effort delivery describes a network service in which the network does not provide any special features that recover lost or corrupted packets. These services are instead provided by end systems. By removing the need to provide these services, the network operates more efficiently. In the TCP/IP protocol suite, TCP provides guaranteed services while IP provides best-effort delivery. TCP performs the equivalent of obtaining a delivery confirmation from the recipient and returning it to the sender. Because IP provides basic packet delivery services without guarantees, it is called a best-effort delivery service. It does its best to deliver packets to the destination, but takes no steps to recover packets that are lost or misdirected.
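That division of labour, an unreliable network plus recovery in the end systems, can be simulated in a few lines. This is a stop-and-wait sketch, not real TCP (which uses sliding windows and timeouts); the loss rate and random seed are arbitrary choices for the demonstration.

```python
import random

def unreliable_link(packet, loss_rate, rng):
    # The "best-effort network": delivers the packet, or silently drops it.
    return None if rng.random() < loss_rate else packet

def send_with_retransmission(packets, loss_rate=0.4, seed=7):
    # The "end system": retransmits each packet until its delivery is confirmed.
    rng = random.Random(seed)
    delivered, attempts = [], 0
    for pkt in packets:
        while True:
            attempts += 1
            received = unreliable_link(pkt, loss_rate, rng)
            if received is not None:         # delivery confirmed (the "ACK")
                delivered.append(received)
                break                        # move on to the next packet
    return delivered, attempts

data = [b"p0", b"p1", b"p2", b"p3"]
delivered, attempts = send_with_retransmission(data)
print(delivered == data, attempts >= len(data))    # True True
```

The link itself never retries anything; all the recovery logic sits in the sender, which is exactly the split between IP's best-effort delivery and TCP's guarantees described above.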

All data transfers across the Internet work on this principle. It helps networks manage traffic - if one pathway becomes clogged with traffic, packets can go through a different route. This is different from the traditional phone system, which creates a dedicated circuit through a series of switches. All information through the old analog phone system would pass back and forth between a dedicated connection. If something happened to that connection, the call would end. 

That's not the case with traffic across IP networks. If one connection fails, data can travel across an alternate route. This works for individual networks and for the Internet as a whole. For instance, even if a packet doesn't make it to the destination, the machine receiving the data can determine which packet is missing by referencing the sequence information in the other packets. It can then send a message asking the sending machine to transmit the missing packet again. This all happens in the span of just a few milliseconds.

[More to come ...]

