Asynchronous Transfer Mode (ATM) is a telecommunications standard, defined by the ITU and ANSI, for carrying a complete range of user traffic, including data, voice, and video signals.
– This standard was created to unify computer networks and telecommunication networks.
– The underlying technology is asynchronous time-division multiplexing: data is encoded into a stream of small, fixed-size cells.
– This is entirely different from Ethernet and IP, which use variable-sized frames and packets.
– ATM provides data link layer services that run over a wide range of OSI physical layer links.
– ATM resembles both small-packet switched networking and circuit-switched networking.
– ATM is well suited to networks that must handle both high-throughput data traffic and low-latency, real-time content.
– To achieve this, ATM uses a connection-oriented model: before any data is exchanged, a virtual circuit must be established between the two endpoints.
– ATM became the core protocol used over the SONET/SDH backbone of the PSTN (public switched telephone network) and ISDN (integrated services digital network).
– ATM was originally developed in the 1980s for Broadband ISDN.
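The connection-oriented model above can be sketched as a per-switch lookup table: when a virtual circuit is set up, each switch along the path installs a mapping from the incoming (port, VPI, VCI) to the outgoing one, and cells are relabelled hop by hop. A minimal sketch, with all port and label values made up for illustration:

```python
# Hypothetical forwarding table for one virtual circuit through a switch.
# Installed at connection setup; cells carry no addresses, only labels.
switch_table = {
    # (in_port, vpi, vci): (out_port, new_vpi, new_vci)
    (1, 0, 100): (3, 0, 200),
}

def forward(in_port, vpi, vci):
    """Relabel and forward a cell, or drop it if no circuit exists."""
    entry = switch_table.get((in_port, vpi, vci))
    if entry is None:
        return None  # no virtual circuit established for this cell
    return entry

print(forward(1, 0, 100))  # cell follows the established circuit
print(forward(2, 0, 100))  # no circuit on this port: cell is dropped
```

Because cells carry only short labels rather than full addresses, the per-cell lookup is a simple exact-match operation, which is what made fast hardware switching practical.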
Technology behind Asynchronous Transfer Mode
– Other protocols allow variable-sized packets, which can cause large queuing delays.
– Because ATM permits only one size, cells can be switched with minimal, predictable delay.
– ATM therefore always multiplexes data streams using fixed-size cells.
– This reduces jitter when carrying voice traffic.
– Large data packets can introduce queuing delays that exceed the acceptable limit for speech traffic.
– Speech traffic must have as little jitter as possible in order to produce good sound quality.
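The jitter argument can be made concrete with serialization delay: how long a voice cell must wait behind one transmission unit that is already on the wire. A rough calculation, assuming a 155.52 Mbit/s (OC-3) link for illustration:

```python
# Worst-case wait behind a single in-flight transmission unit on a
# 155.52 Mbit/s link. Illustrative only; real jitter also depends on
# queue depth and the number of hops.

LINK_BPS = 155.52e6

def serialization_us(nbytes):
    """Time in microseconds to clock nbytes onto the link."""
    return nbytes * 8 / LINK_BPS * 1e6

wait_behind_cell = serialization_us(53)      # one ATM cell
wait_behind_frame = serialization_us(1500)   # one typical IP packet

print(f"behind one 53-byte cell:    {wait_behind_cell:.1f} us")
print(f"behind one 1500-byte frame: {wait_behind_frame:.1f} us")
```

The roughly 28-fold difference is exactly the variability that fixed-size cells were designed to bound.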
– Although cells were designed to reduce queuing delays, they also support datagram traffic.
– There was a long argument over the length of the cells.
– Some parties wanted 32 bytes, some 48 bytes, and others 64 bytes.
– Eventually, as a compromise, a 53-byte cell was chosen as the standard size: 5 header bytes plus a 48-byte payload.
– ATM further defines two cell formats:
1. NNI (network–network interface)
2. UNI (user–network interface)
– Of these two, UNI is the more commonly used format.
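The two formats differ only in the first four bits of the 5-byte header: UNI carries a Generic Flow Control (GFC) field there, while NNI uses those bits to extend the VPI. A minimal packing sketch of the standard field layout:

```python
# Bit layout of the 5-byte ATM cell header. The first 4 bytes carry the
# packed fields below; the 5th byte is the Header Error Control (HEC).

def pack_uni_header(gfc, vpi, vci, pt, clp, hec):
    """Pack a UNI header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) + HEC(8)."""
    first32 = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return first32.to_bytes(4, "big") + bytes([hec])

def pack_nni_header(vpi, vci, pt, clp, hec):
    """Pack an NNI header: VPI(12) VCI(16) PT(3) CLP(1) + HEC(8)."""
    first32 = (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return first32.to_bytes(4, "big") + bytes([hec])

hdr = pack_uni_header(gfc=0, vpi=1, vci=100, pt=0, clp=0, hec=0)
print(hdr.hex())  # 5 header bytes; plus 48 payload bytes = 53-byte cell
```

NNI's 12-bit VPI simply gives the network core more virtual-path labels; everything else is identical.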
– ATM Adaptation Layers (AALs) enable ATM to support different kinds of services.
– The standardized AALs are:
1. AAL1: used for CBR (constant bit rate) services, circuit emulation, and synchronization.
2. AAL2: used for VBR (variable bit rate) services, such as real-time voice.
3. AAL3/4: used for VBR data services (originally two separate layers, AAL3 and AAL4, later merged); rarely used in practice.
4. AAL5: used for data, and the most common AAL for carrying IP traffic.
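For AAL5, segmentation is easy to quantify: the payload is followed by an 8-byte trailer (containing, among other fields, a length and a CRC-32) and padded so the whole PDU divides evenly into 48-byte cell payloads. A small sketch that computes only the resulting cell count, with the trailer contents omitted:

```python
import math

def aal5_cells(payload_len):
    """Number of 53-byte cells needed to carry payload_len bytes via AAL5."""
    pdu = payload_len + 8        # payload + 8-byte AAL5 trailer
    return math.ceil(pdu / 48)   # pad up to a whole number of cell payloads

print(aal5_cells(40))    # 40 + 8 = 48 bytes -> fits in exactly 1 cell
print(aal5_cells(1500))  # a typical IP packet spans many cells
```

Note that adding a single byte past a 48-byte boundary costs a whole extra cell, which matters for the efficiency discussion below.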
– The AAL in use is not indicated in the cell itself; it is agreed per virtual connection by the endpoints.
– Networking speeds have increased tremendously since ATM was designed.
– This has reduced the need for small cells to limit jitter.
– It also gives a reason to replace ATM with Ethernet.
– Note, however, that raw link speed alone does not remove the jitter that arises from queuing on congested links.
– Implementing IP service adaptation at high speeds makes the hardware more expensive.
– POS (packet over SONET) is generally preferred over ATM at high speeds.
– This is because ATM's 48-byte payload forces segmentation and reassembly (SAR) of larger packets.
– This also makes ATM unsuitable to serve as a data link layer directly under IP.
– With POS, no SAR is required at the data link level.
– However, ATM still makes sense on slow or congested links.
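The preference for POS can be illustrated with ATM's "cell tax": every 53 bytes on the wire carry at most 48 bytes of payload, and AAL5 padding wastes more. A rough calculation, assuming AAL5 framing and ignoring all other protocol overheads:

```python
import math

def atm_efficiency(packet_len):
    """Fraction of link bytes that are user data when using AAL5 over ATM."""
    cells = math.ceil((packet_len + 8) / 48)  # 8-byte trailer, 48-byte payloads
    return packet_len / (cells * 53)          # 53 wire bytes per cell

print(f"1500-byte packet over ATM: {atm_efficiency(1500):.1%}")
print(f"  40-byte packet over ATM: {atm_efficiency(40):.1%}")
```

POS carries each packet whole in a single frame with only a few bytes of framing overhead, so at high speeds the roughly 10–25% cell tax shown above is hard to justify.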