– In general, propagation delay is the time a quantity takes to reach its destination.
– In the context of computer networks, propagation delay is the time taken by the head of the signal to travel from the sender to the receiver.
– It can be defined as the ratio of the length of the link to the propagation speed over a specific medium.
– It is equal to d/s, where d is the distance and s is the wave propagation speed.
– In wireless communication, the wave propagation speed is equal to the speed of light.
– In a copper wire, s lies in the range of 0.59c to 0.77c.
– This delay is a major hindrance to the development of computers with high operating speeds.
– In IC systems, it is termed the interconnect bottleneck.
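The formula above (delay = d/s) can be sketched in a few lines of Python; the 500 m length and the 0.67c copper speed are illustrative values chosen from the ranges mentioned in these notes:

```python
SPEED_OF_LIGHT = 3e8  # m/s, speed of light in vacuum

def propagation_delay(distance_m, speed_mps):
    """Propagation delay t = d / s, in seconds."""
    return distance_m / speed_mps

# Illustrative values: a 500 m copper segment, signal speed ~0.67c
copper_speed = 0.67 * SPEED_OF_LIGHT   # ~2e8 m/s
delay = propagation_delay(500, copper_speed)
print(f"{delay * 1e6:.2f} microseconds")  # ~2.49 microseconds
```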
Relationship between Propagation Delay and Maximum length of the Cable
– The minimum frame size permitted in an Ethernet network is 512 bits, or 64 bytes.
– This size is inclusive of the 32-bit cyclic redundancy check.
– On the other side, the maximum allowed length of an Ethernet cable segment is 500 m for thick cabling (10BASE5) and 185 m for thin cabling (10BASE2).
– There is a direct relation between these two specifications.
Propagation of Signal in Copper Medium
– The travelling speed, i.e., the propagation speed, of electrical signals in copper is almost two-thirds of the speed of light.
– The speed at which Ethernet operates is known: 10,000,000 bits per second, i.e., 10 Mbps.
– Using this, the length of wire occupied by one bit can be determined; it comes to approximately 60 feet, or 20 m, using the following calculations:
1. Speed of light in vacuum: 3 × 10^8 m/s
2. Speed of electrical signal in copper wire: 2 × 10^8 m/s
3. (2 × 10^8 m/s) / (10,000,000 bits/s) = 20 meters/bit
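The calculation in the list above is direct to verify in code:

```python
# Physical length of one bit on the wire:
# bit_length = propagation_speed / bit_rate
copper_speed = 2e8           # m/s, signal speed in copper
bit_rate = 10_000_000        # bits/s, 10 Mbps Ethernet

bit_length_m = copper_speed / bit_rate
print(bit_length_m)          # 20.0 meters per bit
```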
– From this, the minimum size of the Ethernet frame can be determined.
– The Ethernet controller can detect collisions on the wire only while it is in transmitting mode.
– Once the Ethernet NIC finishes its transmitting phase and switches to receiving mode, the only thing it does is listen for the 64-bit preamble that marks the beginning of a data frame transmission.
– The minimum frame size specified in Ethernet is therefore tied to the propagation speed of electrical signals in the wire.
– While sending a minimum-sized frame, the Ethernet card stays in transmitting mode and therefore keeps detecting collisions.
– There is enough time for a collision to propagate back to it from the most distant point.
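The condition described above, that a collision is only detectable if it returns while the sender is still transmitting, can be sketched as a simple check. The function name and the 20 m/bit default are illustrative, not part of any standard API:

```python
def collision_detectable(frame_bits, cable_m, bit_length_m=20):
    """A collision is detectable only if the frame is long enough
    that the sender is still transmitting when the collision signal
    completes its round trip over the cable."""
    round_trip_bits = 2 * cable_m / bit_length_m
    return frame_bits >= round_trip_bits

print(collision_detectable(512, 500))   # True: minimum frame on 500 m
print(collision_detectable(40, 500))    # False: frame too short
```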
Consider an example:
– Stations A and B are attached at the farthest ends of a 500 m thick Ethernet cable.
– Suppose A begins transmitting; by the time the signal reaches station B, 500 m away, A will have transmitted 25 bits.
– Now if B starts transmitting just before the signal from A reaches it, there will be a collision, which will reach A 25 bit times later.
– By this time A will have transmitted only 50 bits.
– This is an early collision.
– Notice that a late collision can never occur on a wire of this length.
– But there is still overhead in the minimum frame size. Why? There are two reasons:
1. If the length of the wire is extended using 4 repeaters, the signal may have to travel around 2500 m to reach B, or 5000 m for a round trip.
2. The wire specification is deliberately stricter than it strictly needs to be, leaving extra margin for errors.
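Putting the two reasons together shows why the 512-bit minimum is comfortably larger than the raw round-trip requirement. This is a sketch of the arithmetic only; the exact 512-bit figure in the standard also budgets for repeater and electronic delays:

```python
# Worst case with 4 repeaters: ~2500 m one way, ~5000 m round trip
round_trip_m = 2 * 2500
bit_length_m = 20                      # meters per bit at 10 Mbps

round_trip_bits = round_trip_m / bit_length_m   # 250 bit times
MIN_FRAME_BITS = 512                   # Ethernet minimum frame size

print(round_trip_bits)                 # 250.0
print(MIN_FRAME_BITS > round_trip_bits)  # True: margin remains
```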