What is Latency?
A certain amount of delay is acceptable in any work, but what is acceptable when it comes to networking?
Any network, application, or software that you use experiences a certain amount of delay while communicating from one point to another. Those delays may be natural or due to unavoidable situations.
This networking delay is known as latency. Specifically, latency is the delay between a user's action and the corresponding response from the server or application, as observed by the user.
Latency is measured in milliseconds, as it is the time taken for a data packet to travel between systems. This delay can affect many other aspects of the software or application.
What are the causes of latency?
Although latency is inevitable, there are certain reasons why it occurs.
Distance: The distance between two systems or communication points is a major factor influencing latency. Since data must travel the full path in both directions, the longer the distance, the higher the observed latency.
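To see how much delay distance alone contributes, here is a minimal sketch that estimates one-way propagation delay. It assumes signals in optical fibre travel at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum); the distance figure is an illustrative approximation, not a measurement.

```python
# Assumption: signal speed in fibre is ~200,000 km/s (approximate).
SPEED_IN_FIBRE_KM_PER_S = 200_000

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a given distance."""
    return distance_km / SPEED_IN_FIBRE_KM_PER_S * 1000

# A ~5,600 km link (roughly New York to London) adds about 28 ms
# each way before any routing or processing delay is counted.
print(round(propagation_delay_ms(5600), 1))
```

This is a lower bound: real round trips also include queuing, routing, and processing delays on top of the raw propagation time.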
Propagation media: The transmission cables used can be of different types, and the material also influences latency. Traditional copper cables generally introduce more latency than modern optical fibre cables.
Routers: Communication devices can also add latency. Each router along the path must receive, process, and forward packets, and inefficient routing equipment increases the delay.
Packet size and loss: Data is transmitted in the form of packets. If the packet size is large, the transmission takes longer. Packets can also be lost in transit and have to be retransmitted, so the data reaches its destination later than expected. (A related problem, variation in packet delay, is known as jitter.)
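The effect of packet size can be sketched with simple arithmetic: the time to serialise a packet onto a link is its size in bits divided by the link capacity. The packet and link figures below are illustrative assumptions, not measurements.

```python
def transmission_delay_ms(packet_bytes: int, link_mbps: float) -> float:
    """Time to push a packet onto a link of the given capacity, in ms."""
    bits = packet_bytes * 8
    return bits / (link_mbps * 1_000_000) * 1000

# A 1,500-byte packet on a 10 Mbps link takes ~1.2 ms just to serialise;
# the same packet on a 1 Gbps link takes only ~0.012 ms.
print(transmission_delay_ms(1500, 10))
print(transmission_delay_ms(1500, 1000))
```

Serialisation delay is only one component, but it shows directly why larger packets on slower links mean higher latency.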
Other delays: Intermediate devices such as storage or caching systems may sit on the transmission path. If these devices are slow to store and forward the data, they add to the network delay.
What are the types of latency?
Depending on the application used or the mode of transmission, there can be various types of latency.
Interrupt latency: The time it takes a computer to act on an interrupt, i.e. the delay between a hardware signal arriving and the operating system responding to it.
Fibre optic latency: Though fibre optic cables have lower latency than other cables, delays still occur due to defects in the cable such as bends and cuts. These interrupt the transmission and increase latency.
Internet latency: The Internet is widely used all over the world by a huge number of people. When the distance between two systems is large while connected to the internet there may be a delay for the data packet to travel.
WAN latency: WAN latency is closely related to internet latency. Compared with a LAN, a WAN carries traffic for far more users over longer distances, which adds networking delay.
Operational latency: This depends on the operations being performed by the users. If multiple operations run at the same moment, latency is more likely to occur.
How to measure latency?
Even though some delay is a natural phenomenon, latency can be detected and measured, and by finding its causes you can also reduce it.
Commonly used tools include:
- Network monitoring tools, which display network traffic and bandwidth usage; the collected data helps you locate and fix issues.
- Network mapping tools, which show the path a data packet travels. They can point out regions where latency occurs so that they can be corrected.
- Traceroute tools, which track the movement of data packets, reporting the IP address and round-trip time for each hop along the route.
But latency is usually measured directly using metrics such as:
- RTT - Round Trip Time (RTT) is the time it takes for the data packet to reach the server from the client and back.
- TTFB - Time To First Byte (TTFB) is the time from when the client sends a request to when it receives the first byte of the server's response.
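RTT can be measured with a few lines of code by timing a small message and its reply. The sketch below runs against a local TCP echo server purely so the example is self-contained; a real measurement would target a remote host, and the helper names are my own, not a standard API.

```python
import socket
import threading
import time

def run_echo_server(server: socket.socket) -> None:
    """Accept one connection and echo back whatever it receives."""
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

def measure_rtt_ms(host: str, port: int) -> float:
    """Send a small payload and time the round trip, in milliseconds."""
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        sock.sendall(b"ping")
        sock.recv(1024)  # block until the echoed reply arrives
        return (time.perf_counter() - start) * 1000

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()
rtt = measure_rtt_ms("127.0.0.1", port)
print(f"RTT: {rtt:.3f} ms")
server.close()
```

On localhost the RTT will be a fraction of a millisecond; across the internet, the same measurement typically reports tens to hundreds of milliseconds.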
How to reduce latency?
Some basic alterations in the network can reduce the amount of latency in the system. Generally, methods such as tuning, tweaking or upgrading the applications and systems used can offer better performance.
The most effective way of reducing latency is implementing a Content Delivery Network (CDN). A CDN is a set of servers placed at various locations along the network path. Transmitted data is cached at locations closer to the end users, so travel distance and time are greatly reduced, which in turn results in lower latency.
On the other hand, latency can also originate at the user's end. Even when there is no issue with the network or the propagation media, problems on the user's own system can be the reason for the delay.
Latency, Bandwidth and Throughput
Though the three are significantly different from one another, they are interconnected, and the performance of the network is greatly influenced by all of them.
Bandwidth: The capacity of the network path, i.e. how much data can be pushed through it at once. If the bandwidth is low, less data can propagate, since the capacity to push packets through is smaller.
Throughput: It is the amount of data that can be transmitted in the network from one point to another in the specified time.
All three are interconnected, because a problem with one will affect the others. For example, if bandwidth is low, packets queue up waiting to be sent, latency rises, and throughput falls.
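The link between latency and throughput can be made concrete for window-based protocols such as TCP: with a fixed window, the sender can have at most one window of data in flight per round trip, so throughput is roughly the window size divided by the RTT. The 64 KB window below is an illustrative assumption, not a measured value.

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on throughput in Mbps for a given window and RTT."""
    bits_per_second = (window_bytes * 8) / (rtt_ms / 1000)
    return bits_per_second / 1_000_000

# Doubling the RTT halves the achievable throughput, no matter how
# much raw bandwidth the link has.
print(max_throughput_mbps(65_536, 20))   # ~26 Mbps at 20 ms RTT
print(max_throughput_mbps(65_536, 200))  # ~2.6 Mbps at 200 ms RTT
```

This is why a high-bandwidth link can still feel slow over a high-latency path: the sender spends most of its time waiting for acknowledgements rather than transmitting.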
In summary, communication systems will always involve some delay. But when latency crosses acceptable limits, it disrupts applications and frustrates users. By implementing the right tools and upgrading systems, we can greatly reduce the latency incurred.