What is jitter?

Jitter is an unwanted or excessive variation in the time delay between when a signal is transmitted and when it is received over a network connection. In other words, network jitter is the variance in latency between packets sent over the network.

Jitter mostly affects the quality of streamed content, such as video streaming, online gaming, and Voice over IP (VoIP), causing poor quality, delays, increased response times, and dropped packets. Network congestion, hardware limitations and malfunctions, routing changes, and other network anomalies can all cause jitter.

What is jitter in computer networks?

Jitter in computer networks is the variance in latency. In practice, it shows up as a disruption in the timing and sequence of packets arriving at or leaving a device.

Network jitter compared to normal transmission.

The variance is generally measured in milliseconds (ms): good connections have a reliable and consistent response time, while bad connections with a high level of jitter have an unreliable and inconsistent one. Jitter therefore affects both how long packets take to traverse the network and how many of them are lost.
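As a rough, self-contained illustration (all numbers below are made up), the following Python snippet simulates packets that leave a sender at a fixed interval but arrive with varying latency, so the gaps between arrivals are no longer constant:

```python
import random

# Hypothetical illustration: packets are sent every 20 ms, but each one
# experiences a slightly different network delay, so arrival spacing varies.
SEND_INTERVAL_MS = 20.0

random.seed(1)
send_times = [i * SEND_INTERVAL_MS for i in range(6)]
delays = [random.uniform(25.0, 35.0) for _ in send_times]  # per-packet latency in ms
arrival_times = [s + d for s, d in zip(send_times, delays)]

for i in range(1, len(arrival_times)):
    inter_arrival = arrival_times[i] - arrival_times[i - 1]
    print(f"packet {i}: latency {delays[i]:.1f} ms, "
          f"inter-arrival gap {inter_arrival:.1f} ms (sent every {SEND_INTERVAL_MS} ms)")
```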

Examples of network jitter

  • Constant jitter shows a roughly constant level of packet delay variation.
  • Transient jitter shows a substantial delay affecting a single packet.
  • Short-term jitter shows a substantial delay persisting over a number of packets.

The causes of network jitter

  • Network congestion - occurs when too many devices consume a network link's bandwidth at once.


  • Hardware limitations or malfunctions - can make network links unreliable and can unexpectedly reduce bandwidth and increase packet travel time.

  • Wireless connections - are inherently less reliable than wired links, which can reduce bandwidth and increase packet travel time.

  • Missing or misconfigured packet prioritization - if the packets of bandwidth- and latency-sensitive applications are not prioritized, those applications may experience reduced bandwidth and increased packet travel time.

How is network jitter measured?

We can measure jitter with multiple methods:

  • By measuring the round-trip time (RTT) of a series of packets originating from a single endpoint (a rough sketch follows this list).
  • By measuring the variation in transmission times between two endpoints in the network.
  • By performing a bandwidth test of the network link, which can also report the jitter level.
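As a minimal sketch of the first approach, the snippet below estimates RTT by timing TCP connection setup to a reachable host. The host, port, and sample count are placeholders, and a real measurement would typically use ICMP (ping) or dedicated probes instead:

```python
import socket
import time

def sample_rtt(host: str, port: int = 443, samples: int = 10) -> list[float]:
    """Estimate round-trip times (in ms) by timing TCP connection setup.

    Crude stand-in for ping when raw ICMP sockets are unavailable; the TCP
    handshake takes roughly one round trip to complete.
    """
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass  # close the connection immediately; we only care about setup time
        rtts.append((time.perf_counter() - start) * 1000.0)
        time.sleep(0.2)  # small pause between probes
    return rtts

if __name__ == "__main__":
    print(sample_rtt("example.com"))  # placeholder host
```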

One of the most common tools for measuring jitter is ping: we take the absolute differences between consecutive packet round-trip times and average them to get the mean jitter in the network. We can calculate jitter for the following example:

$$\frac{|27.5-29.6|+|29.6-25.7|+\ldots+|26.6-29.0|}{33}=19.9\ \text{ms}$$

Measuring jitter.
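The same calculation is easy to script. The sketch below averages the absolute differences between consecutive RTT samples; apart from the first values visible in the formula above, the sample list is made up, since the full series is not reproduced here:

```python
def mean_jitter(rtts_ms: list[float]) -> float:
    """Average of absolute differences between consecutive RTT samples (in ms)."""
    if len(rtts_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return sum(diffs) / len(diffs)

# First three RTTs from the example above plus made-up values to round out the
# list; with 34 samples there would be 33 consecutive differences, as in the formula.
samples = [27.5, 29.6, 25.7, 28.3, 26.6, 29.0]
print(f"mean jitter: {mean_jitter(samples):.1f} ms")
```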

Use of Quality of Service (QoS) to reduce jitter

Quality of Service (QoS) covers a range of mechanisms and technologies for managing how a network handles traffic, and with it we can mitigate jitter partially or even entirely. The main mechanisms are:

  • Queuing - enables reordering packets in network queues so that delay-sensitive packets are prioritized and sent through the network before less delay-sensitive ones (a minimal sketch follows this list).
  • Link fragmentation and interleaving (LFI) - splits larger packets into smaller fragments and interleaves delay-sensitive packets between those fragments, so small packets do not get stuck waiting behind a large one.
  • Compression - reduces payload and header sizes and consequently the number of bits that have to be transmitted through the network.
  • Traffic shaping - can intentionally increase the delay in packet transmission to reduce packet drops in the network.
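To illustrate the queuing idea, here is a minimal two-level priority scheduler in Python; the packet classes, priorities, and payloads are hypothetical and only show why delay-sensitive traffic experiences less queuing delay when it is dequeued first:

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical two-level priority scheduler: delay-sensitive packets (e.g. VoIP)
# are always dequeued before bulk packets, which is the basic idea behind QoS queuing.
VOICE, BULK = 0, 1            # lower number = higher priority
_counter = itertools.count()  # preserves FIFO order within the same priority

@dataclass(order=True)
class QueuedPacket:
    priority: int
    seq: int
    payload: str = field(compare=False)

queue: list[QueuedPacket] = []

def enqueue(priority: int, payload: str) -> None:
    heapq.heappush(queue, QueuedPacket(priority, next(_counter), payload))

def dequeue() -> str:
    return heapq.heappop(queue).payload

enqueue(BULK, "file chunk 1")
enqueue(VOICE, "voice frame 1")
enqueue(BULK, "file chunk 2")
enqueue(VOICE, "voice frame 2")

while queue:
    print(dequeue())  # voice frames come out first, then file chunks
```

Real QoS implementations run in routers and switches rather than application code, but the ordering effect on delay-sensitive traffic is the same.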

Other methods for reducing jitter

Properly designed networks help reduce jitter by making traffic more predictable. Monitoring the network's latency and bandwidth lets us detect congestion and other issues and fix them before they become problematic. We can also reduce unnecessary bandwidth usage by scheduling updates and backups outside of business hours.

Services that require constant low latency and high bandwidth, like video conferencing and VoIP, generally implement a jitter buffer that intentionally delays incoming data packets. Buffering can itself cause poor call quality if the buffer is not sized correctly: if it is too small, too many packets are discarded; if it is too large, the additional delay causes issues of its own.
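Below is a minimal sketch of a fixed-delay jitter buffer, assuming each packet carries a capture timestamp and may arrive out of order; real VoIP stacks use adaptive buffers, so this only illustrates the trade-off described above:

```python
import heapq

PLAYOUT_DELAY_MS = 60.0  # illustrative buffer depth

class JitterBuffer:
    """Fixed-delay playout buffer: hold packets until their playout deadline."""

    def __init__(self, playout_delay_ms: float = PLAYOUT_DELAY_MS):
        self.playout_delay_ms = playout_delay_ms
        self._heap = []  # ordered by capture timestamp

    def push(self, capture_ts_ms: float, payload: str, now_ms: float) -> None:
        # Discard packets that arrive after their playout deadline has passed.
        if now_ms > capture_ts_ms + self.playout_delay_ms:
            return
        heapq.heappush(self._heap, (capture_ts_ms, payload))

    def pop_ready(self, now_ms: float) -> list[str]:
        # Release packets whose playout deadline has been reached.
        ready = []
        while self._heap and self._heap[0][0] + self.playout_delay_ms <= now_ms:
            ready.append(heapq.heappop(self._heap)[1])
        return ready

buf = JitterBuffer()
buf.push(0.0, "frame 0", now_ms=35.0)   # arrived 35 ms after capture, kept
buf.push(20.0, "frame 1", now_ms=42.0)  # arrived early, waits in the buffer
print(buf.pop_ready(now_ms=60.0))       # frame 0 is due, frame 1 is not yet
print(buf.pop_ready(now_ms=80.0))       # frame 1 becomes due
```

Increasing PLAYOUT_DELAY_MS makes late packets less likely to be dropped, at the cost of extra end-to-end delay.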

For a home user, the wireless network link is usually the main source of unpredictable network latency and bandwidth reductions. Users who need constant low latency and high bandwidth can upgrade their wireless equipment or switch to physical Ethernet cables whenever possible.

Glossary

Latency

A measure of the time between a request being made and fulfilled. Latency is usually measured in milliseconds.

Ping

A tool that measures round-trip latency at the network layer (layer 3 of the OSI model).

Jitter

Unwanted variation in latency over a network.