Last Updated on July 31, 2021 by Admin
You have been alerted that TCP traffic leaving an interface has been reduced to near zero, while UDP traffic is steadily increasing at the same time. What is this behavior called, and what causes it?
- jitter, caused by lack of QoS
- latency, caused by the MTU
- starvation, caused by improper configuration of QoS queues
- windowing, caused by network congestion
This behavior is called starvation, and it is caused by improper configuration of QoS queues. When TCP and UDP flows are assigned to the same QoS queue, they compete with one another. This is not a fair competition: TCP reacts to packet drops by throttling back its sending rate, while UDP is oblivious to drops and takes up the bandwidth given up by the diminishing TCP traffic. Mixing UDP and TCP traffic in the same queue results in:
- Lower throughput
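The dynamic described above can be illustrated with a small, purely hypothetical fluid model (not a real network simulation): each interval, TCP halves its rate on any drop and otherwise probes upward, while UDP ignores drops and keeps growing, so TCP's share of the shared queue collapses toward zero.

```python
CAPACITY = 100  # packets per interval the shared queue can service (assumed)

def simulate(rounds=40, tcp_rate=50.0, udp_rate=10.0):
    """Toy model of TCP and UDP flows sharing one QoS queue."""
    for _ in range(rounds):
        offered = tcp_rate + udp_rate
        if offered > CAPACITY:
            tcp_rate /= 2            # TCP throttles back in response to drops
        else:
            tcp_rate += 1            # TCP probes for more bandwidth
        udp_rate = min(udp_rate + 5, CAPACITY)  # UDP is oblivious to drops
    return tcp_rate, udp_rate

tcp, udp = simulate()
print(f"TCP ~{tcp:.2f} pkts/interval, UDP ~{udp:.2f} pkts/interval")
```

After a few dozen intervals the TCP rate has been halved repeatedly and sits near zero, while UDP occupies the full queue capacity, matching the starvation symptom in the scenario.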
While it is true that jitter can be caused by a lack of QoS, jitter is not what is being described in the scenario. Jitter is the variation in latency, measured as the variability over time of packet latency across a network. This phenomenon seriously impacts time-sensitive traffic, such as VoIP, and can be prevented by placing this traffic in a high-priority QoS queue.
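As a rough illustration of "variation in latency", jitter can be estimated as the mean absolute difference between consecutive latency samples (a simplified take; real implementations such as RTP use a smoothed estimator):

```python
def mean_jitter(latencies_ms):
    """Mean absolute difference between consecutive latency samples (ms)."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

steady = [20, 20, 21, 20, 20]   # low variation: fine for VoIP
bursty = [20, 45, 10, 60, 15]   # high variation: audio quality suffers

print(mean_jitter(steady))  # 0.5
print(mean_jitter(bursty))  # 38.75
```

Both traces have similar average latency; only the bursty one has high jitter, which is why jitter, not latency alone, is the relevant metric for voice traffic.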
While latency can be caused by the maximum transmission unit (MTU) in the network, this is not a case of latency, although latency may be one of the perceived effects of starvation. Latency is the delay in reception of packets. The MTU is the largest packet size allowed to be transmitted, and an MTU that is set too large can result in latency.
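One concrete way the MTU contributes to latency is serialization delay: the time to clock one full-size packet onto the wire, which grows linearly with packet size. A minimal sketch (link speed and MTU values chosen for illustration):

```python
def serialization_delay_ms(mtu_bytes, link_bps):
    """Time to transmit one full-size packet onto the link, in milliseconds."""
    return mtu_bytes * 8 / link_bps * 1000

T1_BPS = 1_544_000  # example slow link: a 1.544 Mbps T1 circuit
print(round(serialization_delay_ms(1500, T1_BPS), 2))  # standard Ethernet MTU
print(round(serialization_delay_ms(9000, T1_BPS), 2))  # jumbo-frame MTU
```

On a slow link, a jumbo frame ties up the interface roughly six times longer than a standard 1500-byte frame, delaying any small time-sensitive packets queued behind it.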
While windowing can be caused by network congestion, this is not a case of windowing. Windowing is a technique used to adjust the number of packets that can be acknowledged at once by a receiving computer in a transmission. In times of congestion the window, or number of packets that can be acknowledged at a time, will be small. Later, when congestion subsides, the window size can be increased.
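The grow-when-clear, shrink-under-congestion behavior described above can be sketched with a simplified additive-increase/multiplicative-decrease rule (a toy version of what TCP congestion control does; the numbers are illustrative):

```python
def next_window(window, loss_detected):
    """Halve the window on loss, otherwise grow it by one segment."""
    return max(1, window // 2) if loss_detected else window + 1

w = 10
history = []
for loss in [False, False, True, False, False, False]:
    w = next_window(w, loss)
    history.append(w)
print(history)  # grows, halves when loss signals congestion, then grows again
```

The window climbs while the network is clear, drops sharply when congestion is detected, and then ramps back up, which is the pattern the explanation describes.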
Describe UDP operations