04/21/2026
QUIC (RFC 9000) is a transport-layer protocol widely adopted by large content providers (or 'hypergiants'). It promises low latency paired with encryption and enhanced privacy. Despite these privacy protections, we found that passive measurements can reveal detailed information about the QUIC deployments of large content providers.
The starting point of our study was a simple question: What can we learn about QUIC deployments just by listening to unsolicited QUIC traffic? This question is particularly interesting because QUIC aims to enhance privacy by obfuscating metadata.
In our study, we used a network telescope and QUIC flow records. Network telescopes passively capture Internet Background Radiation - traffic from (malicious and benign) scanners, and responses to packets with a spoofed source address (backscatter). Notably, our analysis makes benign use of the response traffic (backscatter) likely caused by attackers. To assess completeness and verify our results, we also performed active measurements.
Our telescope data came from the UCSD network telescope covering a /9 and a /10 IPv4 prefix. We considered one month of QUIC packets captured each year between 2021 and 2025.
The Czech National Research Network (CESNET) provided flow data covering the same months as the telescope traffic in 2024 and 2025. We used this dataset to verify our telescope observations; every property discussed in this blog post was also confirmed in the flow records.
We analysed different properties of QUIC packets received by the telescope, focusing in this post on traffic from Cloudflare, Google, and Meta servers. More details can be found in our paper.
Configuration of retransmissions
Figure 1 shows the time gaps between the first received packet and subsequent retransmissions within the same connection. QUIC servers resend packets when they receive spoofed traffic that appears to come from unresponsive clients. Peaks indicate common configurations for when and how frequently servers resend packets.
In 2025, the first retransmission happened after 0.1 s in 94% of QUIC connections from Meta, after 0.3 s in 51% from Google, and after 1.0 s in 67% from Cloudflare. Except for Cloudflare (0.3 s in 31% of connections in 2022), these timings were stable throughout our entire measurement period, although the share of the most frequent retransmission interval varied. At 0.1 s, we observed more retransmissions than QUIC connections from Meta; these connections sent two retransmissions at around the 0.1 s mark.
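The gap analysis behind Figure 1 can be sketched as follows. The packet-record layout here (timestamp, source IP, server CID tuples) is a hypothetical simplification, not the telescope's actual schema:

```python
from collections import defaultdict

def retransmission_gaps(packets):
    """Group packets by (source IP, server CID) as a connection key and
    return, per connection, the gap between the first received packet
    and each subsequent retransmission (timestamps in seconds)."""
    by_conn = defaultdict(list)
    for ts, src_ip, server_cid in packets:
        by_conn[(src_ip, server_cid)].append(ts)
    gaps = {}
    for conn, stamps in by_conn.items():
        stamps.sort()
        first = stamps[0]
        gaps[conn] = [round(ts - first, 3) for ts in stamps[1:]]
    return gaps
```

Peaks in a histogram over all these gaps then correspond to common retransmission configurations.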
We observed exponential backoff in retransmission timeouts from Cloudflare, Meta, and Google servers. The maximum number of retransmissions differed between hypergiants, suggesting that servers allocate differing resources to keeping connection state.
Overall, we detected the shortest retransmission timeouts and the most retransmissions from Meta. This indicates that Meta reacts faster to packet loss and expects shorter delays between clients and servers than Google and Cloudflare do. In comparison, Google and Cloudflare reserve fewer resources to cope with faulty connections, which reduces their vulnerability to QUIC flood attacks that build up state.
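A simple way to test a connection's gap sequence for exponential backoff is to check that successive timeouts roughly double. This is an illustrative sketch; the tolerance is an arbitrary choice, not a value from our study:

```python
def looks_like_exponential_backoff(gaps, tolerance=0.25):
    """Return True if successive retransmission timeouts roughly double.
    `gaps` are offsets from the first packet, e.g. [0.1, 0.3, 0.7]
    corresponds to timeouts of 0.1, 0.2, and 0.4 seconds."""
    # Convert cumulative offsets into per-retransmission timeouts.
    timeouts = [gaps[0]] + [b - a for a, b in zip(gaps, gaps[1:])]
    # Exponential backoff: each timeout is about twice the previous one.
    ratios = [b / a for a, b in zip(timeouts, timeouts[1:])]
    return all(abs(r - 2.0) <= 2.0 * tolerance for r in ratios)
```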
Retry packets are rarely used to counter DoS attacks
QUIC Retry packets enable servers to verify the client address by requiring the client to reconnect with a Retry token issued by the server. Retries can mitigate QUIC floods, but add one round-trip time. We observed that this defence strategy is rarely deployed. For example, Cloudflare introduced Retries in 2025, but only 3% of QUIC packets from Cloudflare were Retries. This indicates that deployments favour low-latency connections over Denial-of-Service (DoS) mitigation.
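Retry packets can be spotted from the first byte of a packet alone: in QUIC version 1, the Header Form bit distinguishes long from short headers, and bits 4-5 of a long header's first byte carry the packet type (RFC 9000, Section 17). A minimal classifier:

```python
# QUIC v1 long header packet types (RFC 9000, Table 5).
LONG_TYPES = {0: "Initial", 1: "0-RTT", 2: "Handshake", 3: "Retry"}

def classify_quic_v1_packet(first_byte: int) -> str:
    """Classify a QUIC v1 packet from its first byte."""
    if not first_byte & 0x80:       # Header Form bit: 0 means short header
        return "1-RTT (short header)"
    return LONG_TYPES[(first_byte & 0x30) >> 4]
```

For example, a first byte of `0xF0` (form bit, fixed bit, and type bits `0b11` all set) classifies as a Retry.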
QUIC connection IDs carry information
QUIC uses Connection IDs (CIDs) to associate packets with connections, instead of the source and destination ports used by TCP and UDP. Server CIDs can leak data if hypergiants encode information in them. Such encoding distorts the random distribution of server CIDs.
Figure 2 visualises the frequencies of server CID nibble values as monitored in the backscatter traffic. Frequencies that diverge from the expected random distribution (shown in light-yellow and green) show server CIDs encoding specific information.
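This kind of nibble analysis can be sketched by counting how often each hexadecimal digit appears at each CID position and flagging positions whose shares deviate from the uniform 1/16 expected of random CIDs. The threshold below is an arbitrary illustrative choice:

```python
from collections import Counter

def nibble_counts(cids):
    """Per-position counters of nibble values over hex-encoded server CIDs."""
    counts = [Counter() for _ in range(max(map(len, cids)))]
    for cid in cids:
        for pos, nibble in enumerate(cid.lower()):
            counts[pos][nibble] += 1
    return counts

def non_random_positions(cids, threshold=0.2):
    """Positions where some nibble value's share deviates from the
    uniform expectation of 1/16 by more than `threshold`."""
    flagged = []
    for pos, counter in enumerate(nibble_counts(cids)):
        total = sum(counter.values())
        if any(abs(n / total - 1 / 16) > threshold for n in counter.values()):
            flagged.append(pos)
    return flagged
```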
Entirely passively, backscatter reveals that all hypergiants encode information in CIDs. The resulting distinct patterns can be used to detect off-net deployments outside of the Autonomous System (AS) of the hypergiant. The method and its accuracy are discussed further in our paper.
In 2021 and 2022, the server CIDs from Google followed a random distribution. In 2023 and 2024, the load balancer configuration changed. We observed server CIDs starting with 0b11 in 99% (2023) and 0b111 in 99.9% (2024) of long header packets.
According to the IETF draft 'QUIC-LB: Generating Routable QUIC Connection IDs', this indicates that Google was not encoding load-balancing information in server CIDs. Surprisingly, Google's QUIC implementation adopts the format changes of the Internet Draft, but does not use CIDs for load balancing.
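Checking this codepoint is a one-line bit operation: the QUIC-LB draft reserves the top bits of the first CID byte as a config rotation codepoint, where the all-ones value (two bits in earlier draft versions, three bits in later ones) marks CIDs that no load-balancer routing algorithm generated. A sketch:

```python
def quic_lb_codepoint(cid: bytes, bits: int = 3) -> int:
    """Extract the QUIC-LB config rotation codepoint from a server CID:
    the top `bits` bits of the first byte. The all-ones codepoint marks
    CIDs not generated by a load-balancer routing algorithm."""
    return cid[0] >> (8 - bits)

# The prefixes observed from Google (CID bytes are illustrative):
assert quic_lb_codepoint(bytes.fromhex("c0ffee"), bits=2) == 0b11   # 2023
assert quic_lb_codepoint(bytes.fromhex("e0ffee"), bits=3) == 0b111  # 2024
```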
Meta's QUIC implementation, mvfst, encodes details about hosts, workers, processes, and the version of this encoding within the server CID (see Figure 4). Higher densities of certain values in the first five bytes indicate that Meta currently encodes information in server CIDs.
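As an illustration of such cleartext encoding, a structured CID can be unpacked into fields with plain byte operations. The field widths and offsets below are hypothetical, chosen only to mirror the kinds of fields mvfst encodes (encoding version, host, worker, process); they are not mvfst's actual layout:

```python
def decode_structured_cid(cid: bytes) -> dict:
    """Decode a hypothetical structured server CID layout:
    byte 0: encoding version, bytes 1-2: host ID,
    byte 3: worker ID, byte 4: process ID.
    (Illustrative only; not the real mvfst format.)"""
    return {
        "version": cid[0],
        "host_id": int.from_bytes(cid[1:3], "big"),
        "worker_id": cid[3],
        "process_id": cid[4],
    }
```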
Observing load balancer configuration migration
Tracking the reception of QUIC backscatter from Meta allowed us to observe the migration to a new load balancer configuration in July 2023 (Figure 3). The migration took 10 days, and follow-up active measurements confirmed the changes.
Before this migration, host IDs identified individual load balancer instances. After the migration, Meta reused the same host IDs across different clusters. The next section describes the method used to infer the cluster structure.
Revealing the load balancer count of Meta
After identifying the information encoding in server CIDs, we assessed how complete the passive view was and whether active measurements could fill the gaps. To this end, we scanned all QUIC IP addresses active at the time (up to 7,355 IP addresses in 2025) in Meta's AS 32934. For each IP address, we completed 20,000 handshakes while successively decreasing the client port.
IP addresses were grouped into clusters when the same host ID appeared across multiple addresses in the same /24 prefix. We confirmed the derived cluster structure using reverse DNS: the International Air Transport Association (IATA) airport code encoded in the DNS Pointer (PTR) records of all virtual IPs in a cluster was identical, showing that clusters are confined to a single /24. This method revealed the number of load balancers per cluster. Figure 4 shows the distribution of cluster sizes for clusters observed worldwide. You can find more details about cluster structure and usage in our paper.
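The grouping step can be sketched as follows. This is a simplification of the method in our paper; the input is a hypothetical mapping from each scanned IP address to the set of host IDs seen in its CIDs:

```python
from collections import defaultdict
from ipaddress import ip_network

def group_into_clusters(host_ids_by_ip):
    """Group IPs whose CIDs revealed shared host IDs within one /24.
    `host_ids_by_ip` maps an IPv4 address string to a set of host IDs."""
    by_prefix = defaultdict(dict)
    for ip, host_ids in host_ids_by_ip.items():
        # Collapse each address to its covering /24 prefix.
        prefix = ip_network(f"{ip}/24", strict=False)
        by_prefix[prefix][ip] = set(host_ids)
    clusters = []
    for prefix, members in by_prefix.items():
        shared = set.intersection(*members.values())
        # A cluster needs the same host IDs on multiple addresses.
        if len(members) > 1 and shared:
            clusters.append({
                "prefix": str(prefix),
                "ips": sorted(members),
                "host_ids": sorted(shared),
            })
    return clusters
```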
Comparing backscatter with active measurements shows that in 2023, 2,366 QUIC connections revealed 93% of host IDs in a cluster. In 2025, the new structure allowed 100% coverage with only 545 connections. Backscatter detected the largest share of Meta host IDs in 2023, at 29%.
Why use passive measurements?
Our analysis showed that observations from passive measurements are reproducible with active measurements, but active measurements require prior knowledge of potential targets, generate additional network traffic, and might trigger intrusion detection systems. Telescope measurements complement active measurements by revealing real-world QUIC behaviour.
Will the deployment of structured CIDs increase? Can we apply the same methods to other deployments?
Structured CIDs may serve as a fingerprint of specific hypergiants. We found that Google migrated to such CIDs in 2023, and we observed distinct information encoding from Akamai, Amazon, Apple, Cloudflare, Fastly, Meta, and Microsoft in backscatter. The use of structured CIDs will increase over time because they simplify fine-grained provider-controlled routing.
However, standardisation might limit the uniqueness of these identifying properties and our ability to associate them with specific hypergiants. Advanced QUIC features, such as connection migration, even require additional data encoded in such IDs to reduce the overhead of synchronizing connection state.
Our detection of off-net deployments applies to other deployments and measurement methods, such as flow records, even without ground-truth knowledge from open-source implementations. Detection of Layer 7 load balancers is limited to Meta, since only Meta uses a cleartext encoding.
What are the implications of knowing the number of load balancers?
Encoding the destination load balancer into a CID enables clients to steer traffic to specific load balancer instances. This is unwanted behaviour because attackers could direct traffic to a single load balancer instance, bypassing the load balancing that mitigates single points of failure.
Although load balancer counts do not reveal the underlying capacity, knowing the distribution in a geographic region or the size of a single cluster can help estimate the load necessary to overload that Point of Presence (PoP).
This information can also benefit competitors, allowing them to anticipate business opportunities and local competition, assess the importance of a region, and improve their own infrastructure.
Why detecting off-net deployments matters to network operators
Predicting traffic between networks is difficult but necessary to deliver stable, low-latency Internet service. Inferring infrastructure details, such as server roles, can improve capacity planning and help detect unwanted traffic.
Fine-grained infrastructure details embedded in QUIC CIDs may reveal cache replication between off-net deployments, an activity normally billed as transit from the hosting networks.
For more detailed information about other large deployment configurations, such as those of Akamai, Amazon, Apple, Fastly, and Microsoft, read our paper 'Waiting for QUIC: Passive Measurements to Understand QUIC Deployments', which was presented at CoNEXT in December 2025. If you want to investigate QUIC traffic in more depth, CESNET, one of the data providers for this work, publishes a dataset of detailed QUIC flows recorded at the network edge of CESNET over one year, from June 2024 to May 2025.
Jonas Mücke is a PhD student and research associate at the Chair of Distributed and Networked Systems at TU Dresden. His research focuses on active and passive measurements to better understand complex Web infrastructures. He is particularly interested in additional insights gained from the emerging transport protocol QUIC.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.