Cloudflare’s global network and backbone in 2026.
Cloudflare’s network recently passed a major milestone: we crossed 500 terabits per second (Tbps) of external capacity.
When we say 500 Tbps, we mean total provisioned external interconnection capacity: the sum of every port facing a transit provider, private peering partner, Internet exchange, or Cloudflare Network Interconnect (CNI) port across all 330+ cities. This isn’t peak traffic. On any given day, our peak utilization is a fraction of that number. (The rest is our DDoS budget.)
It’s a long way from where we started. In 2010, we launched from a small office above a nail salon in Palo Alto, with a single transit provider and a reverse proxy you could set up by changing two nameservers.
The early days of transit and peering
Our first transit provider was nLayer Communications, a network most people now know as GTT. nLayer gave us our first capacity and our first hands-on experience with peering relationships and the careful balance between cost and performance.
From there, we grew city by city: Chicago, Ashburn, San Jose, Amsterdam, Tokyo. Each new data center meant negotiating colocation contracts, pulling fiber, racking servers, and establishing peering through Internet exchanges. The Internet is not actually a cloud, of course. It’s a collection of specific rooms full of cables, and we spent years learning the nuances of every one of them.
Not every city was an easy deployment: we had to deal with missing hardware, customs strikes, and even dental floss. In one month in 2018, we opened in 31 cities in 24 days, from Kathmandu and Baghdad to Reykjavík and Chișinău. When we opened our 127th data center in Macau, we were protecting 7 million Internet properties. Today, with data centers in 330+ cities, we protect more than 20% of the web.
When the network became the security layer
As our footprint grew, customers asked for more than just website caching. They needed to protect employees, replace aging Multiprotocol Label Switching (MPLS) circuits, and secure entire enterprise networks. Instead of traditional appliances, we built systems to establish secure tunnels to private subnets and advertise enterprise IP space directly from our global network via BGP.
The scale of threats grew in parallel. In 2025, we mitigated a 31.4 Tbps DDoS attack lasting 35 seconds. The source was the Aisuru-Kimwolf botnet, including many infected Android TVs. It was one of over 5,000 attacks we blocked that day. No engineer was paged.
A decade ago, an attack of that magnitude would have required nation-state resources to counter. Today, our network handles it in seconds without human intervention. That’s what operating at 500 Tbps scale requires: moving the intelligence to every server in our network so the network can defend itself.
How our network responds to an attack
Here’s what actually happens when an attack hits our network. Packets arrive at the network interface card (NIC) and immediately enter an eXpress Data Path (XDP) program chain managed by xdpd, running in driver mode. Among the first programs in that chain is l4drop, which evaluates each packet against mitigation rules in extended Berkeley Packet Filter (eBPF). Those rules are generated by dosd, our denial-of-service daemon, which runs on every server in our fleet. Each dosd instance samples incoming traffic, builds a table of the heaviest hitters it sees, and broadcasts that table to every other instance in the colo. The result is a shared, colo-wide view of traffic, and because every server works from the same data, they reach the same mitigation decision.
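The heavy-hitter idea can be sketched in a few lines. This is an illustrative space-saving top-k table, not dosd's actual implementation; the class name and the merge-by-broadcast step are assumptions made for the example.

```python
from collections import Counter

class HeavyHitterTable:
    """Track the top-k heaviest traffic sources seen in sampled packets,
    using the space-saving algorithm: bounded memory, approximate counts.
    An illustrative sketch, not the actual dosd data structure."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.counts = Counter()

    def observe(self, key, packets=1):
        if key in self.counts or len(self.counts) < self.capacity:
            self.counts[key] += packets
        else:
            # Table is full: evict the current minimum and let the
            # newcomer inherit its count (an overestimate by at most
            # the evicted count, which is what makes memory bounded).
            victim, min_count = min(self.counts.items(), key=lambda kv: kv[1])
            del self.counts[victim]
            self.counts[key] = min_count + packets

    def merge(self, other):
        # Fold a peer instance's broadcast table into our own view,
        # then trim back to capacity so memory stays bounded.
        self.counts.update(other.counts)
        self.counts = Counter(dict(self.counts.most_common(self.capacity)))

    def top(self, n):
        return self.counts.most_common(n)
```

Two instances that sample different slices of the same attack converge on the same top entry once they merge each other's tables, which is the property the prose describes: every server works from the same data and reaches the same decision.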
When dosd detects an attack pattern, the resulting rule is applied locally via l4drop and propagates globally through Quicksilver, our distributed key-value (KV) store, reaching every server in every data center within seconds. Only after surviving l4drop do packets reach Unimog, our Layer 4 (L4) load balancer, which distributes them across healthy servers in the data center. For Magic Transit customers routing enterprise network traffic through our edge, flowtrackd adds an extra layer of stateful TCP inspection, tracking connection state and dropping packets that don’t belong to legitimate flows.
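The stateful-inspection step can be illustrated with a toy flow table. The real flowtrackd implements a much fuller TCP state machine; this sketch only captures the core rule that packets with no corresponding flow (an ACK flood, say) get dropped, and all names here are hypothetical.

```python
class FlowTable:
    """Minimal stateful TCP filter: only packets belonging to a flow
    opened by a SYN are forwarded; everything else is dropped.
    A sketch of the idea, not flowtrackd's actual state machine."""

    def __init__(self):
        self.flows = set()

    def handle(self, src, dst, sport, dport, flags):
        """Return True if the packet should be forwarded."""
        key = (src, dst, sport, dport)
        reverse = (dst, src, dport, sport)
        if "SYN" in flags and "ACK" not in flags:
            self.flows.add(key)           # new connection attempt
            return True
        if key in self.flows or reverse in self.flows:
            if "RST" in flags or "FIN" in flags:
                self.flows.discard(key)   # connection teardown
                self.flows.discard(reverse)
            return True
        return False                      # no flow: e.g. an ACK-flood packet
```

A spoofed ACK flood fails the membership check immediately, while the SYN, SYN-ACK, and in-flow packets of a legitimate handshake all pass.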
The 31.4 Tbps attack we mitigated followed exactly this path. No traffic was backhauled to a centralized scrubbing center. No human intervened. Every server in the targeted data centers independently recognized the attack and began dropping malicious packets at line rate, before those packets consumed a single CPU cycle of application processing. The software is only half the story: none of it works if the ports aren’t there to absorb the traffic in the first place.
A distributed developer platform
Running code on every server in our network was a natural consequence of controlling the full stack. If we already ran eBPF programs on every machine to drop attack traffic, we could run customer application code there too. That insight became Workers, and later KV and Durable Objects.
Our developer platform runs in every city we operate in, not in a handful of cloud regions. In 2025, we added Containers to Workers, so heavier workloads can run on the edge too. V8 isolates and custom filesystem layers keep cold starts low. Your code runs where your users are, on the same servers that drop attack traffic at line rate via l4drop. Attack traffic is dropped before it reaches the network stack. Your application never sees it.
Forward-looking protocols: IPv6, RPKI, ASPA
We were early adopters of IPv6 and Resource Public Key Infrastructure (RPKI). BGP hijacks cause real outages and security breaches. RPKI allows us to drop invalid routes from peers, ensuring traffic goes where it’s supposed to. We sign Route Origin Authorizations (ROAs) for our prefixes and enforce Route Origin Validation on ingress. We reject RPKI-invalid routes, even when that occasionally breaks reachability to networks with misconfigured ROAs.
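Route Origin Validation itself is a small, well-specified check (RFC 6811). Here is a minimal sketch of the classification logic; the ROA list would come from a validated RPKI cache, and the example ASNs below are illustrative.

```python
import ipaddress

def rov(announced_prefix, origin_as, roas):
    """Classify a BGP announcement per RFC 6811 Route Origin Validation.

    roas: iterable of (prefix, max_length, asn) tuples, as produced by a
    validated RPKI cache. Returns "valid", "invalid", or "unknown"."""
    net = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA "covers" the route if the announced prefix falls inside it.
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            if origin_as == asn and net.prefixlen <= max_length:
                return "valid"
    # Covered but never matched: invalid. Not covered at all: unknown.
    return "invalid" if covered else "unknown"
```

Note the "invalid" case covers both a wrong origin AS (a classic hijack) and a too-specific announcement exceeding the ROA's maxLength, which is how a more-specific hijack of a signed prefix gets rejected.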
Autonomous System Provider Authorization (ASPA) is next. RPKI validates who owns a prefix. ASPA validates the path it took to get here. RPKI is a passport check at the destination, confirming the right owner, while ASPA is a flight manifest check: it verifies every network the traffic passed through. A route leak is like a passenger who boarded in the wrong city; RPKI wouldn’t catch it, but ASPA will.
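The manifest-check intuition can be made concrete. The full ASPA verification algorithm (draft-ietf-sidrops-aspa-verification) handles up-ramps, down-ramps, and peering; this greatly simplified sketch checks only the upstream direction, and the ASNs are documentation examples, not real attestations.

```python
def check_upstream_path(as_path, aspa):
    """Greatly simplified ASPA-style check for a route whose whole AS_PATH
    should be an "up-ramp": each AS must be an attested provider of the
    AS before it.

    as_path: list of ASNs ordered from the origin toward our neighbor.
    aspa:    dict mapping a customer ASN to its set of provider ASNs.
    Returns "valid" if every hop is authorized, "unknown" if some AS has
    no ASPA record, "invalid" if an unauthorized hop (a leak) is found."""
    result = "valid"
    for customer, provider in zip(as_path, as_path[1:]):
        providers = aspa.get(customer)
        if providers is None:
            result = "unknown"       # no attestation: can't prove a leak
        elif provider not in providers:
            return "invalid"         # hop not attested as a provider: leak
    return result
```

The key difference from ROV is visible in the signature: ROV looks at one (prefix, origin) pair, while this check walks every adjacent pair of ASes in the path, which is exactly what catches a leak that ROV cannot see.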
Current ecosystem adoption of ASPA looks like RPKI did in 2015. We were one of the first networks to deploy RPKI at scale, and today 867,000 prefixes in the global routing table have valid RPKI certificates, up from near zero a decade ago. At our scale, the protocols we choose have real consequences for the broader Internet. We push for adoption early because waiting means more hijacks and more leaks in the meantime.
AI agents and the evolving Internet
AI has changed what it means to have a presence on the web. For most of the Internet’s history, traffic was human-generated, by people clicking links in browsers. Today, AI crawlers, model training pipelines, and autonomous agents account for more than 4% of all HTML requests across our network, comparable to Googlebot itself. “User action” crawling, where an AI visits a page because a human asked it a question, grew over 15x in 2025 alone.
AI crawlers behave differently from browsers at the infrastructure level. Browsers load a page and stop. Crawlers instead fetch every linked resource at maximum throughput with no pause between requests. At our scale, distinguishing legitimate AI crawling from actual attacks is a real engineering problem. Our detection systems use a combination of verified bot IP ranges, TLS fingerprinting, behavioral analysis, and robots.txt compliance signals to make that distinction, and to give site owners the data they need to decide which crawlers to allow.
At the TLS layer, for example, a legitimate browser presents a ClientHello with a predictable set of cipher suites, extensions, and ordering that matches its declared User-Agent. A crawler spoofing that User-Agent but using a stripped-down TLS library will present a different fingerprint, and that mismatch is one of the signals our systems use to classify the request before it reaches the origin.
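A JA3-style digest makes that mismatch mechanical. This sketch is a simplified version of the idea (real JA3 also hashes elliptic curves and point formats, and our production classifier uses far more signals); the allow-list values below are hypothetical, not real browser fingerprints.

```python
import hashlib

def ja3_like(version, ciphers, extensions):
    """Simplified JA3-style digest of a ClientHello: the MD5 of the TLS
    version, cipher suite IDs, and extension IDs in wire order.
    (Real JA3 also includes elliptic curves and point formats.)"""
    raw = ",".join([str(version),
                    "-".join(map(str, ciphers)),
                    "-".join(map(str, extensions))])
    return hashlib.md5(raw.encode()).hexdigest()

# Hypothetical allow-list: fingerprints expected for each declared
# User-Agent family. Real systems learn these from verified traffic.
EXPECTED = {
    "Chrome": {ja3_like(771, [4865, 4866, 4867, 49195], [0, 23, 65281, 10])},
}

def user_agent_matches(ua_family, version, ciphers, extensions):
    """True if the ClientHello fingerprint is plausible for the declared UA."""
    return ja3_like(version, ciphers, extensions) in EXPECTED.get(ua_family, set())
```

A client claiming to be Chrome but offering a single cipher suite and one extension hashes to a different digest, so the spoofed User-Agent is flagged before the request ever reaches the origin.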
Help us build the next 500 Tbps
What started above a nail salon in Palo Alto is now a 500 Tbps network in 330+ cities across 125+ countries, where every server runs our developer platform and security services, not just cache. That’s sixteen years of architectural decisions compounding, and we owe it to the 13,000+ networks and partners who peer with us. We’re not done.
If you’re a network operator, peer with us. Our peering policy and interconnection details are on PeeringDB. If you’re interested in embedding Cloudflare infrastructure directly within your network, reach out to our team at [email protected] to join the Edge Partner Program.