Workshop

Kill the Default Route

Published on: 2026-02-15

By: Ian McCutcheon


Somewhere in the mid-90s, somebody added a default route to the enterprise network. It was a decision. Not an accident, not an inevitability—a choice. Somebody typed it into a router, and traffic that previously had nowhere to go suddenly had somewhere to go: out.


A World Without a Default Route

If you started your career after about 2000, you've probably never worked on a network that didn't have a default route. It's just there, like gravity. Every routing table has one. Traffic that doesn't match a specific route gets sent toward the internet, through a firewall, and out. It's so fundamental that questioning it feels a little unhinged.

But it wasn't always there. And it wasn't always wanted.

Large enterprises running frame relay WANs in the 90s made a deliberate choice: no default route advertised across the wide area network. Every destination in the routing table was explicitly known. If a router didn't have a route to something, the packet died. That was the design. Not a limitation of the technology—a policy enforced through the technology.

This wasn't mainstream. And the reason wasn't always security—bandwidth was expensive. Frame relay circuits were provisioned by committed information rate, and every kilobit cost money. A default route was an invitation for unknown traffic to consume a link you were paying dearly for. If a misconfigured application or a chatty protocol started sending traffic the network didn't expect, that traffic would ride the default route straight onto your WAN and eat into capacity that was budgeted for business-critical data. A network that only carries known traffic to known destinations doesn't care whether the unexpected packet is malicious or just misconfigured—it drops it either way. Security, reliability, and operational efficiency all converge on the same design: if the traffic isn't expected, it doesn't move.
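A minimal sketch of that posture, in Cisco-style configuration (the site names and prefixes here are invented for illustration): every route is a deliberate entry, and because there is no 0.0.0.0/0 anywhere, anything unmatched simply dies.

```
! Hub router: only routes someone deliberately entered.
! 10.10.0.0/16 = branch A, 10.20.0.0/16 = branch B (illustrative)
ip route 10.10.0.0 255.255.0.0 Serial0/0.101
ip route 10.20.0.0 255.255.0.0 Serial0/0.102
!
! No "ip route 0.0.0.0 0.0.0.0 ..." anywhere in the config.
! A packet for any other destination matches no route and is dropped.
```

The configuration is the policy: the absence of a line is itself a security control.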

The enterprises that made this choice thought carefully about what their networks should do, not just what they could do. Every path existed because someone decided it should.

If you needed internet access, you went through an edge proxy. The proxy was your gateway to the outside world. The network itself knew nothing about the internet—it only carried traffic between destinations it had been explicitly told about.

It worked. And then it stopped working.


Why We Added the Default Route

The internet got complicated. Bandwidth got cheap.

In the early days, proxying web traffic was straightforward. HTTP requests go out, HTML comes back. An edge proxy could handle that without breaking a sweat. But then new protocols started riding TCP and UDP that didn't play well with proxies.

FTP had been around forever, but it was a nightmare—the protocol casually opens a second connection on a different port, and your proxy has to understand that, track the state, and open the right holes. Voice over IP showed up expecting low-latency UDP flows, and proxies added delay those flows couldn't tolerate. Streaming protocols needed sustained throughput that proxy architectures weren't built for. STUN was literally invented as a hack to punch through NATs because real-time communications protocols couldn't tolerate middleboxes in the path.

Every year brought a new protocol that the proxy couldn't handle, or handled badly, or handled six months late after the vendor shipped a patch. The proxy was always behind. Always one protocol short of covering what the business needed.

So engineers did the pragmatic thing. They added a default route, pointed it at a stateful firewall, and said: let it all out, and we'll filter what we don't like. It was faster to deploy. It handled every protocol by default. And it moved the security problem from "build a proxy that understands everything" to "build a firewall rule that blocks the bad stuff."

The default route won. Not because it was better architecture—it won because the proxy lost the arms race.


Thirty Years of Compensation

Everything we've built since has been compensation for that decision.

Stateful firewalls. Next-gen firewalls. Intrusion detection. Intrusion prevention. URL filtering. Application-layer gateways. Deep packet inspection. SSL decryption engines. Data loss prevention. Every one of these technologies exists, at least in part, because we told the network "if you don't know where it goes, send it to the internet" and then spent three decades trying to make that less dangerous.

The default route is the most permissive policy in your network, and it's embedded in the routing table where nobody thinks to question it. It's not in a firewall rule, where it would get a quarterly review. It's not in a security policy document. It's in the infrastructure itself, treated as plumbing.

But it's not plumbing. It's a policy statement: any device on this network can attempt to reach any destination on the internet, and the network will carry that traffic to the edge. That's an extraordinarily generous starting position for a security architecture.


The Proxy Grew Up

The proxy won the rematch.

Security Service Edge—SSE—is the industry term for what happened. Analysts coined it, but the concept is straightforward: the security stack that used to live in your data center—your web gateway, your access broker, your zero trust enforcement—moved to the cloud and became a service.

The SSE platforms that exist today are not the proxy servers of the 90s. They're not a box in your data center running open-source software that chokes on anything that isn't HTTP. They're global cloud services with agents that run on endpoints, tunnels that connect to firewalls, and the ability to handle the breadth of protocols that killed the old proxy model.

The agent on a laptop intercepts traffic at the source. It enforces policy before the packet ever touches the network. Web traffic gets proxied through the cloud. Specific application traffic gets tunneled directly to the destination. The endpoint has a path to the internet that doesn't depend on the network providing one.

And on the firewall side, policy-based routing can send traffic to the SSE platform without advertising a default route. The firewall makes explicit decisions: this traffic class goes to the SSE cloud, that traffic class goes to an internal destination, everything else goes nowhere.
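On Cisco-style gear, that explicit steering can look like policy-based routing. This is a hedged sketch, not any vendor's reference design—the ACL name, addresses, and tunnel number are invented: matched web traffic is handed to a tunnel toward the SSE platform, and nothing falls through to a default route.

```
! Match outbound web traffic from the user segment (illustrative addresses)
ip access-list extended USER-WEB
 permit tcp 10.50.0.0 0.0.255.255 any eq 443
 permit tcp 10.50.0.0 0.0.255.255 any eq 80
!
route-map TO-SSE permit 10
 match ip address USER-WEB
 set interface Tunnel100   ! IPsec/GRE tunnel toward the SSE platform
!
interface GigabitEthernet0/1
 ip policy route-map TO-SSE
!
! Traffic matching no route-map entry and no explicit route goes nowhere.
```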

The problem that forced us to add the default route has been solved. The proxy can keep up now. So the question becomes: do we still need the thing we added when it couldn't?


What "Kill the Default Route" Actually Means

I'm not talking about deleting the default gateway on every workstation. The machine keeps its local networking. It has a default gateway pointing to a switch or a router. That's fine.

I'm talking about the core routing infrastructure. Your OSPF areas. Your BGP peers. Your SD-WAN fabric. Should those systems be advertising and carrying a default route? Should a router in the middle of your enterprise, when it receives a packet destined for an IP address it doesn't recognize, faithfully forward it toward the internet?

Or should it drop it?

The argument is that the core network should go back to what those frame relay networks did in the 90s: only carry traffic for destinations it explicitly knows about. The internet is not a destination the core network needs to know about. That's the SSE platform's job now. That's the endpoint agent's job. That's the firewall's job, via policy-based routing, for the devices that need specific external access.

The core network carries internal traffic. Period.
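Operationally, that means making sure nothing in the core originates or accepts a default route. A hedged Cisco-style sketch (the process numbers, peer address, and AS numbers are illustrative):

```
! OSPF: stop injecting a default route into the core
router ospf 1
 no default-information originate
!
! BGP: refuse a default route from any peer, accept everything else
ip prefix-list NO-DEFAULT seq 5 deny 0.0.0.0/0
ip prefix-list NO-DEFAULT seq 10 permit 0.0.0.0/0 le 32
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 prefix-list NO-DEFAULT in
```

The prefix list denies exactly 0.0.0.0/0 and permits every more-specific prefix, so the core learns only destinations it was explicitly told about.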


IoT Doesn't Get a Default Route. It Gets Permission.

An IoT device—a sensor, a camera, a building controller—doesn't need a default route. It needs to reach one, maybe two destinations. And many of these devices have a configuration most people overlook: a proxy setting. You point the device's proxy at a known address—an SSE platform, an edge firewall, whatever sits at your internet boundary—and the device sends its traffic there. No default route needed. No route injection. No dynamic DNS-to-route translation. The device talks to one place, and that place decides what gets through.

If the device supports proxy configuration, the problem is solved at the source. The network doesn't need to provide a path to the internet because the device already knows exactly where to send its traffic.

If the device doesn't support proxy configuration? Maybe it doesn't belong on your network.

If malware lands on that IoT device, there's no route to exfiltrate data. Not because a firewall rule blocks it. Because the routing table offers no path. The network itself is the security control.

That's a fundamentally different posture than "the device has a route to the internet but we hope the firewall catches the bad traffic."
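On the wire, one way to express that posture is a dedicated routing table—a VRF—for the IoT segment, containing the proxy and nothing else. A hedged sketch; the VRF name and all addresses are invented:

```
vrf definition IOT
 address-family ipv4
 exit-address-family
!
! The IoT segment itself
interface Vlan200
 vrf forwarding IOT
 ip address 10.80.0.1 255.255.255.0
!
! Uplink toward the firewall, in the same VRF
interface GigabitEthernet0/3
 vrf forwarding IOT
 ip address 10.80.1.1 255.255.255.252
!
! The only non-connected route: the proxy/SSE edge behind the firewall.
ip route vrf IOT 10.99.0.10 255.255.255.255 10.80.1.2
!
! No default route. Every other destination has no path at all.
```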


Voice Is Still the Hard Case

Real-time voice and video are still awkward for SSE platforms. Most of them bypass voice traffic rather than proxying it, because the latency penalty is too high.

Does voice need a default route?

A remote worker's softphone connects to an internet-facing voice edge at the data center. That's a known destination, a specific IP, a specific service. The SSE agent on the laptop excludes voice traffic from the tunnel and lets it connect directly. No default route involved—just a policy that says "voice traffic goes here."

An office worker's softphone does a DNS lookup, finds the internal voice system, and connects. All internal. No default route needed.

Even real-time voice, the traffic that broke proxies in the 90s and still gives them trouble today, doesn't justify a default route. It justifies a policy. A specific, intentional, auditable policy that says exactly where voice traffic goes. That's the opposite of "send it to the default route and hope for the best."


The File Drawer

I've been building networks for a long time. From the wire to the sky, as we used to say. And what I'm describing here isn't futurism. It's an old file drawer.

Somewhere in the institutional memory of enterprise networking is a design pattern that said: the network only carries traffic to places it knows about. If you want to reach something, someone has to build the path. That was expensive and operationally heavy in the 90s, so when the internet demanded more flexibility than proxies could offer, we dropped a default route in and moved on.

That was the right call at the time. The proxies couldn't keep up, and the business needed the internet.

But the proxies can keep up now. They're cloud-scale, they're on every endpoint, and they handle the protocol breadth that used to require a wide-open network path. Policy-based routing on modern firewalls gives you granular control over what goes where without advertising a default route. SD-WAN and intent-based networking make explicit routing manageable at a scale that would have been operationally crushing in the frame relay era.

The conditions that forced us to add the default route have changed. The question is whether anyone is going to notice and act on it.


The Question for the Industry

The SSE vendors—you know who they are—have built the tools that make a no-default-route enterprise viable. Their agents handle internet access at the endpoint. Their cloud platforms proxy and inspect traffic at scale. Their integration with firewalls supports policy-based routing that replaces the blunt instrument of a default route.

But are they telling their customers this? Is anyone in the SASE conversation, the zero trust conversation, the SSE conversation, asking whether the network itself should stop being so permissive? Or are we just layering cloud security on top of the same architecture that's had a default route since the 90s?

Zero trust says "never trust, always verify." But the default route in your core network says "trust that every packet deserves a path to the internet." Those two ideas can't coexist forever.

Somewhere in the old file drawers is the blueprint. We just need someone to pull it out, dust it off, and ask the question that should have been asked a long time ago:

We added the default route because we had to. We don't have to anymore.

no ip route 0.0.0.0/0

Your move.