As I make my way home through Hurricane Bawbag, I find myself thinking about the continued relevance of firewalls. For many years now, a perimeter firewall has been considered the absolute minimum level of protection required to defend networks from Internet-borne threats, so much so that they are even built into end users’ broadband routers. The ubiquity of the modern firewall is almost as broad as Internet connectivity itself (yes, there _are_ people out there who run without firewalls!). While overall this is a good thing, it’s becoming increasingly clear that firewalls are becoming less effective as a security tool, and may even be creating unnecessary work for network and system admins. It’s got to the stage where rulebases are so large and complex that some organisations need dedicated teams of people to look after their firewalls!
I would argue that, in general, a requirement for a team of people to look after a well-understood device type is an indication of failure, both in terms of ease of management and in terms of architecture. After all, we’re really trying to do something simple with firewalls: we want bad stuff kept out and good stuff allowed in. Unfortunately, because there is no easy way to determine which traffic is good and which is bad, we end up going one of two ways. Either we configure a nice, tight rulebase that’s a pain in the arse to manage any time something changes, or we don’t make the rulebase tight enough, with “permit ip any any” rules all over the place.
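To put some flesh on that, here’s a hypothetical Cisco-style ACL fragment showing both failure modes (the addresses are documentation ranges and the rule details are invented for illustration):

```
! Tight: every flow explicitly permitted, with a default deny.
! Manageable at ten rules, a nightmare at a thousand.
ip access-list extended INTERNET-IN
 permit tcp any host 192.0.2.10 eq www
 permit tcp any host 192.0.2.10 eq 443
 permit tcp host 198.51.100.5 host 192.0.2.20 eq 22
 deny ip any any log

! Lax: what tends to creep in once rule changes become painful.
ip access-list extended INTERNET-IN-LAX
 permit ip any any
```

The first version is secure but brittle; the second is easy to live with right up until it isn’t.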
So, what’s the solution? Maybe we can use those new-fangled next-gen firewalls to make decisions not only on IP address and port number but also on the traffic being carried in the TCP or UDP segment? That is essentially integrating IPS into the firewall. Great idea – it’s a logical step to allow the firewall to make more intelligent decisions. The trouble is that most IPS implementations are crap. Even when properly tuned, they make loads of mistakes and generate a massive number of alerts. In any case, implementing IPS in a firewall chassis normally has a massive impact on firewall throughput – not ideal for Internet edge devices that are already limited in that respect. Basically, what you end up with is a rate-limited firewall with crap IPS capabilities. No, if you’re going to deploy IPS, do it properly – you’ll thank me in the long run.
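For context, this is roughly what a signature-based IPS is working with – a hypothetical Snort-style rule (the content string and SID are invented):

```
alert tcp any any -> $HTTP_SERVERS 80 ( \
    msg:"Possible SQL injection attempt"; \
    flow:to_server,established; \
    content:"UNION SELECT"; nocase; \
    classtype:web-application-attack; \
    sid:1000001; rev:1; )
```

A naive string match like that will fire on plenty of perfectly legitimate traffic – any page that happens to contain the phrase – which is exactly why an untuned IPS buries you in alerts.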
How about integrating AV/AS into firewalls? Well, again, this helps a bit, but it’s still based on signatures and will therefore generate a high number of false positives. And checking traffic for worms and other malware is just as resource intensive as running IPS, if not more so.
I think the key thing is to consider what role the perimeter firewall is actually carrying out. Essentially, firewalls drop a load of “known-bad” traffic to stop it getting into the internal network, but they don’t do so well at actually looking inside allowed traffic to see whether or not there’s any risk associated with letting it through to the DMZ. Firewall vendors will _claim_ that their products do deep packet inspection, but you’d better hope they’re not doing too much of it, or your firewall is going to spend too much time analysing allowed traffic and not enough time switching packets. Fundamentally, most enterprises will have to allow at least HTTP and HTTPS through their perimeter. HTTPS in particular is a problem, because that traffic can’t be inspected unless the HTTPS is intercepted on the firewall – essentially an authorised man-in-the-middle attack. By accepting that we can’t expect the firewall to do much more than act as a relatively crude packet filter, we can move on to designing the Internet architecture to take advantage of other security tools to protect the environment. By using a more layered approach to network security, we can also relax a bit more about our firewall rules and make them more manageable.
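As a sketch of what that division of labour might look like: keep the edge filter crude, and terminate HTTPS on a reverse proxy in the DMZ, where the decrypted traffic can actually be inspected. The hostname, certificate paths, and backend address below are all invented:

```
# DMZ reverse proxy (nginx): terminate TLS here so the traffic
# behind it can be inspected (WAF, IPS, logging) in the clear.
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/www.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/www.example.com.key;

    location / {
        # Plain HTTP to the internal app tier; the inspection
        # tools sit on this path, not on the perimeter firewall.
        proxy_pass http://10.0.10.20:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The firewall’s job shrinks to “TCP/443 to the proxy, nothing else”, and the rulebase stays small.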
What we need is a simple metric to decide whether traffic is “good” and should be allowed in, or “bad” and should be dropped. There’s a lot to be said here for the concept of authenticating traffic, so that a packet isn’t automatically trusted just because its source address looks right – source addresses can be spoofed, after all. We might eventually get to the stage where some sort of heuristic analysis of traffic is realistic in real time, and that would be a very cool way of deciding whether traffic should be allowed through or not. Don’t hold your breath, though.
I’m not saying that firewalls are totally obsolete (although I do think they won’t be around for all that much longer, at least in their current form). As the need for NAT disappears with the rollout of IPv6, there will be less need for dedicated firewall appliances, which means that most firewall tasks could be carried out by the edge router – install a broad security policy on the edge router and use other security tools inside to filter the traffic properly. We may see some more multi-role appliances integrated into the Internet perimeter, but to be honest, I’ve never been massively convinced that the UTM (Unified Threat Management) approach was all that good. Maybe that was down to poor integration and execution, but I’d still prefer my WAF to be separate from my IPS, not least because I don’t necessarily want the IPS, WAF, and DBF to be inspecting the same traffic.
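Something like this hypothetical IOS-style IPv6 ACL is the kind of broad edge policy I have in mind (documentation prefixes, invented details):

```
! Broad edge policy: keep the obvious noise out and let the
! security tools inside do the fine-grained filtering.
ipv6 access-list EDGE-IN
 permit tcp any host 2001:DB8::10 eq www
 permit tcp any host 2001:DB8::10 eq 443
 permit icmp any any echo-reply
 permit icmp any any packet-too-big
 deny ipv6 any any log
```

Nothing clever going on there, and that’s rather the point: the router keeps the noise out, and the WAF, IPS, and friends inside do the real work.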
And while we’re on this topic, get all this stuff virtualised – I’m sick of rack-mounting entire server cabinets full of physical security appliances!