Saturday, January 26. 2013
Barracuda Networks Confirms Exploitable Backdoors in its Appliances
Barracuda Networks has released firmware updates that remove SSH backdoors in a number of its products and resolve a vulnerability in Barracuda SSL VPN that allows attackers to bypass access restrictions to download potentially insecure files, set new admin passwords, or even shut down the device.
The backdoor accounts are present in all available versions of Barracuda Spam and Virus Firewall, Web Filter, Message Archiver, Web Application Firewall, Link Balancer, Load Balancer, and SSL VPN appliances.
"Our research has confirmed that an attacker with specific internal knowledge of the Barracuda appliances may be able to remotely log into a non-privileged account on the appliance from a small set of IP addresses. The vulnerabilities are the result of the default firewall configuration and default user accounts on the unit," Barracuda explained via a tech alert published on Wednesday.
As usual, emphasis is my own. This appears to be entirely due to factory-default settings and lazy administrators who do not change/disable such defaults.
Friday, February 3. 2012
I do not yet have all of the hardware and networking gear set up on my network; so this is merely a description of what I'm looking to do, to gather some ideas and feedback, and then figure out how to proceed. If, after reading the details below, you want to join, or have suggestions, please let me know!
History / Ideas
I've been thinking about starting a "nerd net" for quite some time. I have many friends that use a Linux/*BSD machine at their border, and typically have some kind of services running inside the network. I'd like to link these networks together and share access to services. This opens up all kinds of possibilities...
To avoid a single point of failure, and saturation of any one network's bandwidth, we would avoid linking in a hub-and-spoke fashion. Instead, I propose that we maintain a list of active nodes (essentially, each person's gateway box) and try to maintain at least 3 active VPN connections at a time. In other words, each node on the network would have a VPN tunnel to at least 3 other nodes. It would be very useful to have control of a DNS zone for maintaining this list. Each node would have its own A record; say, mynode.domain.com. Each time a node wants to connect to the network, it would request the A record for something like connect.domain.com, which would hand out A records of each of the registered nodes in a round-robin fashion.
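The round-robin discovery idea can be sketched in a few lines of Python. This simulates what the connect.domain.com record would do: each successive query sees the registered A records in a rotated order, and a joining node simply dials the first few answers. All node names and addresses here are invented for illustration.

```python
# Hypothetical node registry; names and addresses are made up for illustration.
NODES = {
    "alice.nerdnet.example": "203.0.113.10",
    "bob.nerdnet.example":   "198.51.100.22",
    "carol.nerdnet.example": "192.0.2.77",
    "dave.nerdnet.example":  "203.0.113.200",
}

def rotate(seq, offset):
    """Return seq rotated left by offset, like round-robin DNS ordering."""
    offset %= len(seq)
    return seq[offset:] + seq[:offset]

def pick_peers(query_count, want=3):
    """Simulate resolving connect.<zone>: each query sees the A records in
    a rotated order; a joining node dials the first `want` answers."""
    records = sorted(NODES.values())
    return rotate(records, query_count)[:want]

# Two successive "queries" see different orderings, spreading tunnels out
# across the registered nodes instead of piling onto the first one listed.
first = pick_peers(0)
second = pick_peers(1)
```

Real round-robin DNS gives you this rotation for free; the sketch just shows why it naturally spreads the 3+ tunnels across the membership.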
Authentication will be central in this; so some knowledge of SSL will be beneficial. I, and possibly a few key others, would maintain access to a CA signing key and the nodes would be authenticated to the network via certificates signed by that CA key. Any node that you attempt to connect to should trust you based on that certificate; and, based on the certificate presented, you should trust any connection attempt with a valid, signed certificate. I, and possibly others, can assist with any certificate-based configuration issues that you might have.
To allow for a private network that's going to be potentially changing topology on a regular basis, we'll need a routing protocol, such as RIPv2. For simplicity, each network would receive its own /24 of RFC1918 space, with the gateway box running the VPN software being the "node" on the network. Most likely, you'd want to set up split-tunneling on the gateway box so that any requests going to the private network route over the VPN and the rest of your traffic (web surfing, email, etc) goes out your normal internet connection.
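The split-tunnel decision described above is just longest-prefix-match routing: private destinations match a VPN route, everything else falls through to the default. Here's a minimal sketch using Python's stdlib ipaddress module; the interface names and prefixes are invented, not a real configuration.

```python
import ipaddress

# Hypothetical routing table for a gateway node: nerd-net prefixes go over
# the VPN interface, everything else takes the default route.
ROUTES = [
    (ipaddress.ip_network("172.16.10.0/24"), "tun0"),   # a peer's /24
    (ipaddress.ip_network("172.16.0.0/12"),  "tun0"),   # whole private range
    (ipaddress.ip_network("0.0.0.0/0"),      "eth0"),   # default: normal ISP
]

def next_hop_iface(dst):
    """Longest-prefix match, as the kernel's routing table would do."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in ROUTES if addr in net]
    # Most-specific (longest) prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

In practice this is just a couple of static routes on the gateway, but the sketch makes the behavior explicit: private-range traffic rides the tunnel, web surfing does not.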
With each node on the network being connected to at least 3 other nodes at any given time, we need to keep track of the various routes from one network to another. Instead of trying to track this by hand, we could easily set up RIPv2 and announce the routes we "know". I imagine using netblocks in the 172.16.0.0/12 range, with each network having a /24 and being multi-homed (connected to 3+ other nodes); there could potentially be several routes from one network to another. A light-weight, distance-vector routing protocol like RIPv2 seems to be a good fit: open-source implementations exist, and the protocol is simple and proven. The route to any network from yours would be the one with the fewest intermediate hops.
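The distance-vector selection RIPv2 performs is easy to sketch: each neighbor advertises destinations with a hop count, we add one hop to reach the neighbor, keep the lowest cost per destination, and treat 16 hops as RIP's "infinity" (unreachable). The topology below is invented purely to illustrate the rule.

```python
RIP_INFINITY = 16  # RIP's unreachable metric

def best_routes(advertisements):
    """advertisements: iterable of (neighbor, destination, metric) as heard
    on the wire. Returns {destination: (neighbor, cost)} with the extra hop
    to the advertising neighbor already added."""
    table = {}
    for neighbor, dest, metric in advertisements:
        cost = min(metric + 1, RIP_INFINITY)   # one hop to reach neighbor
        if cost >= RIP_INFINITY:
            continue                           # poisoned / unreachable route
        if dest not in table or cost < table[dest][1]:
            table[dest] = (neighbor, cost)
    return table

# Invented example: two neighbors advertise the same /24; the shorter path
# wins. A metric-15 advertisement becomes 16 after the extra hop and drops.
adverts = [
    ("node-a", "172.16.20.0/24", 2),
    ("node-b", "172.16.20.0/24", 1),
    ("node-c", "172.16.30.0/24", 15),
]
routes = best_routes(adverts)
```

Real ripd handles timers, triggered updates, and split horizon on top of this, but the core "fewest hops wins" logic is exactly this simple.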
Another, more complex, possibility would be using OSPF and OpenBSD's open-source implementation. Given that each node will have different bandwidth, this may be a good idea to try.
Services and other ideas
This would be the whole reason for the network!
I would be interested in providing several services to the network, but not publicly. For one, a Linux- or FreeBSD-based shell server with access to the internet, compilers/development tools, documentation, email (anyone remember pine+procmail?), etc. A (small-ish) public web space to let people know that you're part of the network; something like Apache's mod_userdir. A blog application accessible only from within the nerd-net.
I would also maintain the internal network's intranet site. This could be a site used to post updates of system maintenance, new services being offered/tested, and a way to maintain an up-to-date list of all of the nodes.
A private IRC server is definitely on the TODO list. Any and all bots would be allowed; if anyone would be interested in linking the IRC daemons, I'll likely be using Blackened or UltimateIRCd.
I could provide SMTP services for the network. If we find a zone to use, I can provide email services for that zone. Technically, unlimited user@zone email addresses. I would also be willing to host DNS services for the internal network; providing dynamic DNS and a "view" for those coming in via the network.
Another service I'd consider offering would be your own PostgreSQL database for development and testing. A big part of what I envision for this network is that it'll be a big collection of computer-savvy geeks; what better place to deploy and test code than on a network inhabited solely by those with the clue to help you in debugging, vuln testing, etc.
Other possibilities include shared-CPU time using tools like distcc(1). Maybe remote storage (NFS/iSCSI)?
The benefits of a private network are many, and extend beyond just the sharing of services. This is a chance to build a real-world, potentially large-scale network with changing topology. Any member wishing to add/configure new services or features is welcomed to; and the services could be advertised/listed on the internal network's intranet site.
Other benefits include the ability to policy-route certain traffic. For instance, I would be interested in routing my DNS traffic over the VPN to be routed out someone else's connection; it's low traffic, but my ISP mangles my DNS traffic to route to their servers regardless of what server I point to. Such configuration could even be set up where my DNS traffic goes out through a different VPN's node each time (i.e., a type of load-balancing).
Since the network is built as exactly what the name says, a Virtual Private Network, we should seek to keep it private. Configuring an Apache reverse-proxy, or some type of port-forward from your external IP address into the network, would be frowned upon. Any need to create such access for the outside should be discussed with the entire group, and hopefully some sort of consensus reached.
It would also be the responsibility of each node's owner to keep the machine secure and up-to-date. Any breach of one of the nodes would lead to an open route to the network.
I will create the CA key used to sign all certificates used to access the VPN; but, as I do not wish to be the sole decider in who is allowed on the network and who is not, I plan to create a handful of sub-CA certificates to be distributed to trusted associates who may also sign certificates for potential users. If you've been given a sub-CA certificate, you're being trusted to know who you're letting into our private little clubhouse. :) Using multiple sub-CA certificates will also allow for anyone to validate who provided access for this particular user/network; and the ability to revoke access as necessary (hoping that it wouldn't ever become necessary).
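The sub-CA scheme above boils down to chain validation: walk each certificate's issuer links up to the root, refuse anything revoked or unknown, and note the certificate directly beneath the root to see which sub-CA vouched for the node. Here's a toy model of that logic; every name is hypothetical and real deployments would use actual X.509 verification, not dictionaries.

```python
# Toy model of the proposed CA hierarchy: each certificate records its
# signer. All names are invented for illustration.
ISSUED_BY = {
    "root-ca":        None,           # self-signed trust anchor
    "subca-alice":    "root-ca",
    "subca-bob":      "root-ca",
    "node-gateway-1": "subca-alice",
    "node-gateway-2": "subca-bob",
}
REVOKED = {"subca-bob"}               # revoking a sub-CA cuts off its nodes

def validate(cert):
    """Return the sub-CA that sponsored cert, or None if the chain is
    broken or passes through a revoked certificate."""
    chain = []
    cur = cert
    while cur is not None:
        if cur in REVOKED or cur not in ISSUED_BY:
            return None
        chain.append(cur)
        cur = ISSUED_BY[cur]
    if chain[-1] != "root-ca":
        return None
    # The certificate directly under the root identifies the sponsoring
    # sub-CA (or the root itself, for certs the root signed directly).
    return chain[-2] if len(chain) > 1 else chain[-1]
```

Note how revoking one sub-CA transitively invalidates every node it sponsored, which is precisely the accountability property described above.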
A "nerd net" would be a fun project to gather a bunch of us geeks together and share services. This is the first time I'm really throwing the idea out there (beyond mentioning it to a few people here and there), so if anyone else thinks it would be fun, get ahold of me!
Monday, May 17. 2010
Secret Forum Reveals Oz Firewall Backroom Dealing
Circumvention legal, but you can't tell anyone how[.]
Emphasis is theirs.
Now say what? It will be legal to circumvent (technical details at the bottom), but illegal to explain to someone else how to perform this perfectly legal configuration. I wonder how this might affect a corporate or ISP helpdesk walking a user through VPN connectivity setup.
Australia’s plans for a firewall to protect its population from smut on the internet are rapidly evolving from farce to total chaos. Weekly revelations on bulletin boards suggest that Stephen Conroy, the man behind the big idea, does not know what forthcoming legislation on the topic will say, when it will be introduced or how the firewall will work in practice.
This time, emphasis is mine. I want to continue to point out how big of an asshat this particular Australian politician is. He is the "Minister for Broadband, Communications and the Digital Economy." He's the one that floated the idea of this nation-wide "firewall" (which is technically a proxy since it will be filtering at layer 7 - hence the technical problems) to "protect" citizens from illegal, immoral, or "dangerous" content. This is nearly the same thing the Chinese and Iranians are doing, just using layer 7 proxy devices instead of what's assumed to be basic layer 3 IP filtering of destination hosts. Skip to the very end of the post for the technical details behind this.
To say this whole thing began as a farce is hitting the nail right on the head.
Meanwhile, it turns out that the Minister’s own Department of Broadband, Communications and the Digital Economy (DBCDE) has been hosting a secret forum for discussions with ISPs likely to be affected by proposals. Along the way it floated the idea of making it a crime to advise surfers on how to do things that are perfectly legal to do. Confused? You will be.
First up is the time scale for plans to introduce the new firewall. As already reported, the question of when legislation will be introduced has now been bouncing between the offices of Prime Minister Kevin Rudd and Communications Minister Stephen Conroy. Severe wriggling from Conroy’s office suggests that plans for an early introduction of legislation have been put on the back burner for now.
Conroy wants to shelve the legislation until after the elections. He's technically incompetent, but he's smart enough to realize that this is going to be a screw-up of biblical proportions and it will likely cost him the election. It's "on the back burner for now," but it's by no means dead.
Meanwhile further digging inside this forum revealed that departmental officials appear to have been discussing the possibility of making it a criminal offen[s]e to advise individuals of means that would enable them to circumvent the filter – even where the means themselves were perfectly legal.
I would say that this equates to information being illegal. In a way, that's in the same league as banning books.
As the EFA suggests, this answer raises more issues than it addresses, and relies on the degradation of the Australian network being gradual, rather than catastrophic. It does appear, however, that the government has no plans to deal with a possible overload of its firewall bringing the Australian internet to its knees – beyond setting up a review when such an event actually happens.
Why should there be any degradation of bandwidth at all? I suspect that if this goes through, there's going to be a noticeable difference in download speeds and initial access to websites.
Circumvention of these filters will be trivial; you can wrap your request in SSL (such as https:// if the website supports it), use a VPN provider outside Australia, use Tor (which uses a technique known as Onion Routing), or even view blocked pages via the Google cache.
This filtering is to take place with proxies (at the Application layer) as opposed to the traditional large-scale deployments of firewalls (at the Network and Transport layers). The deeper you have to inspect a packet, the more CPU and memory required to process the filters. It costs - in many ways, from actual dollars for the hardware and software, to performance impact, to configuration complexity, to man-hours of maintenance - considerably more to filter at layer 7 with a proxy than at layers 3/4 with a firewall.
The one benefit to filtering at layer 7 is that you block only what is intended to be blocked. In today's world (where we've been running out of IPv4 space for a decade now) a lot of websites are configured using virtual hosts. This allows web hosting providers to host a virtually unlimited number of websites on a single IP address. Let's say there are two websites, both hosted on the same virtual host IP address, where one is banned and the other is not:
With a layer 7 proxy, when the user attempts to reach a website, the proxy intercepts the request, checks the request (including hostname and URI), and then either blocks the request, or requests the page on behalf of the end-user and returns her the requested webpage. So your mom can still access www.momsrecipes.co.au while nobody can access www.bannedwebsite.co.au. With a proxy, you can return HTML to the end-user explaining why access to this particular website is blocked and possibly a method of contact to dispute the denial of access.
(+) Finer-grained control of what's filtered
(+) Fewer "false positives"
(-) Expensive in many aspects (mentioned above)
(-) Complex configuration
(-) Considerable service impact due to use of DPI at the Application layer
(-) Slightly easier to circumvent; using https is the only circumvention measure mentioned that does not tend to work against the firewall approach - the rest work against both types
With a layer 3/4 firewall, access to the virtual host IP address (or even the subnet it's part of) will be blocked. When anyone tries to go to www.bannedwebsite.co.au, they are unable to, which is the intended result. They will get a different error; the browser will just report that the website was unreachable. End of explanation. If anyone tries to go to www.momsrecipes.co.au, they will also be denied with the same uninformative unreachable error. Since both websites are on the same IP address, the firewall has no way of knowing which website you're looking for, so it blocks everything.
(+) Cheaper to deploy
(+) Simpler configuration - hundreds of hosts/subnets vs. thousands of hostnames
(+) Can often be implemented on existing hardware - edge or core routers utilizing IP ACLs
(+) Faster, more responsive access to allowed websites; less service impact
(-) Collateral damage - legitimate sites on the same virtual host as a banned site are also blocked
(-) Slightly more difficult to circumvent (a website's https version will likely be in the same blocked subnet)
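The virtual-host collateral-damage argument above can be made concrete with a few lines of Python: a layer-7 proxy filters on the requested hostname, while a layer-3/4 filter only ever sees the destination IP that many hostnames share. The hostnames reuse the examples from the post; the addresses are invented.

```python
# Many hostnames can share one IP under virtual hosting; an IP filter
# cannot tell them apart. Addresses are invented for illustration.
VHOSTS = {
    "www.bannedwebsite.co.au": "203.0.113.50",
    "www.momsrecipes.co.au":   "203.0.113.50",  # same shared IP
    "www.unrelated.co.au":     "198.51.100.9",
}
BANNED_HOSTS = {"www.bannedwebsite.co.au"}
BANNED_IPS = {VHOSTS[h] for h in BANNED_HOSTS}

def proxy_allows(hostname):
    """Layer-7 proxy sees the requested hostname: only the banned site dies."""
    return hostname not in BANNED_HOSTS

def firewall_allows(hostname):
    """Layer-3/4 filter sees only the destination IP: everything on the
    shared address becomes collateral damage."""
    return VHOSTS[hostname] not in BANNED_IPS
```

Mom's recipes survive the proxy but not the firewall, which is the entire trade-off in two functions.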
Comparison with Other Instances of State-Controlled Internet Access:
I see three major differences in the Australian proposal as opposed to the other major regimes implementing state-wide filtering of websites (China and Iran). They are as follows:
Another side effect of this proposal, from an economic standpoint, is that it is likely to put smaller ISPs out of business. Instead of placing the burden on backbone providers, which have considerably more capital, it places a more expensive burden on ISPs with fewer resources at their disposal. If these filters become legally mandatory, a smaller provider may not have access to the resources (money, manpower, and know-how) to meet the requirements and will have to shut down operations.
There is one other somewhat commonly used filtering technique involving DNS. The ISP or corporate gateway will transparently route all DNS requests by the end-user to DNS servers under their control. The DNS servers will be configured as authoritative for the blocked domains; typically configured to return an IP address that connects you to a website telling you that your access is blocked and possibly why. This is similar to the Walled Garden approach.
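The DNS-based technique above can be sketched simply: the filtering resolver answers authoritatively for blocked zones, handing back a walled-garden address instead of the real record, and resolves everything else normally. Domains and addresses below are invented.

```python
# Sketch of DNS-based filtering: the ISP's resolver overrides blocked zones
# with a walled-garden address. Names and addresses are invented.
WALLED_GARDEN_IP = "192.0.2.1"
BLOCKED_ZONES = {"bannedwebsite.co.au"}
REAL_RECORDS = {
    "www.bannedwebsite.co.au": "203.0.113.50",
    "www.momsrecipes.co.au":   "203.0.113.50",
}

def resolve(name):
    """Return the A record the filtering resolver would hand back."""
    # A name is covered if it equals a blocked zone or sits beneath one.
    for zone in BLOCKED_ZONES:
        if name == zone or name.endswith("." + zone):
            return WALLED_GARDEN_IP
    return REAL_RECORDS.get(name)
```

Note that unlike IP blocking, this leaves the co-hosted site reachable, and it is trivially defeated by pointing your machine at any resolver the ISP doesn't intercept.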
Tuesday, December 8. 2009
Graphical Network Simulator 3
This simulator is absolutely awesome. It requires that you supply the Cisco IOS images, as it ships with a MIPS emulator and actually emulates a real Cisco router, switch, or PIX firewall. It's so realistic that you can design a network, configure the routers and switches, and then drop the running configurations onto real network gear.
It certainly helps to have plenty of RAM available to run this. 1 GB or more is almost a necessity.
Thursday, March 20. 2008
pfSense is (yet another) all-in-one router/firewall/VPN device. It's based on the m0n0wall firewall, which means it runs on FreeBSD and the entire system configuration is contained in one XML file. The entire rc process is written in PHP, making the XML parsing easy and also allowing for easy extensibility.
I've seen about a million of these all-in-one devices, but what sets this one apart for me is the GUI. This looks to be the simplest, yet most-powerful, all-inclusive web-based GUI I've seen on such a platform.
Here's a quick rundown of the features included:
SSL web interface
stateful packet filtering
PPPoE and PPTP support on the WAN interface
IPsec VPN tunnels (IKE; with support for hardware crypto cards and mobile clients)
PPTP VPN (with RADIUS server support)
caching DNS server
bridging firewall ("invisible" firewall)