Saturday, January 26. 2013
Belkin Helps Cisco Exit Consumer Space by Acquiring its Home Networking Division, Including Linksys
Belkin on Thursday announced plans to acquire Cisco’s Home Networking Business Unit, including its products, technology, employees, and even the well-known Linksys brand. Belkin says it plans to maintain the Linksys brand and will offer support for Linksys products as part of the transaction, financial details for which were not disclosed.
Anyone who has spent any time designing, maintaining, securing, or even implementing an Enterprise-grade network already knows that Cisco is the de facto standard. Cisco has ruled the Enterprise-grade market for decades, and will continue to do so into the foreseeable future. What hasn't been so obvious is where/how Cisco will benefit from its recent (the past, say, 5 - 10 years) interest in the consumer market.
Entering the consumer-grade market was their first mistake.
Cisco, its R&D "group" long consisting of a can of spray-paint and a bridge stencil, decided to buy its way into the consumer market through the purchase of Linksys. Instead of doing its own R&D, creating its own consumer-grade hardware and software, and entering the market with nothing more than its "good name" (the Cisco brand), they chose to use some of their huge cache of cash (pun intended), the result of corporate hoarding as is the norm among the Fortune 500 these days, to buy themselves some assurance of success upon entering the consumer market.
Their entry into the consumer-grade market with re-branded equipment was their second mistake. The Linksys name has always been synonymous with "piece of shit" to anyone familiar enough to properly design and implement a network starting with the Visio diagram and ending with a fully-functional network passing traffic. Cisco first had to change the market perception that Linksys was most definitely sub-par equipment. They made modest attempts at this with the packaging of their gear with the Cisco logo, references to "Linksys powered by Cisco", etc. Some credit must be given to Cisco for at least purchasing the best of the consumer-grade brands on the market. I'm at a loss to think of any other company they could have bought, with an established market presence, at any cost, that would have given them a better starting-point (in reference to quality and technology) than Linksys.
One can only assume that Cisco's entry into the consumer space was an attempt to "own the stack," as Oracle successfully did with its acquisition of Sun Microsystems a few years back. With Oracle now owning Sun - its hardware, operating system, Java software, and all of its intellectual property - they (Oracle) can, and do, provide every piece of the puzzle when deploying a wide array of systems and services. They can provide the hardware (servers, network-based storage, clustering-oriented network gear, etc.), the software (Solaris, ZFS, Oracle Application Server, Oracle database, etc.), licensing, and support from the absolute top to the bottom of the "stack." In Cisco's case, being that it's related to networking as opposed to, say, services-based web applications, there is much less benefit to be derived from "owning the stack," and it is much, much more difficult to position a company as such in the networking world.
Cisco's third mistake was in over-estimating the benefit of "owning the stack," and under-estimating the cost of doing so. Largely because there are so many standards (IEEE, RFCs, etc.) that a piece of networking gear must adhere to, there are many other competitors in any particular market segment. Oracle doesn't have to provide open APIs and documentation for most of its products. It's often to their benefit to do so, but not strictly required. With Cisco, and networking equipment in general, the standards (such as TCP/IP) are "open" standards and visible to anyone. Let's say that Cisco did manage to replace every other consumer-grade NAT router on every at-home desktop with their own Linksys-derived gear; what, really, has it gained them? Sales of equipment? Sure. The level of vendor lock-in that Oracle hopes to achieve with its Sun-derived gear? Not even close.
Cisco's upper management should well have known that the money they'd make, even with a successful entrance into the consumer-grade market, from selling routers and switches to "mom and pop" would be minimal, at best. That's assuming that income from sales outpaces the cost of supporting the equipment. Cisco has long made a good portion of its money on support contracts from larger corporations who will not purchase from an OEM that can't or won't provide that engineering level of support when necessary. Home and small-business consumers cannot afford that level of support. That type of customer is used to free, or nearly free, support provided by the vendor.
The last thing that I'll add to this has to do with security. I realize that Cisco has been looking to dump Linksys for some time (IOW, the only part of this article that is news is that they have found their buyer), but I can't help but wonder if the recently-disclosed vulnerability in all models of Linksys gear didn't cause Cisco to go ahead and accept some concessions on their part to hurry up and get rid of Linksys. Stated differently, I think that the vulnerability might have caused Cisco to, for instance, accept a lower selling price to go ahead and rid themselves of the whole Linksys debacle sooner rather than later.
REFERENCE: Cisco Linksys Remote Preauth 0day Root Exploit
Saturday, January 26. 2013
Barracuda Networks Confirms Exploitable Backdoors in its Appliances
Barracuda Networks has released firmware updates that remove SSH backdoors in a number of their products and resolve a vulnerability in Barracuda SSL VPN that allows attackers to bypass access restrictions to download potentially insecure files, set new admin passwords, or even shut down the device.
The backdoor accounts are present in all available versions of Barracuda Spam and Virus Firewall, Web Filter, Message Archiver, Web Application Firewall, Link Balancer, Load Balancer, and SSL VPN appliances.
"Our research has confirmed that an attacker with specific internal knowledge of the Barracuda appliances may be able to remotely log into a non-priveleged account on the appliance from a small set of IP addresses. The vulnerabilities are the result of the default firewall configuration and default user accounts on the unit," Barracuda explained via a tech alert published on Wednesday.
As usual, emphasis is my own. This appears to be entirely due to factory-default settings and lazy administrators who do not change/disable such defaults.
Monday, July 30. 2012
U.S. Census Bureau Offers Public API for Data Apps
For any software developers with an urge to play around with demographic or socio-economic data: the U.S. Census Bureau has launched an API for Web and mobile apps that can slice that statistical information in all sorts of nifty ways.
The API draws data from two sets: the 2010 Census (statistics include population, age, sex, and race) and the 2006-2010 American Community Survey (offers information on education, income, occupation, commuting, and more). In theory, developers could use those datasets to analyze housing prices for a particular neighborhood, or gain insights into a city’s employment cycles.
The APIs include no information that could identify an individual.
For those itchin' to get to the APIs, click the link. There's also an "app gallery."
I, for one, am looking forward to the unique and useful ways in which this data will tell us new things about ourselves.
Friday, February 3. 2012
I do not yet have all of the hardware and networking gear set up on my network; so this is merely a description of what I'm looking to do, to gather some ideas and feedback, and then figure out how to proceed. If, after reading the details below, you want to join, or have suggestions, please let me know!
History / Ideas
I've been thinking about starting a "nerd net" for quite some time. I have many friends that use a Linux/*BSD machine at their border, and typically have some kind of services running inside the network. I'd like to link these networks together and share access to services. This opens up all kinds of possibilities...
To avoid a single point of failure, and saturation of any one network's bandwidth, we would avoid linking in a hub-and-spoke fashion. Instead, I propose that we maintain a list of active nodes (essentially, each person's gateway box) and try to maintain at least 3 active VPN connections at a time. In other words, each node on the network would have a VPN tunnel to at least 3 other nodes. It would be very useful to have control of a DNS zone for maintaining this list. Each node would have its own A record; say, mynode.domain.com. Each time a node wants to connect to the network, it would request the A record for something like connect.domain.com, which would hand out A records of each of the registered nodes in a round-robin fashion.
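As a rough sketch of the lookup side (the zone name and addresses below are made up for illustration), a joining node could grab the full peer list from the round-robin record and pick a few peers to dial:

# Hypothetical zone data:
#   node1.nerdnet.example.org.   IN A 198.51.100.10
#   node2.nerdnet.example.org.   IN A 203.0.113.25
#   connect.nerdnet.example.org. IN A 198.51.100.10
#   connect.nerdnet.example.org. IN A 203.0.113.25
# A joining node pulls the current list and picks 3 peers to connect to:
dig +short A connect.nerdnet.example.org | shuf -n 3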
Authentication will be central in this; so some knowledge of SSL will be beneficial. I, and possibly a few key others, would maintain access to a CA signing key and the nodes would be authenticated to the network via certificates signed by that CA key. Any node that you attempt to connect to should trust you based on that certificate; and, based on the certificate presented, you should trust any connection attempt with a valid, signed certificate. I, and possibly others, can assist with any certificate-based configuration issues that you might have.
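For the curious, the signing workflow would look roughly like the OpenSSL commands below; the file names, CNs, key sizes, and lifetimes are just placeholders for illustration:

# One-time: create the CA key and a self-signed CA certificate
openssl genrsa -out nerdnet-ca.key 4096
openssl req -new -x509 -days 3650 -key nerdnet-ca.key -subj "/CN=NerdNet CA" -out nerdnet-ca.crt
# Per node: the node generates its own key and CSR, and a CA holder signs it
openssl genrsa -out mynode.key 2048
openssl req -new -key mynode.key -subj "/CN=mynode.nerdnet.example.org" -out mynode.csr
openssl x509 -req -days 365 -in mynode.csr -CA nerdnet-ca.crt -CAkey nerdnet-ca.key -CAcreateserial -out mynode.crt
# Every node verifies its peers against the shared CA certificate
openssl verify -CAfile nerdnet-ca.crt mynode.crt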
To allow for a private network that's going to be potentially changing topology on a regular basis, we'll need a routing protocol, such as RIPv2. For simplicity, each network would receive its own /24 of RFC1918 space, with the gateway box running the VPN software being the "node" on the network. Most likely, you'd want to set up split-tunneling on the gateway box so that any requests going to the private network route over the VPN and the rest of your traffic (web surfing, email, etc) goes out your normal internet connection.
With each node on the network connected to at least 3 other nodes at any given time, we need to keep track of the various routes from one network to another. Instead of trying to keep track of this by hand, we could easily set up RIPv2 and announce the routes we "know". I imagine using netblocks in the 172.16.0.0/12 range, with each network having a /24 and being multi-homed (connected to 3+ other nodes), so there could potentially be several routes from one network to another. A light-weight, distance-vector routing protocol like RIPv2 seems to be a good fit: open-source implementations exist, and it's simple and proven. The route to any network from yours would be the one with the fewest intermediate hops.
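As a sketch of what that might look like with Quagga's ripd (the /24 and the tunnel interface names below are only examples):

# Hypothetical /etc/quagga/ripd.conf fragment: announce this node's own /24
# and speak RIPv2 over each VPN tunnel interface
cat >> /etc/quagga/ripd.conf <<'EOF'
router rip
 version 2
 network 172.16.42.0/24
 network tun0
 network tun1
 network tun2
EOF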
Another, more complex, possibility would be using OSPF and OpenBSD's open-source implementation. Given that each node will have different bandwidth, this may be a good idea to try.
Services and other ideas
This would be the whole reason for the network!
I would be interested in providing several services to the network, but not publicly. For one, a Linux- or FreeBSD-based shell server with access to the internet, compilers/development tools, documentation, email (anyone remember pine+procmail?), etc. A (small-ish) public web space to let people know that you're part of the network; something like Apache's mod_userdir. A blog application accessible only from within the nerd-net.
I would also maintain the internal network's intranet site. This could be a site used to post updates of system maintenance, new services being offered/tested, and a way to maintain an up-to-date list of all of the nodes.
A private IRC server is definitely on the TODO list. Any and all bots would be allowed; if anyone would be interested in linking the IRC daemons, I'll likely be using Blackened or UltimateIRCd.
I could provide SMTP services for the network. If we find a zone to use, I can provide email services for that zone. Technically, unlimited user@zone email addresses. I would also be willing to host DNS services for the internal network; providing dynamic DNS and a "view" for those coming in via the network.
Another service I'd consider offering would be your own PostgreSQL database for development and testing. A big part of what I envision for this network is that it'll be a big collection of computer-savvy geeks; what better place to deploy and test code than on a network inhabited solely by those with the clue to help you in debugging, vuln testing, etc.
Other possibilities include shared-CPU time using tools like distcc(1). Maybe remote storage (NFS/iSCSI)?
The benefits of a private network are many, and extend beyond just the sharing of services. This is a chance to build a real-world, potentially large-scale network with changing topology. Any member wishing to add/configure new services or features is welcome to; and the services could be advertised/listed on the internal network's intranet site.
Other benefits include the ability to policy-route certain traffic. For instance, I would be interested in routing my DNS traffic over the VPN to be routed out someone else's connection; it's low traffic, but my ISP mangles my DNS traffic to route to their servers regardless of what server I point to. Such a configuration could even be set up so that my DNS traffic goes out through a different VPN node each time (i.e., a type of load-balancing).
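On a Linux gateway, one possible way to do this is fwmark-based policy routing; the mark value, table number, and tun0 interface below are arbitrary examples:

# Mark outbound DNS packets...
iptables -t mangle -A OUTPUT -p udp --dport 53 -j MARK --set-mark 0x35
iptables -t mangle -A OUTPUT -p tcp --dport 53 -j MARK --set-mark 0x35
# ...and send marked traffic through a routing table that defaults to the VPN tunnel
ip rule add fwmark 0x35 table 100
ip route add default dev tun0 table 100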
Since the network uses a VPN to set up exactly what the name implies - a Virtual Private Network - we should seek to keep the network private. Configuring an Apache reverse-proxy, or some type of port-forward from your external IP address into the network, would be frowned upon. Any need to create such access for the outside should be discussed with the entire group, and hopefully some sort of consensus reached.
It would also be the responsibility of each node's owner to keep the machine secure and up-to-date. Any breach of one of the nodes would lead to an open route to the network.
I will create the CA key used to sign all certificates used to access the VPN; but, as I do not wish to be the sole decider in who is allowed on the network and who is not, I plan to create a handful of sub-CA certificates to be distributed to trusted associates who may also sign certificates for potential users. If you've been given a sub-CA certificate, you're being trusted to know who you're letting into our private little clubhouse. :) Using multiple sub-CA certificates will also allow for anyone to validate who provided access for this particular user/network; and the ability to revoke access as necessary (hoping that it wouldn't ever become necessary).
A "nerd net" would be a fun project to gather a bunch of us geeks together and share services. This is the first time I'm really throwing the idea out there (beyond mentioning it to a few people here and there), so if anyone else thinks it would be fun, get ahold of me!
Friday, September 16. 2011
PostgreSQL 9.1 Released
Among the many new features, here's a snippet from the News page:
Advancing the State of the Art
Our community of contributors innovates with cutting-edge features. Version 9.1 includes several which are new to the database industry, such as:
SE-Postgres looks to be particularly interesting: it allows you to use SELinux Mandatory Access Controls on PostgreSQL users and data. Neat. Particularly the SECURITY LABEL command.
Beware, however, that there are some incompatibilities with previous releases!
From the 9.1 Release Notes:
By default, backslashes are now ordinary characters in string literals, not escape characters. This change removes a long-standing incompatibility with the SQL standard. escape_string_warning has produced warnings about this usage for years. E'' strings are the proper way to embed backslash escapes in strings and are unaffected by this change.
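If you want to see the difference for yourself, something like the following against a 9.1 instance should do it (assuming a local database you can reach with psql):

psql -c "SHOW standard_conforming_strings;"                      # 'on' by default in 9.1
psql -c "SELECT 'one\ntwo' AS plain, E'one\ntwo' AS escaped;"    # literal backslash vs. real newline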
The Release Notes also mention the addition of synchronous replication.
PostgreSQL streaming replication is asynchronous by default. If the primary server crashes then some transactions that were committed may not have been replicated to the standby server, causing data loss. The amount of data loss is proportional to the replication delay at the time of failover.
Synchronous replication offers the ability to confirm that all changes made by a transaction have been transferred to one synchronous standby server. This extends the standard level of durability offered by a transaction commit. This level of protection is referred to as 2-safe replication in computer science theory.
When requesting synchronous replication, each commit of a write transaction will wait until confirmation is received that the commit has been written to the transaction log on disk of both the primary and standby server. The only possibility that data can be lost is if both the primary and the standby suffer crashes at the same time. [...] ...it also necessarily increases the response time for the requesting transaction. The minimum wait time is the roundtrip time between primary to standby.
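For reference, turning this on in 9.1 mostly comes down to a couple of settings; the host, user, and standby names below are placeholders:

# On the primary (postgresql.conf): name the standby whose confirmation commits must wait for
#   synchronous_standby_names = 'standby1'
#   synchronous_commit = on    (the default; can be relaxed per transaction)
# On the standby (recovery.conf): connect back using the matching application_name
#   primary_conninfo = 'host=primary.example.org user=replicator application_name=standby1'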
There's also a What's New in PostgreSQL 9.1 wiki page that explains many of these new features in detail.
Friday, September 16. 2011
CRTC Tells Rogers to Stop Slowing Down the Speed of Online Games
Canada's telecommunications regulator on Friday gave Rogers Communications Inc., mere days to come up with a plan to solve a problem that could be unfairly slowing down the speed of online games.
Rogers now has until Sept. 27 to present a plan to the regulator to deal with the issue.
While Internet service providers have said they need to manage online traffic to deal with network congestion during peak hours, the CRTC has instituted a policy stipulating that the noticeable degradation of time-sensitive Internet traffic requires prior commission approval under Canada's Telecommunications Act.
Emphasis is my own. This is my entire point:
If they're having bandwidth issues during peak usage, then they are over-subscribing their bandwidth (and/or maxing out the capabilities of their network infrastructure) and customers notice. Customers also tend to vote with their dollars.
Sunday, August 28. 2011
A good place to start is here. And here you can find several example programs with accompanying source code.
Here is the link to the documentation, which looks to be pretty decent.
Saturday, August 27. 2011
Coordinated ATM Heist Nets Thieves $13M
An international cybercrime gang stole $13 million from a Florida-based financial institution earlier this year, by executing a highly-coordinated heist in which thieves used ATMs around the globe to cash out stolen prepaid debit cards, KrebsOnSecurity has learned.
Jacksonville based Fidelity National Information Services Inc. (FIS) bills itself as the world’s largest processor of prepaid debit cards; FIS claims to process more than 775 million transactions annually. The company disclosed the breach in its first quarter earnings statement issued May 3, 2011. But details of the attack remained shrouded in secrecy as the FBI and forensic investigators probed one of the biggest and most complex banking heists of its kind.
Ed. Note: Emphasis and links are from original article.
This sounds a lot like the attack on RBS WorldPay back in 2008. What I can't seem to figure out is how they increased/removed the daily withdrawal limits.
FIS said it had incurred a loss of approximately $13 million related to unauthorized activities involving one client and 22 prepaid cards on its Sunrise, Fla. based eFunds Prepaid Solutions, formerly WildCard Systems Inc., which was acquired by FIS in 2007.
FIS stated: “The Company has identified that 7,170 prepaid accounts may have been at risk and that three individual cardholders’ non-public information may have been disclosed as a result of the unauthorized activities. FIS worked with the impacted clients to take appropriate action, including blocking and reissuing cards for the affected accounts. The Company has taken steps to further enhance security and continues to work with Federal law enforcement officials on this matter.” The disclosure was scarcely noted by news media.
So, approximately $13,000,000 on 22 cards. That works out to over $590,000 per card.
Apparently, the crooks were able to drastically increase or eliminate the withdrawal limits for 22 prepaid cards that they had obtained. The fraudsters then cloned the prepaid cards, and distributed them to co-conspirators in several major cities across Europe, Russia and Ukraine.
Sources say the thieves waited until the close of business in the United States on Saturday, March 5, 2011, to launch their attack. Working into Sunday evening, conspirators in Greece, Russia, Spain, Sweden, Ukraine and the United Kingdom used the cloned cards to withdraw cash from dozens of ATMs. Armed with unauthorized access to FIS’s card platform, the crooks were able to reload the cards remotely when the cash withdrawals brought their balances close to zero.
This explains how they were able to pull this off in such a short amount of time. With cloned cards in at least 6 different countries, the totals could add up rather quickly. It appears that they obtained the cash over about 36 hours.
$13,000,000 / 36 hours = $361,111/hour
It’s still not clear who was responsible for this attack on FIS. The company declined comment. The FBI would neither confirm nor deny that it is investigating. But the breach is eerily similar to an intricate 2008 attack against RBS WorldPay, an Atlanta-based unit of the Royal Bank of Scotland. In that heist, crooks obtained remote access to RBS’s systems and used 44 counterfeit prepaid cards to withdraw more than $9 million from at least 2,100 ATM terminals in 280 cities worldwide. The attack was so sophisticated and alarming that President Obama referred to it in a landmark cybersecurity speech.
Considering that these "cloned" cards have to be inserted into an ATM to obtain cash, I would figure that there are ATM images of the (guilty) people withdrawing money. It's likely only a matter of time before some/all of these guys are identified.
Tuesday, August 16. 2011
How Linux mastered Wall Street
When it comes to the fast-moving business of trading stocks, bonds and derivatives, the world's financial exchanges are finding an ally in Linux, at least according to one Linux kernel developer working in that industry.
As an alternative to traditional Unix, Linux has become a dominant player in finance, thanks to the operating-system kernel's ability to pass messages very quickly, Lameter said in an interview with IDG. In fact, the emerging field of high-frequency trading (HFT) would not be possible without the open-source operating system, he argued. Lameter himself was hired as a consultant by one exchange -- he won't say which one -- based on his work in assembling large-scale Linux clusters.
An interesting read regarding the use of Linux in high-frequency trading applications.
Thursday, July 28. 2011
Survey: Most Enterprises Will be on IPv6 by 2013
IT professionals overwhelmingly say they want their companies to be leaders not laggards in IPv6 adoption, Network World survey finds.
Warning: This article contains a slideshow.
Otherwise, it's pretty interesting to see how important IPv6 adoption has become in the networking world now that it's starting to sink in that IPv4 space is quickly dwindling. See here. The last /8s were allocated in early February 2011.
Thursday, July 28. 2011
Tunneling nmap through Tor
I looked at how to reduce your exposure using Tor earlier in the week. We installed Tor and Privoxy and configured our system to browse the Internet anonymously. We can use Tor and another great program called proxychains to Torify our network scans with nmap.
I checked this out and it seems to work well. Since Tor doesn't seem to carry UDP traffic (other than DNS) or ICMP traffic, you're limited to TCP scans alone, which isn't too much of a limitation.
It is a little slow getting your scan results back - especially if you don't pass -p<port1>,<port2> to nmap(1), but it's certainly more secure than just trying to use an open proxy server out on the 'net.
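For reference, the basic invocation looks something like this; the target host and port list are placeholders, and /etc/proxychains.conf needs to point at Tor's local SOCKS port (127.0.0.1:9050 by default):

# Tor only carries TCP, so use a full connect() scan, skip the ping probes,
# and let proxychains/Tor handle name resolution instead of nmap
proxychains nmap -sT -Pn -n -p 22,80,443 target.example.com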
All in all, a neat trick.
Wednesday, May 4. 2011
AppleCrate II: A New Apple II-Based Parallel Computer
The AppleCrate II is made from 17 Enhanced Apple //e main boards. (Fifteen of these boards were obtained in the same eBay auction that netted the eight unenhanced boards for the original AppleCrate.) Because they are enhanced ROMs, the original NadaNet boot ROM code would not fit and a new boot protocol had to be developed, as described below.
Instead of mounting the cards vertically in a frame, as in the original, I decided to mount them horizontally in a stack secured with standoffs—3/4" long hexagonal rods, each with a screw protruding from one end and a tapped hole in the other. The AppleCrate II has nine "columns" of these standoffs—six metal columns at the back and corners of the boards and three nylon columns interior to the boards to add stiffness[...]
This has got to be one of the coolest hacks I've seen in a very long time. I will always have a fondness for the Apple II series; not least because it's based on the MOS Technology 6502 microprocessor.
It's well worth hitting the link just to see a picture of this rig. It's really cool looking.
The boards are powered by a PC AT power supply. The average power consumed by an Apple //e board is about 4.2 watts, so the whole 17-board crate consumes only about 70 watts in total, and both the AppleCrate and the power supply run only a few degrees above ambient temperature.
Tuesday, April 5. 2011
There are several kernel hardening features that have appeared in other hardened operating systems that would improve the security of Ubuntu, and Linux in general. They have been controversial, so this page attempts to describe them, including their controversy and discussion over the years, so as much information is available to make an educated decision about potential implementations.
This page describes various means of securing the Linux kernel on Ubuntu Linux systems. This includes symlink protection, ptrace protection, and NX emulation, among others.
Thursday, February 10. 2011
Two /8s allocated to APNIC from IANA
APNIC received the following IPv4 address blocks from IANA in February 2011 and will be making allocations from these ranges in the near future:
The allocation of these blocks left IANA with 5 /8 blocks remaining, which triggers a clause saying that when the pool gets down to 5 remaining blocks, each of the 5 *NICs gets one of the remaining blocks. It's just unfortunate that 2 of the last 6 or 7 blocks are completely wasted by being routed to what essentially amounts to an Internet cesspool (APNIC). I regularly pull down the IANA assignments, parse out the netblocks assigned to APNIC, and then null-route them all.
Note: The aforementioned clause is stated in the "Global policy for the allocation of the remaining IPv4 address space."
Here's a ticker, from Hurricane Electric, that estimates the eventual exhaustion of IPv4 addresses from the regional registries.
You might take this time to register for a block of IPv6 addresses - It's free!
Sunday, January 16. 2011
Common Threads: OpenSSH Key Management, part One
Many of us use the excellent OpenSSH [...] as a secure, encrypted replacement for the venerable telnet and rsh commands. One of OpenSSH's more intriguing features is its ability to authenticate users using the RSA and DSA authentication protocols, which are based on a pair of complementary numerical keys. As one of its main appeals, RSA and DSA authentication promise the capability of establishing connections to remote systems without supplying a password. While this is appealing, new OpenSSH users often configure RSA/DSA the quick and dirty way, resulting in passwordless logins, but opening up a big security hole in the process.
Yet another of the DeveloperWorks! series. I love these articles.
The vulnerability in question has to do with pub-key (RSA/DSA) authentication and leaving a null/blank passphrase on the keypair. This article describes how to configure ssh-agent to cache the decrypted private keys so you only have to type the passphrase once per session. This has the benefit of allowing you to use scripted SSH logins without being prompted for a password, but also means that the keypair is still relatively secure even if someone else manages to grab it off the filesystem.
OpenSSH's RSA and DSA authentication protocols are based on a pair of specially generated cryptographic keys, called the private key and the public key. The advantage of using these key-based authentication systems is that in many cases, it's possible to establish secure connections without having to manually type in a password.
While the key-based authentication protocols are relatively secure, problems arise when users take certain shortcuts in the name of convenience, without fully understanding their security implications. In this article, we'll take a good look at how to correctly use RSA and DSA authentication protocols without exposing ourselves to any unnecessary security risks. In my next article, I'll show you how to use ssh-agent to cache decrypted private keys, and introduce keychain, an ssh-agent front-end that offers a number of convenience advantages without sacrificing security.
A more in-depth description of what I mentioned above.
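The workflow described above boils down to something like this (the key type, file names, and remote host are illustrative):

# Generate a keypair WITH a passphrase - do not leave it blank
ssh-keygen -t rsa -f ~/.ssh/id_rsa
# Start an agent for this session and unlock the private key once
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# Later logins (including scripted ones) reuse the cached key without prompting
ssh user@remote.example.org uptime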
For those already familiar with ssh-agent(1) - which you should be - then you can skip ahead to Part Two and Part Three.
Sunday, October 31. 2010
Linux Software RAID 1 Setup
This is the article I used in creating my 2-device (2 x 250 GB SATA) RAID1 in my desktop.
The documentation is fairly straight-forward and easy to follow. One thing this article assumes is that you're going to boot off a soft-RAID. This was not the case in my scenario; I just wanted a redundant storage slice.
Now, to enable crypto on the slice.
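For what it's worth, the shape of the setup is roughly the following, assuming /dev/sdb and /dev/sdc are the two SATA drives and that LUKS goes on top of the md device (device names, filesystem, and mount point are examples):

# Create the two-disk RAID1 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Layer dm-crypt/LUKS on top of the array, then put a filesystem on the mapping
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 cryptraid
mkfs.ext4 /dev/mapper/cryptraid
mount /dev/mapper/cryptraid /srv/storage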
Thursday, June 17. 2010
Much Ado About NULL: Exploiting a Kernel NULL Dereference
Last time, we took a brief look at virtual memory and what a NULL pointer really means, as well as how we can use the mmap(2) function to map the NULL page so that we can safely use a NULL pointer. We think that it’s important for developers and system administrators to be more knowledgeable about the attacks that black hats regularly use to take control of systems, and so, today, we’re going to start from where we left off and go all the way to a working exploit for a NULL pointer dereference in a toy kernel module.
The article then goes on to provide a hands-on example of how to exploit a NULL pointer dereference to execute arbitrary code. Pretty good read.
Monday, May 17. 2010
Secret Forum Reveals Oz Firewall Backroom Dealing
Circumvention legal, but you can't tell anyone how[.]
Emphasis is theirs.
Now say what? It will be legal to circumvent (technical details at the bottom), but illegal to explain to someone else how to perform this perfectly legal configuration. I wonder how this might affect a corporate or ISP helpdesk walking someone through VPN connectivity setup.
Australia’s plans for a firewall to protect its population from smut on the internet are rapidly evolving from farce to total chaos. Weekly revelations on bulletin boards suggest that Stephen Conroy, the man behind the big idea, does not know what forthcoming legislation on the topic will say, when it will be introduced or how the firewall will work in practice.
This time, emphasis is mine. I want to continue to point out how big of an asshat this particular Australian politician is. He is the "Minister for Broadband, Communications and the Digital Economy." He's the one that floated the idea of this nation-wide "firewall" (which is technically a proxy since it will be filtering at layer 7 - hence the technical problems) to "protect" citizens from illegal, immoral, or "dangerous" content. This is nearly the same thing the Chinese and Iranians are doing, just using layer 7 proxy devices instead of what's assumed to be basic layer 3 IP filtering of destination hosts. Skip to the very end of the post for the technical details behind this.
To say this whole thing began as a farce is hitting the nail right on the head.
Meanwhile, it turns out that the Minister’s own Department of Broadband, Communications and the Digital Economy (DBCDE) has been hosting a secret forum for discussions with ISPs likely to be affected by proposals. Along the way it floated the idea of making it a crime to advise surfers on how to do things that are perfectly legal to do. Confused? You will be.
First up is the time scale for plans to introduce the new firewall. As already reported, the question of when legislation will be introduced has now been bouncing between the offices of Prime Minister Kevin Rudd and Communications Minister Stephen Conroy. Severe wriggling from Conroy’s office suggests that plans for an early introduction of legislation have been put on the back burner for now.
Conroy wants to shelve the legislation until after the elections. He's technically incompetent, but he's smart enough to realize that this is going to be a screw-up of biblical proportions and it will likely cost him the election. It's "on the back burner for now," but it's by no means dead.
Meanwhile further digging inside this forum revealed that departmental officials appear to have been discussing the possibility of making it a criminal offen[s]e to advise individuals of means that would enable them to circumvent the filter – even where the means themselves were perfectly legal.
I would say that this equates to information being illegal. In a way, that's in the same league as banning books.
As the EFA suggests, this answer raises more issues than it addresses, and relies on the degradation of the Australian network being gradual, rather than catastrophic. It does appear, however, that the government has no plans to deal with a possible overload of its firewall bringing the Australian internet to its knees – beyond setting up a review when such an event actually happens.
Why should there be any degradation of bandwidth at all? I suspect that if this goes through, there's going to be a noticeable difference in download speeds and initial access to websites.
Circumvention of these filters will be trivial; you can wrap your request in SSL (such as https:// if the website supports it), by using a VPN provider outside Australia (many more found on the link for the word "using"), by using Tor (which uses a technique known as Onion Routing), or even by viewing blocked pages via the Google cache.
This filtering is to take place with proxies (at the Application layer) as opposed to the traditional large-scale deployments of firewalls (at the Network and Transport layers). The deeper you have to inspect a packet, the more CPU and memory required to process the filters. It costs - in many ways, from actual dollars for the hardware and software, to performance impact, to configuration complexity, to man-hours of maintenance - considerably more to filter at layer 7 with a proxy than at layers 3/4 with a firewall.
The one benefit to filtering at layer 7 is that you block only what is intended to be blocked. In today's world (where we've been running out of IPv4 space for a decade now) a lot of websites are configured using virtual hosts. This allows web hosting providers to host a virtually unlimited number of websites on a single IP address. Let's say there are two websites, both hosted on the same virtual host IP address, where one is banned and the other is not:
With a layer 7 proxy, when the user attempts to reach a website, the proxy intercepts the request, checks the request (including hostname and URI), and then either blocks the request, or requests the page on behalf of the end-user and returns her the requested webpage. So your mom can still access www.momsrecipes.co.au while nobody can access www.bannedwebsite.co.au. With a proxy, you can return HTML to the end-user explaining why access to this particular website is blocked and possibly a method of contact to dispute the denial of access.
(+) Finer-grained control of what's filtered
(+) Fewer "false positives"
(-) Expensive in many aspects (mentioned above)
(-) Complex configuration
(-) Considerable service impact due to use of DPI at the Application layer
(-) Slightly easier to circumvent; using https is the only circumvention measure mentioned that does not tend to work with the firewall approach - the rest should work against both types
With a layer 3/4 firewall, access to the virtual host IP address (or even the subnet it's part of) will be blocked. When anyone tries to go to www.bannedwebsite.co.au, they are unable to, which is the intended result. They will get a different error; the browser will just report that the website was unreachable. End of explanation. If anyone tries to go to www.momsrecipes.co.au, they will also be denied with the same uninformative unreachable error. Since both websites are on the same IP address, the firewall has no way of knowing which website you're looking for, so it blocks everything. (A rough sketch contrasting the two approaches follows the list below.)
(+) Cheaper to deploy
(+) Simpler configuration - hundreds of hosts/subnets vs. thousands of hostnames
(+) Can often be implemented on existing hardware - edge or core routers utilizing IP ACLs
(+) Faster, more responsive access to allowed websites; less service impact
(-) Collateral damage - legitimate sites on the same virtual host as a banned site are also blocked
(+) Slightly more difficult to circumvent (a website's https site will likely be in the same blocked subnet)
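To make the contrast concrete, here's roughly what each approach looks like in practice; the IP address is a placeholder and the second fragment uses Squid-style ACL syntax purely as an illustration:

# Layer 3/4: drop everything to the shared virtual-host address - banned and innocent sites alike
iptables -A FORWARD -d 203.0.113.80 -j DROP
# Layer 7 (proxy ACL): match only the offending hostname, leaving the other sites on that IP alone
#   acl banned dstdomain .bannedwebsite.co.au
#   http_access deny banned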
Comparison with Other Instances of State-Controlled Internet Access:
I see three major differences in the Australian proposal as opposed to the other major regimes implementing state-wide filtering of websites (China and Iran). They are as follows:
Another side effect of this proposal, from an economic standpoint, is that it is likely to put smaller ISPs out of business. Instead of putting the (smaller) burden on the backbone providers, which have considerably more capital, it places a more expensive burden on ISPs with fewer resources at their disposal. If these filters become legally mandatory, a smaller provider may not have access to the resources (money, manpower, and know-how) to meet the requirements and will thus have to shut down operations.
There is one other somewhat commonly used filtering technique involving DNS. The ISP or corporate gateway will transparently route all DNS requests by the end-user to DNS servers under their control. The DNS servers will be configured as authoritative for the blocked domains; typically configured to return an IP address that connects you to a website telling you that your access is blocked and possibly why. This is similar to the Walled Garden approach.
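A hedged sketch of how that might look with BIND (the zone name, server names, and walled-garden address are all made up):

# named.conf fragment: claim authority for the blocked domain
#   zone "bannedwebsite.co.au" { type master; file "blocked.zone"; };
# blocked.zone: every name in the domain resolves to the walled-garden web server
#   @  IN SOA ns.isp.example.net. hostmaster.isp.example.net. ( 1 3600 900 604800 300 )
#      IN NS  ns.isp.example.net.
#   @  IN A   192.0.2.80
#   *  IN A   192.0.2.80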
Monday, May 17. 2010
GNU libnss_db Local Information Disclosure Vulnerability
According to the "Discussion" tab:
The GNU 'libnss_db' library is prone to a local information-disclosure vulnerability.
Local attackers can exploit this issue to read the first line of arbitrary local files. This may lead to further attacks.
libnss_db 2.2.3 is vulnerable; other versions may also be affected.
I was not able to reproduce this on my machine as I did not already have the libnss-db package installed, and the package for my distro has already been fixed, so it does no good to install it.
The discussion shows this as an example:
sudo apt-get install libnss-db
Now if you already have sudo(8) privs to stop/start init.d services and use ln(1), I'm guessing there are probably easier ways of obtaining root. Every attack vector should be corrected, but this just seems like shooting fish in a barrel, given sudo privs like that.
Wednesday, April 28. 2010
Rough Justice for Terry Childs
A San Francisco jury found Terry Childs guilty of one count of felony denial of service yesterday. The count carries a maximum sentence of five years in prison. Considering that he's already served nearly two years to date, he may actually be released on parole at his June 14 sentencing hearing, or he may be facing another three years behind bars. His lawyers stated that they will appeal.
This ruling brings a chill to my spine. While Childs could have handled the situation with a little more grace, I don't believe that any crime was actually committed. I've worked under some pretty shoddy conditions before - lack of procedures, lack of accountability - but this sets precedent for criminal prosecution.
Knowing firsthand how difficult this would be, I'd have just let the lackluster-at-best management sink. I would have turned over the passwords along with my resignation. Anyone with a CCIE can find another job, even in this economy. If it comes down to risking my freedom and clean criminal record because my boss is a moron, then it's time to move on. I can't imagine how painful it would have to be to create such a complex, intricate system, only to have to turn it over to inept cretins who will undoubtedly destroy it.
Sunday, April 25. 2010
RIM Buys QNX to Tie Phones to Cars
Research in Motion said Friday (04/09/2010) that it had signed a deal with Harman International to acquire its QNX Software Systems unit to help tie its BlackBerry smartphones to car navigation systems.
Terms of the deal were not announced. It is expected to close within 30 to 45 days if it passes regulatory approvals.
QNX designs a real-time embedded OS, that it has tied to ARM, MIPS, PowerPC and other processors and embedded designs.
"The car is going to become the first-class citizen of the cloud, where inside the car you're going to have access to all the connected media, all the social services that are out there, and it will truly revolutionize the driving experience, the experience of the automotive makers making those cars, the ecosystem of people that are going to make applications for those cars," said Dan Dodge, the chief executive of QNX, in a recent video made with Alcatel-Lucent to retrofit a Toyota car with a cloud-connected entertainment system networked via the wireless LTE standard. "It's probably one of the most exciting times in automotive history."
QNX wouldn't have been my first guess as the company to buy if you're looking to get into car navigation systems, but they're certainly a good choice. I've used QNX here and there, but it's been quite a few years back. The software was always really neat looking (Photon is a beautiful GUI), was blazing fast, had a tiny footprint, and was as stable as anything I've ever encountered.
For those unfamiliar, QNX is a Real-Time Operating System (RTOS) that's a perfect example of a microkernel architecture.
Wednesday, March 17. 2010
Cisco's New Router: Trouble for Hollywood
Cisco's CRS-3 router made a bit of a splash when it was announced on March 9, but the power of this new device hasn't yet sunk in. Consider: The CRS-3, a network routing system, is able to stream every film ever made, from Hollywood to Bombay, in under four minutes. That's right — the whole universe of films digested in less time than it takes to boil an egg. That may sound like good news for consumers, but it could be the business equivalent of an earthquake for the likes of Universal Studios and Paramount Pictures.
I'm not sure that the comparison of streaming the entire Hollywood movie collection in less than 4 minutes is completely accurate; I'd like to know how many movies they're estimating, how big each DVD image is (4.7GB vs. 9.4GB, for instance), and what Layer 1/2 technologies they're talking about (is this ethernet over fiber?). If you're just talking about passing the data across the 322 Tb/sec backplane, then it might be possible; but if you're talking about carrying all that data across multiple hops, each connected by, say, a 10 Gbps ethernet-over-fiber link, it's just not doable. The 10 Gbps link would definitely be a bottleneck.
I also have my doubts as to the likelihood of a piece of networking equipment meaning the end of the world for any sector of business. Sure, as internet connections get faster more people will start downloading/streaming their content; so, unless the people running the MPAA and RIAA are complete morons (which I'm not ruling out), all they have to do is change their business model to incorporate downloads. That's not exactly an overnight change, but it's entirely possible.
But routers are not the only cause of bottlenecks, and Cisco is not alone in working to maximize the Internet's full potential. Google is also concerned about the speed limitations imposed by wires that run to the home. Last month, Google, best known for its search engine, announced plans to test ultra-high-speed broadband networks that would deliver Internet content to residential subscribers at speeds of 1 gigabit per second — 100 times as fast as the top speed available today. This would allow consumers to complete a PC download of a Hollywood blockbuster like Avatar in about 72 seconds.
I don't understand the phrase, "100 times as fast as the top speed available today." That would seem to indicate that a 10Mb/sec connection is the fastest available today. I'm currently on a 15 Mb/sec connection as I write this, and my ISP offers at least 20 Mb/sec.
Downloading a movie at full speed on a 1 Gbps connection, over 72 seconds, results in 8,640 MB of data. That's almost a full double-layer DVD. We'll assume 120 MB/sec (bytes) over the 1 Gbps link, which is right about the theoretical maximum without figuring in the overhead and framing (for brevity); times 72 seconds = 8,640 MB.
The ability to download albums and films in a matter of seconds is a harbinger of deep trouble for the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA), which would prefer to turn the clock back, way back.
Consider that the MPAA, whose members include Disney and Universal, attacked the VCR in congressional hearings in the 1980s with a Darth Vader–like zeal, predicting box-office receipts would collapse if consumers were allowed to freely share and copy VHS tapes of Hollywood movies. A decade later, the MPAA fought to block the DVD revolution, mainly because digital media could be copied and distributed even more easily than videocassettes.
"Fair Use" has held up in court many times that as a consumer we are allowed to make 1 archival copy in case our normal store-bought copy gets scratched/lost/stolen. It's also been proven in several instances where customers are more likely to pay, and pay more, for DRM-free movies and music and more likely to pirate any "restricted" content. If I pay for a CD, I damn well expect to be able to listen to it at home in my stereo, in my car's CD player, and to be able to rip the tracks to my iPod. If I'm not allowed to do those things, there's no point in me buying the music...it's not like any new music has come out in the last 10 - 15 years worth raising a stink about, anyway. The more difficult it is for someone like me to rip their store-bought copy of an album onto their iPod, the more likely they are to go out and pirate a "cracked" copy that will let them transfer it with relative ease.
To quote the Borg: "Resistance is futile."
The hard fact is that the latest developments at Cisco, Google and elsewhere may do more than kill the DVD and CD and further upset entertainment-business models that have changed little since the Mesozoic Era. With superfast streaming and downloading, indie filmmakers will soon be able to effectively distribute feature films online and promote them using social media such as Facebook and Twitter.
This is probably the best part of the article. The idea of taking the RIAA out of the picture just makes me smile. The sooner they're gone, the sooner we can stop being spoon-fed this pop-formula Nickelback type shit. If independents can start their own online market and promote/sell their music without the need for a label, then music might actually be worth saving. As it stands, I say let the music industry dry up and never press or sell a single CD again. Sure, we'd be losing some of the greats, but there is so much crap out there that those rare gems make up a tiny fraction of 1% of the albums out there. Now if music is readily available from the independents, there might actually be some music not only worth listening to, but worth buying. I know there are bands out there much better than Nickelback, but they haven't been "discovered" or "signed" -- that's why they're still playing the local dive bar. But if they could market themselves, this might drastically change the landscape of the music industry, and for the better.
Wednesday, February 17. 2010
FCC to Propose Faster Broadband Speeds
The U.S. Federal Communications Commission unveiled a plan on Tuesday that would require Internet providers to offer minimum home connection speeds by 2020, a proposal that some telecommunications companies panned as unrealistic.
It's "unrealistic" if they wish to keep their huge profit margins. With the average broadband speed in the U.S. being under 4 mbit (mentioned later in the article), they will be receiving approximately 1/25th the amount of profit per megabit that they're currently making. Internet providers in Asia and other parts of the world that are subject to more regulation, or even state-run, have been providing 100Mbps - 1Gbps for several years. It's more than technical possible, and financially feasible; you just have to be in a market where the monopolistic telecoms aren't allowed to gouge you at-will.
The FCC wants service providers to offer home Internet data transmission speeds of 100 megabits per second (Mbps) to 100 million homes by a decade from now, Commission Chairman Julius Genachowski said.
Industry estimates generally put average U.S. Internet speeds at below 4 Mbps.
I suppose I've been fortunate; I've had access to somewhat reasonably priced connectivity at 10Mbps - 15Mbps. I do, however, know several people with connectivity well below the 4Mbps mark.
The proposal is part of the FCC's National Broadband Plan, due next month. It comes a week after Google Inc rattled Internet service providers with its plan to build a super-fast Internet network.
I've already nominated the city I live in. With the high population density, diversity of professionals represented, and easy access to large amounts of bandwidth nearby, I think there's a fair chance that my city may be one of the chosen.
"A 100 meg is just a dream," Qwest Communications International Inc Chief Executive Edward Mueller told Reuters. "We couldn't afford it."
"First, we don't think the customer wants that. Secondly, if (Google has) invented some technology, we'd love to partner with them," Mueller added.
"...we don't think the customer wants that." Excuse me? That is the most ridiculous argument I've ever heard. Bandwidth is like RAM, you can never have too much. If you've got a connection faster than you're using at this very moment, you haven't lost anything. If you don't have enough bandwidth for what you're planning on doing, then your experience will suffer.
Google has not "invented" any new technology for what they plan to roll out, they're simply willing to spend the capital to build-out a fiber-to-the-home network (at least in select markets, initially). As far as Qwest being willing to "partner" with Google on such projects, I'm sure they'll be happy to let Google spend the capital to build out the fiber network and then try to make money through advertising or other avenues, not requiring them to spend a dime on the infrastructure side.
Verizon, the third-largest provider, and one that has a more advanced network than many competitors, said it has completed successful trials of 100 Mbps and higher through its fiber-optic FiOS network.
"(One gigabit per second) as discussed in current news reports is a lot of signal; typically enough for many massive business operations," Verizon said in a statement that referred to Google's plan to test a network with those speeds. "But we could make it happen over the FiOS network without much trouble, should a market for it develop."
One analyst questioned whether the FCC's proposal could lead to a sustainable business model.
"In order to earn a return for investors, you have to be conscious of what consumers will pay. I don't know this is something consumers will pay for," Piper Jaffray analyst Christopher Larsen said. "It's a nice goal, but it's a little on the over ambitious side."
It's been proven to be a sustainable business model in many parts of the world. Nobody is more starved for bandwidth, as far as users go, than the U.S.
I doubt that the FCC proposal requires that the providers offer no less than 100 Mbps to every customer, just that it's an affordable (to the majority of people) option. If a customer doesn't want to pay, say, $80/month for a 100 Mbps connection, then offer a 50 Mbps option at $50/month. That's incentive for the customer to pay less than double the price for double the bandwidth, and you're still meeting your obligation of providing 100 Mbps service to those who want it. I don't know of any network technology that will carry 100 Mbps that won't let you throttle it back to 30, 50, 75 Mbps or any other arbitrary speed.
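On the Linux side alone, for example, rate-limiting a faster link is a one-liner with tc; the interface name and rate below are arbitrary examples:

# Cap eth0 at 50 Mbit/s using a token bucket filter
tc qdisc add dev eth0 root tbf rate 50mbit burst 32kbit latency 400ms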
The United States ranked 19th in broadband speed, trailing Japan, Korea and France, according to a 2008 study by the Organization for Economic Co-operation and Development.
Data shows that about 64 percent of U.S. households used a high-speed Internet service in 2009, the Commerce Department said on Tuesday. That is a 25 percent increase from 51 percent two years earlier.
This is truly sad. We're the world leader in technological development, but due largely to greed, we're 19th in the world in terms of broadband speed.
Wednesday, February 17. 2010
20 Years of Adobe Photoshop
One of the most impressive things about the company is the fact that one gifted family, consisting of an engineering professor, a PhD engineering student, and a talented special effects whiz working at Industrial Light and Magic, came up with the core idea of Photoshop.
Thomas Knoll, the PhD student, is still heavily involved with Photoshop years later.
Glen Knoll was a college professor with two sons and two hobbies: computers and photography.
He had a darkroom in his basement, and an Apple II Plus that he was allowed to bring home from work.
Thomas Knoll adopted his father’s photography habit throughout high school, while his brother, John Knoll, purchased one of the first Macs available to the public.
Fast forward to 1987: Thomas Knoll was a PhD student studying engineering at the University of Michigan. His brother was working at Industrial Light and Magic.
Thomas Knoll wrote a subroutine for a program to translate monochrome images on his monitor to grayscale.
The successful subroutine led Knoll to create more and very soon he had a number of processes for achieving photographic effects on digital images.
After his brother John saw what Thomas was doing, he recommended that Thomas turn what he was doing into a full-featured image editor.
And lo, the world's most powerful - and likely most used - image editing software was born.
It's hard to believe it's been 20 years. I started tinkering with Photoshop 3.0, around 1995. I've had access to versions 3.0, 4.0, 5.0, 5.5, 6.0 and 7.0 over the years. I definitely do not have the eye for graphics design, but it's fun tinkering around.
1994 – Photoshop 3.0
The big story for Adobe Photoshop 3.0 was layers. Layers were and are a lifesaver for any marginally complex design.
Prior to their introduction, designers would save different versions of designs so that they could go back and grab them if needed; layers made this practice redundant.
Layers are individual slices of the image that go together to make the final “sandwich” of the image. Different images, such as those used in the image above in the 3.0 splash screen, are assigned their own layers, making it easy to work on those images without tampering with other areas of the image.
This is the first version I tried. The layers feature is a life-saver.
Thomas Knoll, the original creator of the program, was responsible for their development. Other engineers made improvements in the program’s performance with Power Mac chips and bringing the Windows version up to the same level as the Mac version. Tabbed palettes also had their debut in 3.0.
Adobe engineers included Adobe Transient Witticisms (ATW) with this version. They were little Easter Egg funny one-liners that would appear only when you pressed obscure combinations of keys.
Here is a small list of known "Easter Eggs" contained within Photoshop. They're a huge waste of code, CPU, and memory, but usually worth checking out.
Friday, February 12. 2010
European Swift Bank Data Ban Angers U.S.
The European Parliament has blocked a key agreement that allows the United States to monitor Europeans' bank transactions - angering Washington.
I'm sure the nanotech engineers are currently working on the world's tiniest violin.
The US started accessing Swift data after the 11 September 2001 terror attacks on New York and Washington.
But the fact that the US was secretly accessing such data did not come to light until 2006.
My fear is not that this data mining was used to track terrorists; far from it. I'm inclined to believe that this monitoring was used for other purposes. Purposes such as finding and prosecuting tax cheats. That, in itself, isn't a bad thing either. My belief is that the only tax cheats that will be prosecuted will be the ones who failed to line the campaign coffers of our elected officials; the ones who've paid off the right people will continue to get away with whatever it is they're getting away with.
Swift handles millions of transactions daily between banks and other financial institutions worldwide. It holds the data of some 8,000 banks and operates in 200 countries.