Saturday, January 26. 2013
Belkin Helps Cisco Exit Consumer Space by Acquiring its Home Networking Division, Including Linksys
Belkin on Thursday announced plans to acquire Cisco’s Home Networking Business Unit, including its products, technology, employees, and even the well-known Linksys brand. Belkin says it plans to maintain the Linksys brand and will offer support for Linksys products as part of the transaction, financial details for which were not disclosed.
Anyone who has spent any time designing, maintaining, securing, or even implementing an Enterprise-grade network already knows that Cisco is the de facto standard. Cisco has ruled the Enterprise-grade market for decades, and will continue to do so into the foreseeable future. What hasn't been so obvious is where or how Cisco would benefit from its recent (the past, say, 5 - 10 years) interest in the consumer market.
Entering the consumer-grade market was their first mistake.
Cisco, its R&D "group" long consisting of a can of spray-paint and a bridge stencil, decided to buy its way into the consumer market through the purchase of Linksys. Instead of doing its own R&D, creating its own consumer-grade hardware and software, and entering the market on nothing more than its "good name" (the Cisco brand), it chose to use some of its huge cache of cash (pun intended), the result of the corporate hoarding that is the norm among the Fortune 500 these days, to buy some assurance of success upon entering the consumer market.
Their entry into the consumer-grade market with re-branded equipment was their second mistake. The Linksys name has always been synonymous with "piece of shit" to anyone familiar enough to properly design and implement a network, starting with the Visio diagram and ending with a fully functional network passing traffic. Cisco first had to change the market perception that Linksys was most definitely sub-par equipment. They made modest attempts at this with the packaging of their gear with the Cisco logo, references to "Linksys powered by Cisco," and the like. Some credit must be given to Cisco for at least purchasing the best of the consumer-grade brands on the market. I'm at a loss to think of any other company they could have bought, with an established market presence, at any cost, that would have given them a better starting point (in terms of quality and technology) than Linksys.
One can only assume that Cisco's entry into the consumer space was an attempt to "own the stack," as Oracle successfully did with its acquisition of Sun Microsystems a few years back. With Oracle now owning Sun (its hardware, operating system, Java software, and all of its intellectual property), Oracle can, and does, provide every piece of the puzzle when deploying a wide array of systems and services. They can provide the hardware (servers, network-based storage, clustering-oriented network gear, etc.), the software (Solaris, ZFS, Oracle Application Server, the Oracle database, etc.), licensing, and support from the absolute top to the bottom of the "stack." In Cisco's case, since the business is networking as opposed to, say, services-based web applications, there is much less benefit to be derived from "owning the stack," and it is much, much more difficult to position a company that way in the network world.
Cisco's third mistake was over-estimating the benefit of "owning the stack" and under-estimating the cost of doing so. Largely because there are so many standards (IEEE, RFCs, etc.) that a piece of networking gear must adhere to, there are many competitors in any particular market segment. Oracle doesn't have to provide open APIs and documentation for most of its products; it's often to their benefit to do so, but not strictly required. With Cisco, and networking equipment in general, the standards (such as TCP/IP) are open standards, visible to anyone. Let's say that Cisco did manage to replace every other consumer-grade NAT router on every at-home desktop with its own Linksys-derived gear; what, really, would it have gained? Sales of equipment? Sure. The level of vendor lock-in that Oracle hopes to achieve with its Sun-derived gear? Not even close.
Cisco's upper management should well have known that the money they'd make from selling routers and switches to "mom and pop," even with a successful entrance into the consumer-grade market, would be minimal at best. That's assuming that income from sales outpaces the cost of supporting the equipment. Cisco has long made a good portion of its money on support contracts from larger corporations that will not purchase from an OEM that doesn't or can't provide that engineering level of support when needed. Home and small-business consumers cannot afford that level of support; that type of customer is used to free, or nearly free, support provided by the vendor.
The last thing that I'll add to this has to do with security. I realize that Cisco has been looking to dump Linksys for some time (in other words, the only part of this article that is news is that they have found their buyer), but I can't help but wonder if the recently disclosed vulnerability affecting a wide range of Linksys gear didn't cause Cisco to accept some concessions to hurry up and get rid of Linksys. Stated differently, I think that the vulnerability might have caused Cisco to accept a lower selling price to rid themselves of the whole Linksys debacle sooner rather than later.
REFERENCE: Cisco Linksys Remote Preauth 0day Root Exploit
Monday, July 30. 2012
U.S. Census Bureau Offers Public API for Data Apps
For any software developers with an urge to play around with demographic or socio-economic data: the U.S. Census Bureau has launched an API for Web and mobile apps that can slice that statistical information in all sorts of nifty ways.
The API draws data from two sets: the 2010 Census (statistics include population, age, sex, and race) and the 2006-2010 American Community Survey (offers information on education, income, occupation, commuting, and more). In theory, developers could use those datasets to analyze housing prices for a particular neighborhood, or gain insights into a city’s employment cycles.
The APIs include no information that could identify an individual.
For those itchin' to get to the APIs, click the link. There's also an "app gallery."
I, for one, am looking forward to the unique and useful ways in which this data will tell us new things about ourselves.
Thursday, July 28. 2011
Tunneling nmap through Tor
I looked at how to reduce your exposure using Tor earlier in the week. We installed Tor and Privoxy and configured our system to browse the Internet anonymously. We can use Tor and another great program called proxychains to Torify our network scans with nmap.
I checked this out and it seems to work well. Since Tor doesn't appear to carry UDP traffic (other than DNS) or ICMP, you're limited to TCP alone, which isn't too much of a limitation.
It is a little slow getting your scan results back - especially if you don't pass -p<port1>,<port2> to nmap(1), but it's certainly more secure than just trying to use an open proxy server out on the 'net.
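For anyone who wants to try it, here's a minimal sketch of the whole setup. It assumes Tor's SOCKS listener is on its default 127.0.0.1:9050, that proxychains is installed (it reads ./proxychains.conf before the system-wide config), and that the directives below match the stock proxychains.conf format:

    #!/usr/bin/env python3
    # Sketch: Torify an nmap scan with proxychains. Assumes Tor's SOCKS
    # proxy is at 127.0.0.1:9050 and proxychains is in the PATH.
    import subprocess

    conf_lines = [
        "strict_chain",
        "proxy_dns",               # push DNS lookups through Tor as well
        "[ProxyList]",
        "socks5 127.0.0.1 9050",   # Tor's default SOCKS port
    ]
    with open("proxychains.conf", "w") as f:
        f.write("\n".join(conf_lines) + "\n")

    # -sT: full TCP connect() scan; SYN scans use raw sockets, which
    #      bypass the SOCKS proxy entirely and would leak your address.
    # -Pn: skip host discovery (ICMP pings won't traverse Tor anyway).
    # -n:  no reverse DNS from the scanner; proxy_dns handles lookups.
    subprocess.run(["proxychains", "nmap", "-sT", "-Pn", "-n",
                    "-p", "22,80,443", "scanme.nmap.org"])

The -sT flag is the important one: only full connect() scans actually ride the proxy.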
All in all, a neat trick.
Wednesday, May 4. 2011
AppleCrate II: A New Apple II-Based Parallel Computer
The AppleCrate II is made from 17 Enhanced Apple //e main boards. (Fifteen of these boards were obtained in the same eBay auction that netted the eight unenhanced boards for the original AppleCrate.) Because they are enhanced ROMs, the original NadaNet boot ROM code would not fit and a new boot protocol had to be developed, as described below.
Instead of mounting the cards vertically in a frame, as in the original, I decided to mount them horizontally in a stack secured with standoffs—3/4" long hexagonal rods, each with a screw protruding from one end and a tapped hole in the other. The AppleCrate II has nine "columns" of these standoffs—six metal columns at the back and corners of the boards and three nylon columns interior to the boards to add stiffness[...]
This has got to be one of the coolest hacks I've seen in a very long time. I will always have a fondness for the Apple II series, not least because it's based on the MOS Technology 6502 microprocessor.
It's well worth hitting the link just to see a picture of this rig. It's really cool looking.
The boards are powered by a PC AT power supply. The average power consumed by an Apple //e board is about 4.2 watts, so the whole 17-board crate consumes only about 70 watts in total, and both the AppleCrate and the power supply run only a few degrees above ambient temperature.
Thursday, February 10. 2011
Two /8s allocated to APNIC from IANA
APNIC received the following IPv4 address blocks from IANA in February 2011 and will be making allocations from these ranges in the near future:
The allocation of these blocks left the IANA with 5 /8 blocks remaining, which triggered a clause stating that when the pool gets down to its final 5 blocks, each of the 5 *NICs (the regional registries) gets one of the remaining blocks. It's just unfortunate that 2 of the last 6 or 7 blocks are completely wasted by being routed to what essentially amounts to an Internet cesspool (APNIC). I regularly pull down the IANA assignments, parse out the netblocks assigned to APNIC, and then null-route them all.
Note: The aforementioned clause is stated in the "Global policy for the allocation of the remaining IPv4 address space."
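Here's a sketch of that pull-parse-null-route step. The registry URL and its column layout (a prefix like "001/8" in the first column, the designation in the second) are assumptions based on how IANA publishes the CSV:

    #!/usr/bin/env python3
    # Sketch: fetch IANA's IPv4 /8 registry, pick out APNIC's allocations,
    # and emit iproute2 blackhole routes. Pipe the output to sh as root.
    import csv
    import io
    import urllib.request

    URL = ("https://www.iana.org/assignments/"
           "ipv4-address-space/ipv4-address-space.csv")

    with urllib.request.urlopen(URL) as resp:
        rows = csv.reader(io.TextIOWrapper(resp, encoding="utf-8"))
        next(rows)  # skip the header row
        for row in rows:
            prefix, designation = row[0], row[1]
            if "APNIC" in designation.upper():
                octet = int(prefix.split("/")[0])  # "001/8" -> 1
                print("ip route add blackhole %d.0.0.0/8" % octet)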
Here's a ticker, from Hurricane Electric, that estimates the eventual exhaustion of IPv4 addresses from the regional registries.
You might take this time to register for a block of IPv6 addresses - It's free!
Sunday, October 31. 2010
Linux Software RAID 1 Setup
This is the article I used in creating my 2-device (2 x 250 GB SATA) RAID1 in my desktop.
The documentation is fairly straightforward and easy to follow. One thing this article assumes is that you're going to boot off the soft-RAID. That was not the case in my scenario; I just wanted a redundant storage slice.
Now, to enable crypto on the slice.
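For reference, the whole job comes down to a handful of commands. Here's a sketch; /dev/sdb1 and /dev/sdc1 are placeholders for your two partitions, cryptsetup's classic luksFormat/luksOpen subcommands do the crypto, and every step is destructive:

    #!/usr/bin/env python3
    # Sketch: non-boot software RAID1 with LUKS crypto layered on top.
    # /dev/sdb1 and /dev/sdc1 are placeholders; run as root. DESTRUCTIVE.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Mirror the two partitions into one md device.
    run("mdadm", "--create", "/dev/md0", "--level=1",
        "--raid-devices=2", "/dev/sdb1", "/dev/sdc1")

    # 2. Put LUKS on the array, not the members (prompts for a passphrase).
    run("cryptsetup", "luksFormat", "/dev/md0")
    run("cryptsetup", "luksOpen", "/dev/md0", "storage")

    # 3. The filesystem goes on the decrypted mapping.
    run("mkfs.ext3", "/dev/mapper/storage")
    run("mkdir", "-p", "/mnt/storage")
    run("mount", "/dev/mapper/storage", "/mnt/storage")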
Monday, May 17. 2010
Secret Forum Reveals Oz Firewall Backroom Dealing
Circumvention legal, but you can't tell anyone how[.]
Emphasis is theirs.
Now say what? It will be legal to circumvent the filter (technical details at the bottom), but illegal to explain to someone else how to perform this perfectly legal configuration. I wonder how this might affect a corporate or ISP helpdesk walking a customer through a VPN connectivity setup?
Australia’s plans for a firewall to protect its population from smut on the internet are rapidly evolving from farce to total chaos. Weekly revelations on bulletin boards suggest that Stephen Conroy, the man behind the big idea, does not know what forthcoming legislation on the topic will say, when it will be introduced or how the firewall will work in practice.
This time, emphasis is mine. I want to continue to point out how big of an asshat this particular Australian politician is. He is the "Minister for Broadband, Communications and the Digital Economy." He's the one who floated the idea of this nation-wide "firewall" (which is technically a proxy, since it will be filtering at layer 7 - hence the technical problems) to "protect" citizens from illegal, immoral, or "dangerous" content. This is nearly the same thing the Chinese and Iranians are doing, except Australia would use layer 7 proxy devices instead of what's assumed to be basic layer 3 IP filtering of destination hosts. Skip to the very end of the post for the technical details behind this.
To say this whole thing began as a farce is hitting the nail right on the head.
Meanwhile, it turns out that the Minister’s own Department of Broadband, Communications and the Digital Economy (DBCDE) has been hosting a secret forum for discussions with ISPs likely to be affected by proposals. Along the way it floated the idea of making it a crime to advise surfers on how to do things that are perfectly legal to do. Confused? You will be.
First up is the time scale for plans to introduce the new firewall. As already reported, the question of when legislation will be introduced has now been bouncing between the offices of Prime Minister Kevin Rudd and Communications Minister Stephen Conroy. Severe wriggling from Conroy’s office suggests that plans for an early introduction of legislation have been put on the back burner for now.
Conroy wants to shelve the legislation until after the elections. He's technically incompetent, but he's smart enough to realize that this is going to be a screw-up of biblical proportions and it will likely cost him the election. It's "on the back burner for now," but it's by no means dead.
Meanwhile further digging inside this forum revealed that departmental officials appear to have been discussing the possibility of making it a criminal offen[s]e to advise individuals of means that would enable them to circumvent the filter – even where the means themselves were perfectly legal.
I would say that this equates to information being illegal. In a way, that's in the same league as banning books.
As the EFA suggests, this answer raises more issues than it addresses, and relies on the degradation of the Australian network being gradual, rather than catastrophic. It does appear, however, that the government has no plans to deal with a possible overload of its firewall bringing the Australian internet to its knees – beyond setting up a review when such an event actually happens.
Why should there be any degradation of bandwidth at all? I suspect that if this goes through, there's going to be a noticeable difference in download speeds and initial access to websites.
Circumvention of these filters will be trivial: you can wrap your request in SSL (https://, if the website supports it), use a VPN provider outside Australia (many more can be found via the link on the word "using"), use Tor (which relies on a technique known as Onion Routing), or even view blocked pages via the Google cache.
This filtering is to take place with proxies (at the Application layer) as opposed to the traditional large-scale deployments of firewalls (at the Network and Transport layers). The deeper you have to inspect a packet, the more CPU and memory are required to process the filters. It costs considerably more to filter at layer 7 with a proxy than at layers 3/4 with a firewall, in many ways: actual dollars for the hardware and software, performance impact, configuration complexity, and man-hours of maintenance.
The one benefit to filtering at layer 7 is that you block only what is intended to be blocked. In today's world (where we've been running out of IPv4 space for a decade now) a lot of websites are configured as virtual hosts. This allows web hosting providers to host a virtually unlimited number of websites on a single IP address. Let's say there are two websites, both hosted on the same virtual host IP address, where one is banned and the other is not:
With a layer 7 proxy, when the user attempts to reach a website, the proxy intercepts the request, inspects it (including the hostname and URI), and then either blocks the request or fetches the page on behalf of the end-user and returns the requested webpage. So your mom can still access www.momsrecipes.co.au while nobody can access www.bannedwebsite.co.au. With a proxy, you can also return HTML to the end-user explaining why access to this particular website is blocked, and possibly a method of contact to dispute the denial of access. (A toy sketch of this decision logic follows the pros and cons below.)
The layer 7 proxy approach:
(+) Finer-grained control of what's filtered
(+) Fewer "false positives"
(-) Expensive in many aspects (mentioned above)
(-) Complex configuration
(-) Considerable service impact due to the use of DPI at the Application layer
(-) Slightly easier to circumvent; https is the only circumvention measure mentioned above that does not tend to work against the firewall approach - the rest should work against both types
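Here's the toy sketch of the layer 7 decision promised above; the hostnames are the made-up examples from this post, and the contact address is invented for illustration:

    # Toy sketch: a layer 7 filter sees the Host header and URI, so it
    # can tell two virtual hosts on one IP apart, and it can explain
    # itself to the user.
    BLOCKED_HOSTS = {"www.bannedwebsite.co.au"}

    BLOCK_PAGE = ("<html><body>Access to this site is blocked. "
                  "Contact appeals@filter.example.gov.au to dispute."
                  "</body></html>")

    def filter_request(host, uri):
        """Return a block page, or None to fetch the page for the client."""
        if host.lower() in BLOCKED_HOSTS:
            return BLOCK_PAGE
        return None  # proxy now fetches the page on the user's behalf

    assert filter_request("www.bannedwebsite.co.au", "/") == BLOCK_PAGE
    assert filter_request("www.momsrecipes.co.au", "/scones.html") is None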
With a layer 3/4 firewall, access to the virtual host IP address (or even the subnet it's part of) is blocked outright. When anyone tries to go to www.bannedwebsite.co.au, they are unable to, which is the intended result. They will get a different error, though: the browser will just report that the website was unreachable. End of explanation. If anyone tries to go to www.momsrecipes.co.au, they will also be denied, with the same uninformative unreachable error. Since both websites are on the same IP address, the firewall has no way of knowing which website you're looking for, so it blocks everything.
The layer 3/4 firewall approach:
(+) Cheaper to deploy
(+) Simpler configuration - hundreds of hosts/subnets vs. thousands of hostnames
(+) Can often be implemented on existing hardware - edge or core routers utilizing IP ACLs
(+) Faster, more responsive access to allowed websites; less service impact
(-) Collateral damage - legitimate sites on the same virtual host as a banned site are also blocked (see the sketch after this list)
(+) Slightly more difficult to circumvent (a website's https version will likely sit in the same blocked subnet)
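And the promised sketch of the firewall's collateral damage. The DNS entries are simulated, and 203.0.113.10 is a documentation (TEST-NET) address standing in for the shared virtual-host IP:

    # Sketch: an IP ACL sees only destination addresses, never hostnames,
    # so one banned vhost takes every co-hosted site down with it.
    DNS = {
        "www.bannedwebsite.co.au": "203.0.113.10",
        "www.momsrecipes.co.au":   "203.0.113.10",  # same shared vhost IP
    }
    BLOCKED_IPS = {"203.0.113.10"}

    def reachable(hostname):
        return DNS[hostname] not in BLOCKED_IPS

    for site in DNS:
        print(site, "reachable" if reachable(site) else "unreachable")
    # Both lines print "unreachable": mom's recipes are collateral damage.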
Comparison with Other Instances of State-Controlled Internet Access:
I see three major differences between the Australian proposal and the other major regimes implementing state-wide filtering of websites (China and Iran).
Another side effect of this proposal, from an economic standpoint, is that it is likely to put smaller ISPs out of business. Instead of placing the (relatively smaller) burden on the backbone providers, which have considerably more capital, it places a more expensive burden on the ISPs with fewer resources at their disposal. If these filters become legally mandatory, a smaller provider may not have the resources (money, manpower, and know-how) to meet the requirements and will have to shut down operations.
There is one other somewhat commonly used filtering technique, involving DNS. The ISP or corporate gateway transparently routes all DNS requests from end-users to DNS servers under its control. Those DNS servers are configured as authoritative for the blocked domains, and typically return an IP address that connects you to a website telling you that your access is blocked, and possibly why. This is similar to the Walled Garden approach.
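A sketch of that resolver behavior, with the blocked domain reused from the earlier example and 192.0.2.1 (another TEST-NET address) standing in for the ISP's block-page server:

    # Sketch: DNS-based filtering. The ISP's resolver answers
    # authoritatively (and falsely) for blocked domains.
    import socket

    BLOCK_PAGE_IP = "192.0.2.1"   # placeholder "access denied" web server
    BLOCKED_DOMAINS = {"www.bannedwebsite.co.au"}

    def isp_resolve(hostname):
        if hostname in BLOCKED_DOMAINS:
            return BLOCK_PAGE_IP                # the authoritative lie
        return socket.gethostbyname(hostname)   # normal recursion otherwise

    print(isp_resolve("www.bannedwebsite.co.au"))  # -> 192.0.2.1
    # Circumvention is as easy as pointing /etc/resolv.conf at any
    # resolver the ISP doesn't control and comparing the answers.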
Sunday, April 25. 2010
RIM Buys QNX to Tie Phones to Cars
Research in Motion said Friday (04/09/2010) that it had signed a deal with Harman International to acquire its QNX Software Systems unit to help tie its BlackBerry smartphones to car navigation systems.
Terms of the deal were not announced. It is expected to close within 30 to 45 days if it passes regulatory approvals.
QNX designs a real-time embedded OS that it has tied to ARM, MIPS, PowerPC and other processors and embedded designs.
"The car is going to become the first-class citizen of the cloud, where inside the car you're going to have access to all the connected media, all the social services that are out there, and it will truly revolutionize the driving experience, the experience of the automotive makers making those cars, the ecosystem of people that are going to make applications for those cars," said Dan Dodge, the chief executive of QNX, in a recent video made with Alcatel-Lucent to retrofit a Toyota car with a cloud-connected entertainment system networked via the wireless LTE standard. "It's probably one of the most exciting times in automotive history."
I wouldn't have named QNX as the software to buy if you're looking to get into car navigation systems, but they're certainly a good choice. I've used QNX here and there, though it's been quite a few years back. The software always looked really neat (Photon is a beautiful GUI), was blazing fast, had a tiny footprint, and was as stable as anything I've ever encountered.
For those unfamiliar, QNX is a Real-Time Operating System (RTOS) that's a perfect example of a microkernel architecture.
Wednesday, March 17. 2010
Cisco's New Router: Trouble for Hollywood
Cisco's CRS-3 router made a bit of a splash when it was announced on March 9, but the power of this new device hasn't yet sunk in. Consider: The CRS-3, a network routing system, is able to stream every film ever made, from Hollywood to Bombay, in under four minutes. That's right — the whole universe of films digested in less time than it takes to boil an egg. That may sound like good news for consumers, but it could be the business equivalent of an earthquake for the likes of Universal Studios and Paramount Pictures.
I'm not sure that the comparison of streaming the entire Hollywood movie collection in less than 4 minutes is completely accurate; I'd like to know how many movies they're estimating, how big each DVD image is (4.7 GB vs. 9.4 GB, for instance), and what layer 1/2 technologies they're talking about (is this ethernet over fiber?). If you're just talking about passing the data across the 322 Tb/sec backplane, then it might be possible; but if you're talking about carrying all that data across multiple hops, each connected by, say, a 10 Gbps ethernet-over-fiber link, it's just not doable. The 10 Gbps link would definitely be the bottleneck.
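Some back-of-the-envelope numbers make the point. The film count and per-film size below are my assumptions, not Cisco's (say half a million films worldwide at one single-layer DVD apiece):

    # Rough sanity check on "every film ever made in under four minutes."
    FILMS = 500_000                  # assumed worldwide film count
    BITS_PER_FILM = 4.7e9 * 8        # assumed 4.7 GB single-layer DVD image
    total_tbits = FILMS * BITS_PER_FILM / 1e12   # ~18,800 terabits

    CRS3_BACKPLANE_TBPS = 322        # Cisco's quoted figure
    TEN_GIGE_TBPS = 0.01             # one 10 Gbps hop

    print("backplane: %.0f seconds" % (total_tbits / CRS3_BACKPLANE_TBPS))
    print("10 GigE:   %.1f days" % (total_tbits / TEN_GIGE_TBPS / 86400))
    # -> roughly a minute across the backplane, but about 22 days over a
    #    single 10 Gbps link: the hop, not the router, is the bottleneck.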
I also have my doubts as to the likelihood of a piece of networking equipment meaning the end of the world for any sector of business. Sure, as internet connections get faster, more people will start downloading/streaming their content; so unless the people running the MPAA and RIAA are complete morons (which I'm not ruling out), all they have to do is change their business model to incorporate downloads. That's not exactly an overnight change, but it's entirely possible.
But routers are not the only cause of bottlenecks, and Cisco is not alone in working to maximize the Internet's full potential. Google is also concerned about the speed limitations imposed by wires that run to the home. Last month, Google, best known for its search engine, announced plans to test ultra-high-speed broadband networks that would deliver Internet content to residential subscribers at speeds of 1 gigabit per second — 100 times as fast as the top speed available today. This would allow consumers to complete a PC download of a Hollywood blockbuster like Avatar in about 72 seconds.
I don't understand the phrase "100 times as fast as the top speed available today." That would seem to indicate that a 10 Mb/sec connection is the fastest available today. I'm on a 15 Mb/sec connection as I write this, and my ISP offers at least 20 Mb/sec.
Downloading a movie at full speed on a 1 Gbps connection for 72 seconds moves 8,640 MB of data. That's almost a full double-layer DVD. We'll assume 120 MB/sec (bytes) over the 1 Gbps link, which is right about the theoretical maximum before figuring in the overhead and framing (for brevity); times 72 seconds = 8,640 MB.
The ability to download albums and films in a matter of seconds is a harbinger of deep trouble for the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA), which would prefer to turn the clock back, way back.
Consider that the MPAA, whose members include Disney and Universal, attacked the VCR in congressional hearings in the 1980s with a Darth Vader–like zeal, predicting box-office receipts would collapse if consumers were allowed to freely share and copy VHS tapes of Hollywood movies. A decade later, the MPAA fought to block the DVD revolution, mainly because digital media could be copied and distributed even more easily than videocassettes.
"Fair Use" has held up in court many times that as a consumer we are allowed to make 1 archival copy in case our normal store-bought copy gets scratched/lost/stolen. It's also been proven in several instances where customers are more likely to pay, and pay more, for DRM-free movies and music and more likely to pirate any "restricted" content. If I pay for a CD, I damn well expect to be able to listen to it at home in my stereo, in my car's CD player, and to be able to rip the tracks to my iPod. If I'm not allowed to do those things, there's no point in me buying the music...it's not like any new music has come out in the last 10 - 15 years worth raising a stink about, anyway. The more difficult it is for someone like me to rip their store-bought copy of an album onto their iPod, the more likely they are to go out and pirate a "cracked" copy that will let them transfer it with relative ease.
To quote the Borg: "Resistance is futile."
The hard fact is that the latest developments at Cisco, Google and elsewhere may do more than kill the DVD and CD and further upset entertainment-business models that have changed little since the Mesozoic Era. With superfast streaming and downloading, indie filmmakers will soon be able to effectively distribute feature films online and promote them using social media such as Facebook and Twitter.
This is probably the best part of the article. The idea of taking the RIAA out of the picture just makes me smile. The sooner they're gone, the sooner we can stop being spoon-fed this pop-formula Nickelback type shit. If independents can start their own online market and promote/sell their music without the need for a label, then music might actually be worth saving. As it stands, I say let the music industry dry up and never press or sell a single CD again. Sure, we'd be losing some of the greats, but there is so much crap out there that those rare gems make up a tiny fraction of 1% of the albums out there. Now if music is readily available from the independents, there might actually be some music not only worth listening to, but worth buying. I know there are bands out there much better than Nickelback, but they haven't been "discovered" or "signed" -- that's why they're still playing the local dive bar. But if they could market themselves, this might drastically change the landscape of the music industry, and for the better.
Wednesday, February 17. 2010
FCC to Propose Faster Broadband Speeds
The U.S. Federal Communications Commission unveiled a plan on Tuesday that would require Internet providers to offer minimum home connection speeds by 2020, a proposal that some telecommunications companies panned as unrealistic.
It's "unrealistic" if they wish to keep their huge profit margins. With the average broadband speed in the U.S. being under 4 mbit (mentioned later in the article), they will be receiving approximately 1/25th the amount of profit per megabit that they're currently making. Internet providers in Asia and other parts of the world that are subject to more regulation, or even state-run, have been providing 100Mbps - 1Gbps for several years. It's more than technical possible, and financially feasible; you just have to be in a market where the monopolistic telecoms aren't allowed to gouge you at-will.
The FCC wants service providers to offer home Internet data transmission speeds of 100 megabits per second (Mbps) to 100 million homes by a decade from now, Commission Chairman Julius Genachowski said.
Industry estimates generally put average U.S. Internet speeds at below 4 Mbps.
I suppose I've been fortunate; I've had access to somewhat reasonably priced connectivity at 10Mbps - 15Mbps. I do, however, know several people with connectivity well below the 4Mbps mark.
The proposal is part of the FCC's National Broadband Plan, due next month. It comes a week after Google Inc rattled Internet service providers with its plan to build a super-fast Internet network.
I've already nominated the city I live in. With the high population density, diversity of professionals represented, and easy access to large amounts of bandwidth nearby, I think there's a fair chance that my city may be one of the chosen.
"A 100 meg is just a dream," Qwest Communications International Inc Chief Executive Edward Mueller told Reuters. "We couldn't afford it."
"First, we don't think the customer wants that. Secondly, if (Google has) invented some technology, we'd love to partner with them," Mueller added.
"...we don't think the customer wants that." Excuse me? That is the most ridiculous argument I've ever heard. Bandwidth is like RAM, you can never have too much. If you've got a connection faster than you're using at this very moment, you haven't lost anything. If you don't have enough bandwidth for what you're planning on doing, then your experience will suffer.
Google has not "invented" any new technology for what they plan to roll out; they're simply willing to spend the capital to build out a fiber-to-the-home network (at least in select markets, initially). As for Qwest being willing to "partner" with Google on such projects, I'm sure they'd be happy to let Google spend the capital to build out the fiber network and then try to make money through advertising or other avenues, without spending a dime on the infrastructure side themselves.
Verizon, the third-largest provider, and one that has a more advanced network than many competitors, said it has completed successful trials of 100 Mbps and higher through its fiber-optic FiOS network.
"(One gigabit per second) as discussed in current news reports is a lot of signal; typically enough for many massive business operations," Verizon said in a statement that referred to Google's plan to test a network with those speeds. "But we could make it happen over the FiOS network without much trouble, should a market for it develop."
One analyst questioned whether the FCC's proposal could lead to a sustainable business model.
"In order to earn a return for investors, you have to be conscious of what consumers will pay. I don't know this is something consumers will pay for," Piper Jaffray analyst Christopher Larsen said. "It's a nice goal, but it's a little on the over ambitious side."
It's been proven to be a sustainable business model in many parts of the world. As far as users go, nobody is more starved for bandwidth than the U.S.
I doubt that the FCC proposal requires providers to deliver no less than 100 Mbps to every customer, just that it be an affordable (to the majority of people) option. If a customer doesn't want to pay, say, $80/month for a 100 Mbps connection, then offer a 50 Mbps option at $50/month. That's incentive for the customer to pay the less-than-double price for double the bandwidth, and you're still meeting your obligation of offering 100 Mbps service to those who want it. I don't know of any network technology that will carry 100 Mbps that won't let you throttle it back to 30, 50, 75 Mbps or any other arbitrary speed; the sketch below shows the standard mechanism.
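That mechanism is usually a token bucket, the same shaping idea buried in everything from Linux's tc to carrier-grade gear. Here's a minimal sketch, with arbitrary illustrative rates and burst sizes:

    # Minimal token-bucket shaper: same 100 Mbps-capable link, any tier
    # you like, just by changing the refill rate.
    import time

    class TokenBucket:
        def __init__(self, rate_bps, burst_bits):
            self.rate = rate_bps         # refill rate = the sold speed
            self.capacity = burst_bits   # how big a burst we tolerate
            self.tokens = burst_bits
            self.last = time.monotonic()

        def allow(self, packet_bits):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bits:
                self.tokens -= packet_bits
                return True    # forward the packet
            return False       # queue or drop it: the customer sees the cap

    # Same hardware, two tiers: only the number changes.
    gold = TokenBucket(rate_bps=100_000_000, burst_bits=1500 * 8 * 10)
    silver = TokenBucket(rate_bps=50_000_000, burst_bits=1500 * 8 * 10)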
The United States ranked 19th in broadband speed, trailing Japan, Korea and France, according to a 2008 study by the Organization for Economic Co-operation and Development.
Data shows that about 64 percent of U.S. households used a high-speed Internet service in 2009, the Commerce Department said on Tuesday. That is a 25 percent increase from 51 percent two years earlier.
This is truly sad. We're the world leader in technological development, but due largely to greed, we're 19th in the world in terms of broadband speed.
Tuesday, October 27. 2009
Netflix Streaming Coming to PS3
Netflix online streaming is coming to the PlayStation 3.
A Netflix press release spelled out the details of the deal. The good news is that the streaming feature -- which enables Netflix subscribers to access thousands of movies and TV shows on-demand via the Internet -- is available at no extra charge beyond the monthly Netflix DVD-by-mail subscription, which can be as low as $9 a month.
They go on to mention that the Xbox version requires access to Xbox Live for an additional $50/year.
The bad news: PS3 owners will need to put a special Blu-ray disc in the game console, which will enable streaming via the Blu-ray's BD Live functionality. That's a departure from all other Netflix-enabled devices (including the Xbox 360 and other Blu-ray players), which just have the Netflix option as a built-in feature. [...] However, the press release specifies that the disc will be needed "initially," so perhaps a future software upgrade will add Netflix as a built-in feature on the PS3.
With as often as my PS3 wants to download updates (basically every time I power it on), I imagine that built-in support for Netflix streaming will be added relatively soon.
Still, the Netflix feature of the 360 has long been envied by PS3 owners, so its inclusion--even with the need to be launched from a disc--will be welcome news. The Netflix site says that the feature will be available before the end of the year, and Netflix subscribers who own a PS3 can reserve a copy of the Netflix disc as of now.
So, what do you guys think: does the addition of Netflix put the PS3 at the top of the game console heap, or is the Xbox 360 an all-around better deal?
My thoughts on the current generation consoles still hold true. The Nintendo Wii got the most fun gameplay and a revolutionary new controller system; the PlayStation 3 got, by far, the best technology (see the Cell Broadband Engine processor); and the Xbox got, well, Halo.
Friday, August 21. 2009
Image Gallery: Building an Apple-I Replica From Scratch
At KansasFest 2009, held July 21 to 26 in Kansas City, Mo., retrocomputing fans from around the world gathered to celebrate the Apple II, the computer that launched Apple Computer Inc. to fame.
But going back even further than that is the Apple-1 (a.k.a. the Apple I or the Apple 1), the machine Steve Wozniak invented and first demonstrated at the Palo Alto Homebrew Computer Club in 1976.
In attendance at KansasFest was Vince Briel, who has created an authorized reproduction of this classic machine. Briel's Replica 1 sells for $149 and comes as an unassembled kit. He held a workshop at KansasFest to help new owners put together their own working Apple-1 machines.
There are 18 pages of images detailing the process of building your own replica. I'm not much of a hardware hacker, but this is just too cool.
A little further into the gallery...
The Replica 1 includes 88 component parts. There's also a packing list and instruction manual to help you unpack and assemble the machine.
Friday, August 21. 2009
DOJ Approves $7.4B Oracle-Sun Deal
Oracle on Thursday said the U.S. Department of Justice (DOJ) has approved its $7.4 billion acquisition of Sun Microsystems, although the deal is subject to certain conditions and still needs the blessing of European regulators.
Oracle first announced its bid in April and Sun shareholders approved the acquisition on July 16.
The combined company will give Oracle an array of new assets, including a stake in the computer hardware market, the open-source MySQL database and stewardship of the Java programming language.
Oracle will undoubtedly cut a large portion of the lesser-performing sectors of the company. I'm afraid that this might be the death-knell for SPARC-based processors, including the Niagara-family UltraSPARC T1 and T2.
Sun certainly has it right with these processors; they boast very low power consumption and up to 64 hardware threads per chip (8-way SMT on each of 8 cores). Compare that to your four-core, non-SMT AMD64 Phenoms and the like.
I don't foresee [Open]Solaris going anywhere anytime soon. Solaris has long been the platform of choice for large Oracle installations, and I see the Solaris+Java combination as the crown jewels of this acquisition for Oracle. Oracle has embraced open source to a pretty fair degree thus far, so I see no reason that they would try to close OpenSolaris or anything similar.
I honestly could not care less what becomes of MySQL. It's been a sub-standard RDBMS from the very get-go. PostgreSQL serves just fine for single-database solutions, and I'd recommend Oracle RAC for clustered/multi-master replication scenarios.
Thursday, August 20. 2009
Japanese Supercomputer Calculates Pi to Record 2.5 Trillion Digits
The T2K Open Supercomputer, located at the University of Tsukuba's Center for Computational Sciences, reached 2,576,980,377,524 decimal digits in a calculation lasting approximately 73 hours and 36 minutes, according to an announcement made to the Japanese press on August 17th. The Center said it was in the process of applying for the record book.
Friday, July 3. 2009
London Stock Exchange to Abandon Failed Windows Platform
Anyone who was ever fool enough to believe that Microsoft software was good enough to be used for a mission-critical operation had their face slapped this September, when the LSE (London Stock Exchange)'s Windows-based TradElect system brought the market to a standstill for almost an entire day. While the LSE denied that the collapse was TradElect's fault, they also refused to explain what the problem really was. Sources at the LSE tell me to this day that the problem was with TradElect.
Whoever Microsoft sent to sell this idea to the LSE could, undoubtedly, sell ice cubes to Eskimos. Foolish, indeed.
Since then, the CEO that brought TradElect to the LSE, Clara Furse, has left without saying why she was leaving. Sources in the City -- London's equivalent of New York City's Wall Street -- tell me that TradElect's failure was the final straw for her tenure. The new CEO, Xavier Rolet, is reported to have immediately decided to put an end to TradElect.
The article goes on to describe the system's shortcomings and compare it to a competitor's Linux-based solution. Someone commented that the system should have been based on OpenVMS due to its stability and availability; I can't argue.
When it's mission-critical, it doesn't have to be Linux, it just can't be Windows. Simple as that.
Sunday, September 7. 2008
NASA ISS On-Orbit Status 6 September 2008
After transitioning the JSL (Joint Station LAN) network to the new Netgear wireless APs (Access Points, WAPs) which provide the ISS with WiFi (wireless+Ethernet) connectivity, Gregory today repeated functionality tests, abandoned earlier this week, in three Kibo JPM (JEM Pressurized Module) locations from the wireless SSC -11 laptop, and later also in the COL (Columbus Orbital Laboratory). Afterwards switching to "Proxim" APs, the new WiFi "Dolphin" BCRs (Barcode Readers) were also tested.
There's just this one little blurb about the wireless network in the whole article. I wonder how long before one of the laptops gets the latest 0day worm that turns it into a SPAM zombie?
Wednesday, July 23. 2008
Child Online Protection Act Overturned
A federal appeals court struck down as unconstitutional a Clinton-era law that would have forced websites with adult material to verify visitors' ages, dealing another blow to the government in a 10-year court battle over net censorship.
The 3rd U.S. Circuit Court of Appeals upheld on Tuesday a 2007 lower-court decision that the Child Online Protection Act violated the First Amendment since it was not the most effective way to keep children from visiting adult websites.
Emphasis is my own.
My question is this: Would this law be constitutional (in other words, not a breach of first amendment rights) if it were the most effective way of keeping children out of "adult" websites?
Both courts also found that the standards for material that had to be hidden from open browsing were so loosely defined that any content not suitable for a four-year-old would have been hidden behind an age-verification firewall.
While I have not read the law myself, nor would I likely comprehend the majority of it, I cannot imagine how one could legally outline unsuitable content. It seems the way to go, in most cases, is to be overly broad. You know, just to make sure you get everyone.
I also find it funny that they mention an "age-verification firewall." Are there firewalls now that can determine the age of the user on the machine that generated those packets? I think not.
"Unlike COPA, filters permit adults to determine if and when they want to use them and do not subject speakers to criminal or civil penalties," the court wrote.
This is the correct way to police your internet connection. If you have children in your home and are concerned about the websites they visit, there is no shortage of parental filtering software available. This is a clear case of over-governance.
What shocks me is the courts' technical savvy and their knowledge of the filtering alternatives available.
COPA was intended to be a narrower version of the 1996 Communications Decency Act, which would have catastrophically extended the rules of television 'decency' to the internet had the Supreme Court not emphatically rejected it in 1997.
"It is apparent that COPA, like the Communications Decency Act before it, 'effectively suppresses a large amount of speech that adults have a constitutional right to receive and to address to one another,' Reno, 521 U.S. at 874, 117 S.Ct. at 2346, and thus is overbroad. For this reason, COPA violates the First Amendment," the judges wrote. "These burdens would chill protected speech."
Again, emphasis is my own.
They [the ACLU] also argued the law would apply to anyone who wrote about mature subjects who also happened to have Google or Yahoo ads on their personal blog.
For its part, the government says the law was intended to apply to pornographic websites, not news sites. It also argues COPA's age restrictions would work with content filters.
It was "intended to apply to pornographic websites". Intended. If it were intended for just that, and not just to gain control over part of the internet, the law would have been written as such, in a much narrower fashion. As I mentioned above, it's hard to narrowly define what is "unsuitable", but there is already existing precedent on what is considered "pornographic."
I'm not advocating giving pornography to children, but this is a clear matter of parental responsibility, not government responsibility. Just as the V-chip allows parents to block what their children watch on television, there are many alternatives to blocking websites and content types just the same.
Wednesday, July 16. 2008
There was an article on Slashdot earlier today bemoaning the uselessness of CAPTCHAs. While I don't agree that they're useless, they have been fairly fruitless at stopping comment spam on my blog here.
The article mentions two possible "successors" to the standard image and audio CAPTCHA; let's take a look at them.
This one comes from a site called spamfizzle.com. Their approach to going beyond the standard 2-dimensional image is to go 3D. Each object in a scene is assigned a letter. Hundreds, if not thousands, of 3D images can be created from the exact same 3D "scene" simply by rendering it from different angles and with different lighting sources. You are then required to enter, in order, the letter on the cat's tail, the letter in the upper left window pane, and the number of branches on the tree. Using only letters, no numbers, a CAPTCHA requiring only 3 letters has 26*25*24 = 15,600 possible combinations. At 4 letters, it becomes 358,800 possibilities. At 5 letters, it's 7,893,600 possible combinations. There are also several other features that would render CAPTCHA-cracking programs obsolete within minutes, as well as features making it easier for humans to work with. I really don't feel that I can do justice to this article by summarizing it here. I strongly encourage you to read the article itself (even though it is hosted on a Windows platform).
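The combinatorics check out; they're plain permutations of the 26 letters:

    # P(26, k) = 26!/(26-k)!: the figures quoted above.
    import math

    for k in (3, 4, 5):
        print("%d letters: %s" % (k, format(math.perm(26, k), ",")))
    # 3 letters: 15,600
    # 4 letters: 358,800
    # 5 letters: 7,893,600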
This site requires that you really know your math. How many people are going to know enough geometry, trigonometry, and calculus to figure out this challenge/response?
What can I say?
Wednesday, July 16. 2008
Open Security Foundation To Maintain Attrition.org's Data Loss Database - Open Source
The Open Security Foundation (OSF) is pleased to announce that the DataLossDB (also known as the Data Loss Database - Open Source (DLDOS) currently run by Attrition.org) will be formally maintained as an ongoing project under the OSF umbrella organization as of July 15, 2008.
Attrition.org's Data Loss project, which was originally conceptualized in 2001 and has been maintained since July 2005, introduced DLDOS to the public in September of 2006. The project's core mission is to track the loss or theft of personally identifying information not just from the United States, but across the world. As of June 4, 2008, DataLossDB contains information on over 1,000 breaches of personal identifying information covering over 330 million records.
DataLossDB has become a recognized leader in the categorization of dataloss incidents over the past several years. In an effort to build off the current success and further enhance the project, the new relationship with OSF provides opportunities for growth, an improved data set, and expanded community involvement. "We've worked hard to research, gather, and make this data open to the public," says Kelly Todd, one of the project leaders for DataLossDB. "Hopefully, the migration to OSF will lead to more community participation, public awareness, and consumer advocacy by providing an open forum for submitting information."
The Open Security Foundation's DataLossDB will be free for download and use in non-profit work and research. The new website launch (http://www.datalossdb.org/) builds off of the current data set and provides an extensive list of new features. DataLossDB has attained rapid success due to a core group of volunteers who have populated and maintained the database. However, the new system will provide an open framework that allows the community to get involved and enhance the project. "For a data set as dynamic as this, it made sense to build it into a more user-driven format," states David Shettler, the lead developer for the Open Security Foundation. "With the release of this new site, the project can now be fed by anyone, from data loss victims to researchers."
This site is actually pretty neat. Not only does it have a searchable index, it also provides quick links to things like the latest incidents, largest incidents, and most discussed incidents; it even breaks the data down by type of loss (credit card numbers, social security numbers, and even medical records!).
This site manages to index so much information in so many useful ways, it's certainly worth supporting!
Tuesday, June 10. 2008
Steganography of VoIP Streams
In this paper, we circumscribe available steganographic techniques that can be used for creating covert channels for VoIP (Voice over Internet Protocol) streams. Apart from characterizing existing steganographic methods we provide new insights by presenting two new techniques.
First one is network steganography solution and exploits free/unused fields of the RTCP (Real-Time Control Protocol) and RTP (Real-Time Transport Protocol) protocols. The second method provides hybrid storage-timing covert channel by utilizing delayed audio packets.
The results of the experiment, that was performed, regardless of steganalysis, to estimate a total amount of data that can be covertly transferred in VoIP RTP stream during the typical call, are also included in this article.
I've reformatted the overview here for readability. This is a truly brilliant idea. I'm already a huge fan of cryptography, and steganography in particular. There are plenty of applications out there for "hiding" messages in the least-significant bits of images, MP3s, and several other file formats. This article brings steganography into the realm of real-time, two-way communication.
I've not yet had the chance to read the entire paper (16 pages), but it's loaded with formulas and figures that should give you a fairly realistic estimate of exactly how much covert bandwidth you have. The conclusion states that they were able to achieve 1.3 Mbit/sec of one-way throughput; for comparison, a typical POTS telephone line requires only 64 Kbit/sec to carry voice.
Picture this: The modem in your computer places an outbound call, you pick up your headset and put it on. The other end answers, and you're immediately placed on hold. Are you? While you're listening to the muzak on the other end, your computer is pulling out bits here and there. You hear a voice say "Hello?" You speak, and the "hold" music stops; now your computer is playing music to the other party. The person on the other end hears your voice, and upon recognizing it, responds with a hearty "hello!"
Couple this technique with a strong, public-key-based encryption algorithm and you've got truly secure real-time communication. Obfuscated amongst the elevator music playing back and forth is PKI-encrypted voice. Assuming someone is snooping on your communications, the steganography by itself makes the "out of band" voice chatter difficult to detect. Add PKI cryptography and you have the ultimate in caller ID (only the caller's public key would decrypt any useful voice data, and in theory only the caller has access to the private key used to encrypt it), confidentiality to an extreme degree, and guaranteed integrity (any altered data would not checksum correctly and would immediately be identified as having been tampered with).
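The paper's RTP/RTCP and packet-timing channels are its own contribution, but the underlying idea is easy to demonstrate with the classic least-significant-bit trick. Here's a sketch over 16-bit PCM samples; at one bit per sample, an 8,000-sample/sec voice stream leaks 8 Kbit/sec of covert data before you even touch the protocol headers:

    # Sketch: classic LSB steganography in audio samples. This illustrates
    # the concept only; the paper hides data in RTP/RTCP fields and in
    # delayed packets, not in the payload's LSBs.

    def embed(samples, message):
        """Overwrite one LSB per sample with one bit of the message."""
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        if len(bits) > len(samples):
            raise ValueError("cover audio too short for message")
        out = list(samples)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit   # inaudible one-bit change
        return out

    def extract(samples, nbytes):
        """Reassemble nbytes of hidden message from the samples' LSBs."""
        data = bytearray()
        for b in range(nbytes):
            byte = 0
            for i in range(8):
                byte |= (samples[b * 8 + i] & 1) << i
            data.append(byte)
        return bytes(data)

    cover = [1000, -2412, 31, 777] * 100   # stand-in for real voice samples
    stego = embed(cover, b"hello")
    assert extract(stego, 5) == b"hello"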
I would expect to see some implementation of this theory very soon. A likely project to be "first to market" with it would be Asterisk.
Tuesday, June 10. 2008
Military Supercomputer Sets Record
An American military supercomputer, assembled from components originally designed for video game machines, has reached a long-sought-after computing milestone by processing more than 1.026 quadrillion calculations per second.
These "components originally designed for video game machines" are Sony's [along with Toshiba and IBM] Cell processors, such as the one in the PlayStation 3. I've long known these processors were destined for huge things.
The new machine is more than twice as fast as the previous fastest supercomputer, the I.B.M. BlueGene/L, which is based at Lawrence Livermore National Laboratory in California.
The new $133 million supercomputer, called Roadrunner in a reference to the state bird of New Mexico, was devised and built by engineers and scientists at I.B.M. and Los Alamos National Laboratory, based in Los Alamos, N.M. It will be used principally to solve classified military problems to ensure that the nation’s stockpile of nuclear weapons will continue to work correctly as they age. The Roadrunner will simulate the behavior of the weapons in the first fraction of a second during an explosion.
There's already talk on top500.org about the Roadrunner. Presumably, it will take the top spot on the June 2008 list, set to be released June 17, 2008, during the International Supercomputing Conference in Dresden, Germany.
In all fairness, the Roadrunner isn't quite twice as fast as BlueGene/L anymore. It was when BGL was installed (at 478,200 GigaFLOPS), but BGL has received quite an upgrade since. It currently performs at 596,378 GigaFLOPS, which still pales in comparison to the Roadrunner's 1,026,000 GigaFLOPS.
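In consistent units, the ratios work out as follows:

    # Rmax figures in GigaFLOPS, as used above.
    bgl_installed = 478_200    # BlueGene/L as installed
    bgl_upgraded = 596_378     # after its upgrade
    roadrunner = 1_026_000     # ~1.026 PetaFLOPS

    print("vs. installed BG/L: %.2fx" % (roadrunner / bgl_installed))  # 2.15x
    print("vs. upgraded BG/L:  %.2fx" % (roadrunner / bgl_upgraded))   # 1.72x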
To put the performance of the machine in perspective, Thomas P. D’Agostino, the administrator of the National Nuclear Security Administration, said that if all six billion people on earth used hand calculators and performed calculations 24 hours a day and seven days a week, it would take them 46 years to do what the Roadrunner can in one day.
This formula also assumes 100% accuracy, which is something we humans rarely achieve, and never sustain.
The high-performance computing goal, known as a petaflop — one thousand trillion calculations per second — has long been viewed as a crucial milestone by military, technical and scientific organizations in the United States, as well as a growing group including Japan, China and the European Union. All view supercomputing technology as a symbol of national economic competitiveness.
"This is equivalent to the four-minute mile of supercomputing," said Jack Dongarra, a computer scientist at the University of Tennessee who for several decades has tracked the performance of the fastest computers.
The Roadrunner is based on a radical design that includes 12,960 chips that are an improved version of an I.B.M. Cell microprocessor, a parallel processing chip originally created for Sony’s PlayStation 3 video-game machine. The Sony chips are used as accelerators, or turbochargers, for portions of calculations.
The Roadrunner also includes a smaller number of more conventional Opteron processors, made by Advanced Micro Devices, which are already widely used in corporate servers.
Roadrunner, which consumes roughly three megawatts of power, or about the power required by a large suburban shopping center, requires three separate programming tools because it has three types of processors. Programmers have to figure out how to keep all of the 116,640 processor cores in the machine occupied simultaneously in order for it to run effectively.
By breaking the petaflop barrier sooner than had been generally expected, the United States’ supercomputer industry has been able to sustain a pace of continuous performance increases, improving a thousandfold in processing power in 11 years. The next thousandfold goal is the exaflop, which is a quintillion calculations per second, followed by the zettaflop, the yottaflop and the xeraflop.
Here's something I didn't know: we've increased processing power a thousandfold in the past 11 years. It's strange to think back to when there was a noticeable difference between my Pentium 166 MHz and my friend's Pentium 133 MHz. That was about 11 years ago, give or take.
All in all, they never mention what operating system this beast will run. I'm sure the details will come out a little at a time, but for now, the idea of 116,640 cores has me drooling!
Friday, May 23. 2008
Translating Twitter's Business Speak on Outages
"Twitter is, fundamentally, a messaging system. Twitter was not architected as a messaging system, however. For expediency's sake, Twitter was built with technologies and practices that are more appropriate to a content management system. Over the last year and a half we've tried to make our system behave like a messaging system as much as possible, but that's introduced a great deal of complexity and unpredictability. When we're in crisis mode, adding more instrumentation to help us navigate the web of interdependencies in our current architecture is often our primary recourse. This is, clearly, not optimal."
Translation: We've got a ton of work ahead. Flawed strategic thinking about what the service is and what it might become has put Twitter in a hole of its own making.
Twitter has got to be the single biggest waste of bits since HamsterDance. Unfortunately, for the moment, it's here and we all have to live with it. While these guys are experiencing "growing pains", I'm counting the days until it goes the way of the FlashMob and Milli Vanilli. Yes, I provided links for those of you fortunate enough to have lived through those "fads" and managed to forget them already.
In the meantime, I have a few choice "tweets" on Twitter that I'll attribute to my buddy, known simply as "Aaron":
Aaron is twittering: I have come to the conclusion that unless you are an A-list internet celeb or have a large ecosystem of followers, twitter is boring.
Aaron is twittering: Insert random thing I'm doing that no one cares about.
And last, but not least...
Aaron is twittering: Twitter's new slogan should be "Twitter, Reliable as Windows ME".
Friday, May 16. 2008
Icahn Takes on Yahoo Board
Billionaire investor Carl Icahn has sent a letter to Yahoo's board announcing he is nominating 10 candidates to replace all incumbent directors at the company's shareholders meeting in July.
The move, rumored since earlier this week, is intended to ultimately reignite merger negotiations between Yahoo and Microsoft.
You can be sure that Microsoft's monopoly money is backing this move.
In the letter, distributed this morning to the press and addressed to Yahoo's board Chairman Roy Bostock, Icahn charges the board with acting irrationally and losing the faith of shareholders and Microsoft.
Microsoft declined to comment. Yahoo did not immediately respond to a request for comment.
"I and many of your shareholders strongly believe that a combination between Yahoo and Microsoft would form a dynamic company and more importantly would be a force strong enough to compete with Google on the Internet," Icahn wrote.
Icahn decided to launch a proxy fight because in the past week "a number" of shareholders asked him to lead such an effort to oust the current board members and attempt to revive merger negotiations between Yahoo and Microsoft.
"I believe that a combination between Microsoft and Yahoo is by far the most sensible path for both companies," he wrote.
Instead of elevating Microsoft to Yahoo's level, all this deal would do is devalue Yahoo to Microsoft's level. Microsoft has proven time and again that they are behind the curve when it comes to the internet. Do they really believe that they can just buy their way into the market? Well, it's worked for them before in almost every case, but they've yet to really lock horns with Google.
Nonetheless, Microsoft officials have said repeatedly since withdrawing the offer that the company is no longer interested in acquiring Yahoo, something Icahn doesn't address in his letter. All along, Microsoft had indicated its readiness to launch a proxy fight to oust Yahoo's board and replace it with its own candidates, but eventually Microsoft decided against that option, saying that it wasn't interested in engaging in a hostile and potentially long process.
They publicly say they're not interested in a hostile takeover, but look where we are. Coincidence?
Icahn also requested of Bostock that the board not engage in any actions that might "in any way impede a future Microsoft merger" before allowing shareholders "to opine on them."
This is likely a reference to so-called poison pill maneuvers or to partnerships that Microsoft has indicated it would find undesirable, like the possible outsourcing of a part of Yahoo's search ad business to Google.
"I sincerely hope you heed the wishes of your shareholders and move expeditiously to negotiate a merger with Microsoft, thereby making a proxy fight unnecessary," Icahn wrote in closing the letter.
Various big Yahoo shareholders have expressed their displeasure with Yahoo's board and management for, in their view, not negotiating in good faith with Microsoft and causing the talks to collapse. Yahoo formally rejected Microsoft's original offer on Feb. 11, saying it undervalued the company.
Microsoft has proven time and time again that there is no such thing as "good faith" as far as they're concerned.
I would hate to see such a deal go through, as it would result in the demise of Yahoo, one of the veterans of the internet. On the other hand, it would be a huge waste of money for Microsoft, which I am all for. Bleed the beast! What Microsoft has not realized is that if this deal goes through, there will be only one clear winner: Google. The engineers and managers worth their salt who work at Yahoo do so because they do not want to work for Microsoft. They're betting on the underdog. If Microsoft buys Yahoo, at any price, there will be a mass exodus of talent from Yahoo. Google knows this and will snatch them up immediately. There's no "retention package" in the world that will change that. Microsoft will end up buying the worthless hull of a company, with no real substance, at a huge price. They lose. Yahoo will disappear, becoming just another cog in the Microsoft machine. They lose. Google will snatch up a whole lot of talent and a great deal of mindshare. They win.
Sunday, April 20. 2008
CNN Web Site Targeted
CNN was targeted Thursday by attempts to interrupt its news Web site, resulting in countermeasures that caused the service to be slow or unavailable to some users in limited areas of Asia.
It appears that this was your run-of-the-mill denial-of-service attack, almost guaranteed to have been carried out by a botnet. If I wanted to take it one step further, I'd say it was almost certainly a TCP SYN flood: send enough connection requests without ever completing the handshake, and the target's backlog of half-open connections fills up, crowding out legitimate users.
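For illustration only (this is generic, not anything specific to the CNN incident), here's a minimal Python sketch of watching for the classic symptom on a Linux server: a pile-up of connections stuck in SYN_RECV. The threshold and interval are arbitrary placeholders.

    import subprocess
    import time

    THRESHOLD = 200  # arbitrary placeholder; tune to your normal load
    INTERVAL = 10    # seconds between checks

    def half_open_count():
        # SYN_RECV is the half-open state a SYN flood piles up.
        out = subprocess.run(["netstat", "-ant"],
                             capture_output=True, text=True).stdout
        return sum(1 for line in out.splitlines() if "SYN_RECV" in line)

    while True:
        n = half_open_count()
        if n > THRESHOLD:
            print("Possible SYN flood: %d half-open connections" % n)
        time.sleep(INTERVAL)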
"CNN took preventative measures to filter traffic in response to attempts to disrupt our Web site. A small percentage of CNN.com users in Asia are impacted," the network said in a statement.
CNN uses Akamai, last I knew. Akamai uses anycast to direct your web request to the nearest available server. Most likely, CNN asked Akamai to deny requests coming from certain netblocks at the servers handling the anycast. Those who did get through would then be routed to a more distant server; hence "users in Asia are impacted" by slow speeds and failed connections.
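To make the netblock-filtering idea concrete, here's a minimal Python sketch of the kind of source-address check an edge server could apply. The CIDR ranges are made-up placeholders (RFC 5737 documentation blocks), and this is in no way Akamai's actual mechanism.

    from ipaddress import ip_address, ip_network

    # Hypothetical netblocks to drop; a real list would come from
    # the attack traffic actually observed.
    BLOCKED = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

    def should_drop(source_ip):
        # True if the request's source address falls in any blocked CIDR.
        addr = ip_address(source_ip)
        return any(addr in net for net in BLOCKED)

    print(should_drop("203.0.113.42"))  # True
    print(should_drop("192.0.2.7"))     # False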
A CNN spokesman said the Web site began to notice problems around midday Thursday and took measures to isolate the trouble by limiting the number of users who could access it from specific geographic areas.
This is what makes me think that Akamai was involved in mitigating this attack.
The attempt came as tech-oriented Web sites in Asia were reporting calls from hacker groups in China for denial-of-service attacks to be launched against the CNN Web site on Saturday over the network's coverage of unrest in Tibet.
Well, this is either a frame-up, or the Chinese government is making a poor attempt to silence information that might make it look bad.
Friday, April 4. 2008
We Spied on 36,000 Customers Using the Internet, Admits BT
BT tested secret "spyware" on tens of thousands of its broadband customers without their knowledge, it admitted yesterday.
It carried out covert trials of a system which monitors every internet page a user visits.
All for the sake of "targeted advertising." *sigh*
An investigation into the affair has been started by the Information Commissioner, the personal data watchdog.
I'm glad to see that someone is investigating the legality of this. Since this happened in the UK, the privacy laws involved are certainly more restrictive than those in the U.S. (if you can believe that).
Privacy campaigners reacted with horror, accusing BT of illegal interception on a huge scale. Yesterday, the company was forced to admit that it had monitored the web browsing habits of 36,000 customers.
The scandal came to light only after some customers stumbled across tell-tale signs of spying. At first, they were wrongly told a software virus was to blame.
"A software virus was to blame." When you're working tech support and all else fails, blame the problem on a virus or software issue. Viruses and spyware have become a blanket explanation for almost anything a tech support group does not know how to handle. This usually results in the user taking their system to a professional who can then examine it hands-on.
One of two things was happening here: (a) the tech support people were not made aware of the "testing" so as not to divulge this information to customers (quite plausible), or (b) they knew, but were not allowed to speak of it.
The Phorm system then scans every website a customer visits, silently checking for keywords and building up a unique picture of their interests.
If a user searches online to buy a holiday or expensive TV, for example, or looks for internet dating services or advice on weight loss, the Phorm system will add all the information to their file.
Typical keyword-gathering advertising, much like Google's AdWords, except not quite so transparent as to how it works and what it logs.
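Purely to illustrate the concept (this is not Phorm's code or algorithm, just a sketch of the general idea), keyword profiling can be as crude as tallying words of interest across the pages a user fetches:

    import re
    from collections import Counter

    # Hypothetical interest categories; a real system would map far
    # larger keyword lists to advertiser campaigns.
    KEYWORDS = {"holiday": "travel", "flight": "travel",
                "dating": "dating", "diet": "weight-loss"}

    profile = Counter()

    def scan_page(html):
        # Tokenize the page and bump the matching interest categories.
        for word in re.findall(r"[a-z]+", html.lower()):
            if word in KEYWORDS:
                profile[KEYWORDS[word]] += 1

    scan_page("<p>Cheap flight deals for your summer holiday</p>")
    print(profile)  # Counter({'travel': 2})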
Nicholas Bohm, of the Foundation for Information Policy Research, said BT's actions amounted to illegal data interception.
He told the BBC: "It seems a clear-cut case of illegal interception of communication."
Again, I am not a lawyer, and I'm certainly not familiar with UK law, but if that's not "illegal data interception", it certainly should be.
Sir Tim Berners-Lee, the British inventor and founding father of the worldwide web, said a person's data and web history are private property.
"It's mine - you can't have it," he added.
"If you want to use it for something, then you have to negotiate with me. I have to agree, I have to understand what I'm getting in return."
Exactly. I don't care if I'm banking online or just reading Slashdot; my packets are my packets. I don't have anything to hide in my data (other than the occasional CC number when ordering online); it's simply a matter of principle. I pay a monthly fee to have my IP packets sent and received without tampering, snooping, or filtering.
This is yet another strong example of why encryption should be mandatory. I encrypt everything I can, from HTTPS traffic to PGP-signed/encrypted email to SSL connections between my applications and their database back-ends.
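As one small example of that last point, here's a minimal Python sketch of wrapping a client socket in TLS before talking to a server. The hostname and port are placeholders; the server has to speak TLS on that port and present a valid certificate.

    import socket
    import ssl

    HOST = "db.example.com"  # placeholder; your TLS-enabled server
    PORT = 5432              # placeholder port

    context = ssl.create_default_context()  # verifies the cert by default

    with socket.create_connection((HOST, PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            # Everything sent over tls_sock is encrypted in transit.
            print("Negotiated:", tls_sock.version())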
Some good places to start, in regard to securing your data, are as follows:
GNU Privacy Handbook (GPG - Free OpenPGP implementation)
PGP (Commercial, proprietary OpenPGP implementation)
OpenSSH (ssh, scp, sftp, etc)
CryptoLoop file system (Linux)
There are many other ways to encrypt data on your local hard drives and in transit across the network. A Google search for "encryption software" is a good place to start.