Friday, February 3. 2012
I do not yet have all of the hardware and networking gear set up on my network; so this is merely a description of what I'm looking to do, to gather some ideas and feedback, and then figure out how to proceed. If, after reading the details below, you want to join, or have suggestions, please let me know!
History / Ideas
I've been thinking about starting a "nerd net" for quite some time. I have many friends that use a Linux/*BSD machine at their border, and typically have some kind of services running inside the network. I'd like to link these networks together and share access to services. This opens up all kinds of possibilities...
To avoid a single point of failure, and saturation of any one network's bandwidth, we would avoid linking in a hub-and-spoke fashion. Instead, I propose that we maintain a list of active nodes (essentially, each person's gateway box) and try to maintain at least 3 active VPN connections at a time. In other words, each node on the network would have a VPN tunnel to at least 3 other nodes. It would be very useful to have control of a DNS zone for maintaining this list. Each node would have its own A record; say, mynode.domain.com. Each time a node wants to connect to the network, it would request the A record for something like connect.domain.com, which would hand out the A records of the registered nodes in round-robin fashion.
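The discovery scheme above could be sketched as a zone fragment like the following. All names and addresses are placeholders (domain.com is just the example name from the text):

```
; hypothetical zone fragment: one A record per node, plus a shared
; "connect" name carrying every node's address
mynode     IN  A  192.0.2.10
yournode   IN  A  192.0.2.20
connect    IN  A  192.0.2.10
connect    IN  A  192.0.2.20
```

Most DNS servers rotate the order of multiple A records for the same name between responses, which is what gives the round-robin effect.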
Authentication will be central in this; so some knowledge of SSL will be beneficial. I, and possibly a few key others, would maintain access to a CA signing key and the nodes would be authenticated to the network via certificates signed by that CA key. Any node that you attempt to connect to should trust you based on that certificate; and, based on the certificate presented, you should trust any connection attempt with a valid, signed certificate. I, and possibly others, can assist with any certificate-based configuration issues that you might have.
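A rough sketch of that CA workflow, using the openssl command line. The filenames, key sizes, and subject names here are placeholders, not an agreed-upon layout:

```shell
# 1. One-time: create the CA key and a self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/CN=nerdnet-ca"

# 2. Per node: create a key and a certificate request, then have the CA sign it
openssl genrsa -out mynode.key 2048
openssl req -new -key mynode.key -out mynode.csr -subj "/CN=mynode"
openssl x509 -req -in mynode.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out mynode.crt -days 365

# 3. Any node holding ca.crt can now verify any other node's certificate
openssl verify -CAfile ca.crt mynode.crt
```

Only ca.crt needs to be distributed to every node; ca.key stays with whoever runs the CA.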
To allow for a private network that's going to be potentially changing topology on a regular basis, we'll need a routing protocol, such as RIPv2. For simplicity, each network would receive its own /24 of RFC1918 space, with the gateway box running the VPN software being the "node" on the network. Most likely, you'd want to set up split-tunneling on the gateway box so that any requests going to the private network route over the VPN and the rest of your traffic (web surfing, email, etc) goes out your normal internet connection.
With each node on the network being connected to at least 3 other nodes at any given time, that requires us to keep track of the various routes from one network to another. Instead of trying to keep track of this by hand, we could easily set up RIPv2 and announce the routes we "know". I imagine using netblocks in the 172.16.0.0/12 range, with each network having a /24 and being multi-homed (connected to 3+ other nodes), there could potentially be several routes from one network to another. A light-weight, distance-vector routing protocol like RIPv2 seems to be a good fit: there are open-source implementations, and it's simple and proven. The route to any network from yours would be the one with the fewest intermediate hops.
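As a concrete sketch, assuming Quagga's ripd as the RIPv2 implementation, a node's config might look something like this (the /24 shown is a made-up example from the 172.16.0.0/12 range):

```
! hypothetical /etc/quagga/ripd.conf for one node
router rip
 version 2
 network 172.16.42.0/24    ! announce this node's own /24
 redistribute connected    ! announce the VPN tunnel networks we're attached to
```

Each node announces what it can reach, and RIP converges on the shortest hop-count path.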
Another, more complex, possibility would be OSPF, using OpenBSD's open-source implementation. Given that each node will have different bandwidth available, OSPF's support for per-link costs may make it worth trying.
Services and other ideas
This would be the whole reason for the network!
I would be interested in providing several services to the network, but not publicly. For one, a Linux- or FreeBSD-based shell server with access to the internet, compilers/development tools, documentation, email (anyone remember pine+procmail?), etc. A (small-ish) public web space to let people know that you're part of the network; something like Apache's mod_userdir. A blog application accessible only from within the nerd-net.
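For the public web space, a mod_userdir setup could be as simple as the following Apache 2.2-style fragment (paths are the conventional defaults, not a decided layout):

```apache
# hypothetical httpd.conf snippet: serve ~user/public_html as /~user/
<IfModule mod_userdir.c>
    UserDir public_html
    <Directory /home/*/public_html>
        Options Indexes FollowSymLinks
        Order allow,deny
        Allow from all
    </Directory>
</IfModule>
```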
I would also maintain the internal network's intranet site. This could be a site used to post updates of system maintenance, new services being offered/tested, and a way to maintain an up-to-date list of all of the nodes.
A private IRC server is definitely on the TODO list. Any and all bots would be allowed; if anyone would be interested in linking the IRC daemons, I'll likely be using either Blackened or UltimateIRCd.
I could provide SMTP services for the network. If we find a zone to use, I can provide email services for that zone. Technically, unlimited user@zone email addresses. I would also be willing to host DNS services for the internal network; providing dynamic DNS and a "view" for those coming in via the network.
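The "view" idea, assuming BIND as the nameserver, might be sketched like this. The zone name and file are placeholders, and the match-clients range is the 172.16.0.0/12 space proposed above:

```
// hypothetical named.conf fragment: internal answers only for VPN clients
view "nerdnet" {
    match-clients { 172.16.0.0/12; };
    zone "nerd.example" {
        type master;
        file "db.nerd.example.internal";
    };
};
```

Queries arriving from outside the VPN ranges would fall through to a different view (or get no answer at all).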
Another service I'd consider offering would be your own PostgreSQL database for development and testing. A big part of what I envision for this network is that it'll be a big collection of computer-savvy geeks; what better place to deploy and test code than on a network inhabited solely by those with the clue to help you in debugging, vuln testing, etc.
Other possibilities include shared-CPU time using tools like distcc(1). Maybe remote storage (NFS/iSCSI)?
The benefits of a private network are many, and extend beyond just the sharing of services. This is a chance to build a real-world, potentially large-scale network with changing topology. Any member wishing to add/configure new services or features is welcomed to; and the services could be advertised/listed on the internal network's intranet site.
Other benefits include the ability to policy-route certain traffic. For instance, I would be interested in routing my DNS traffic over the VPN to be routed out someone else's connection; it's low traffic, but my ISP mangles my DNS traffic to route to their servers regardless of what server I point to. Such a configuration could even be set up so that my DNS traffic goes out through a different node's connection each time (i.e., a type of load-balancing).
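With Linux's iproute2, that policy routing could be sketched as follows. This is only an outline (it assumes a tun0 VPN interface and an arbitrary routing table number, and would need root):

```shell
# mark outbound DNS packets, then route marked packets via a separate
# table whose default route points into the VPN tunnel
iptables -t mangle -A OUTPUT -p udp --dport 53 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default dev tun0 table 100
```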
As the name says, this is a Virtual Private Network, and we should seek to keep it private. Configuring an Apache reverse-proxy, or some type of port-forward from your external IP address into the network, would be frowned upon. Any need to create such access for the outside should be discussed with the entire group, and hopefully some sort of consensus reached.
It would also be the responsibility of each node's owner to keep the machine secure and up-to-date. Any breach of one of the nodes would lead to an open route to the network.
I will create the CA key used to sign all certificates used to access the VPN; but, as I do not wish to be the sole decider in who is allowed on the network and who is not, I plan to create a handful of sub-CA certificates to be distributed to trusted associates who may also sign certificates for potential users. If you've been given a sub-CA certificate, you're being trusted to know who you're letting into our private little clubhouse. :) Using multiple sub-CA certificates will also allow for anyone to validate who provided access for this particular user/network; and the ability to revoke access as necessary (hoping that it wouldn't ever become necessary).
A "nerd net" would be a fun project to gather a bunch of us geeks together and share services. This is the first time I'm really throwing the idea out there (beyond mentioning it to a few people here and there), so if anyone else thinks it would be fun, get ahold of me!
Wednesday, June 24. 2009
Friday, December 12. 2008
Where did my desktop go?!
I'm slowly but surely finding my programs. I believe the installer was trying to reinstall what I already had, but with the newer packages. I'm not sure on that, as I was watching more TV than the screen. ;) I'm about 1000 packages short, so I'm guessing it missed a few.
KDE 4.1 is quite a change having been a long-time 3.x user. I'm not sure I like it just yet, but everything I've read says that after a couple days, you can't imagine computing without it. I'm willing to give it a shot.
I'm hoping to get Xen installed and configured under this new installation. I've had problems with bootloaders in the past; I'm hoping this changes things. I can certainly spare 768 - 1024 MB of RAM, 50 - 100 GB of hard drive, and a good share of one core. For as much as I don't like the idea of software virtualization (that opens a can of security worms, if you ask me), I'm really dying to try out Xen and set up a secured shell server for my friends.
Konqueror 4.1.2 works pretty darn well, too. It works with at least 95% of the AJAX and other trickery on facebook.com. It works with the administration page of this blog; something previous versions had problems with. I give Konqueror -- just on its own merits -- two thumbs up.
More thoughts to come...
Sunday, September 7. 2008
Apparently the I/O controller in desktop1 was dying on me, so I had to replace the motherboard. I figured if I was going to buy a new board, I'd get a new processor, too. This time, I was definitely going multi-core.
As it turns out, the one board that they had in stock that would take both a dual-core and quad-core AMD64 was not compatible with anything I already had. I ended up having to replace everything except the hard drives and the DVD burner. I even had to replace the boot/OS drive, an 80 GB IDE, because it was giving seek errors even with the new motherboard. The drive is older than dirt, so I'm not surprised.
They had single and dual-core AMD64's, but no quads (Phenom). They had single-, dual-, and quad-core Pentium chips, but all of my software is built for AMD64, and the price difference was almost 3-to-1. I went ahead and bought the dual-core AMD64 for now; but I have a feeling that within a couple months, I'll probably end up hitting PriceWatch and buying a quad-core AMD64 Phenom.
In having to replace everything, I decided to splurge a little and put together a machine that won't need to be upgraded for a while. Here's a quick list of old -> new hardware.
Unknown motherboard -> MSI K9A2 Neo
3 GB PC 2700 RAM (3 x 1GB) -> 4 GB PC5300 RAM (2 x 2GB - with 2 slots left free)
Unknown video (NVidia driver) AGP 8x -> MSI-branded GeForce 8400 (256 MB VRAM) PCI-E 16x
500 Watt P/S -> 650 Watt P/S
AMD64 2.0 GHz processor -> AMD64 X2 Dual-Core 2.3 GHz
RealTek Rtl8139 NIC (10/100) -> RealTek Rtl8111 (10/100/1000) && 3Com 3c905 (10/100)
The sound card that came on-board has full 7.1 surround-sound, which is easily better than the PCI soundcard I had previously. This new board takes me down to 1 IDE controller and only 2 PCI slots. I put the 4-port USB controller back in one of the PCI slots, and the 3Com NIC in the other.
I previously had an 80 and 320 GB IDE hard drive, an IDE CD/DVD burner, a 250 GB SATA disk, and a 500 GB SATA disk. I had to scrap the 80 GB as it was dying, and I replaced it with another 250 GB SATA that was not in use. So now I'm down to just the 320 GB hard drive and CD/DVD burner on the one IDE controller. I also have 2 x 250 GB SATA, and a 500 GB SATA, for a grand total of 1.3 TB of storage.
The OS runs much faster being on a SATA drive instead of an older, 5400-rpm IDE hard drive. I still have 1 (of 4) SATA ports available, so any further expansion of storage will most likely be an additional SATA drive instead of something IDE.
With the new boot drive, new multi-core processor, and new video, I went ahead and re-installed Linux on the machine as well. I'm now running Kubuntu 8.04.1, stock kernel, that recognizes all of my RAM, the dual cores on the processor, and the stock 'nv' driver is handling my basic desktop resolution for the time being. I'm still trying to track down all of the packages I need to reinstall; everything from the PostgreSQL client and development libs to nmap to Perl modules.
The next big project for the new hardware is to set up Xen. I've used VMware off and on for probably 5 or 6 years, and it's just a pain to have to remember to allow localhost to connect to my X session, su(1) to root, launch VMware, and then boot the secondary OS. With Xen, I'm planning on shaving off about 1/2 GB of RAM, about 1/4 of my CPU time (if I can tune it that granularly), and 20 - 50 GB of storage for the virtual disk. I'm going to turn this into a shell server that will run alongside my KDE desktop.
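A guest carved out along those lines might be defined with a Xen config file roughly like this. The name, paths, and bridge are placeholders; the sizes match the estimates above:

```
# hypothetical /etc/xen/shell.cfg guest definition
name    = "shell"
memory  = 512                                    # ~1/2 GB of RAM
vcpus   = 1
cpus    = "1"                                    # pin to the second core
disk    = [ 'file:/var/xen/shell.img,xvda,w' ]   # 20-50 GB disk image
vif     = [ 'bridge=xenbr0' ]
```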
All in all, I have to say it was money well spent!
Thursday, May 22. 2008
A while back, I set up a transparent Squid proxy at my border to limit my exposure to "drive-by downloads." It's a pretty standard setup; Squid running on the gateway/firewall, and iptables configured to route all tcp/80 traffic back into itself on port 3128.
An unfortunate side effect of this is that tcptraceroute breaks. As port 80 is the default port that tcptraceroute uses (as the destination port), you end up with a traceroute that looks something like this:
(root@desktop1) ~# tcptraceroute www.ebay.com
Selected device eth0, address 172.25.X.XXX, port 39068 for outgoing packets
Tracing the path to www.ebay.com (184.108.40.206) on TCP port 80 (www), 30 hops max
1 hp-core.ebay.com (220.127.116.11) [open] 0.558 ms 0.384 ms 0.315 ms
I can assure you, I'm not 1 hop off from www.ebay.com. Since iptables is mangling the packet at the very first hop, my gateway/firewall, I'm receiving the SYN-ACK from that first hop.
Keep in mind, tcptraceroute will use any destination port you specify, but the default is port 80 since it's usually allowed through most firewalls, and often open.
Given that the default TTL of a Linux-based computer is 64, I can use the TTL match in iptables to selectively capture what tcp/80 traffic goes to the proxy and what does not. Consider the following rule:
iptables -t nat -A PREROUTING -i $IF_INT -p tcp --dport 80 \
-j DNAT --to-destination 172.25.X.XXX:3128
This is the rule that routes all outbound traffic, originating on the internal network, to the Squid proxy. Now if we change that rule to match only packets with a TTL larger than, say, 48, we end up with the following:
iptables -t nat -A PREROUTING -m ttl --ttl-gt 48 -i $IF_INT -p tcp --dport 80 \
-j DNAT --to-destination 172.25.X.XXX:3128
With this rule in place, a tcptraceroute headed for www.ebay.com on tcp/80 looks more like the following:
(root@desktop1) ~# tcptraceroute -f6 www.ebay.com
Selected device eth0, address 172.25.X.XXX, port 57684 for outgoing packets
Tracing the path to www.ebay.com (18.104.22.168) on TCP port 80 (www), 30 hops max
6 so-1-2-0.gar2.chi1.bbnplanet.net (22.214.171.124) 76.668 ms 23.024 ms 30.840 ms
7 ae-31-55.ebr1.Chicago1.Level3.net (126.96.36.199) 57.014 ms 34.438 ms 36.097 ms
8 ae-68.ebr3.Chicago1.Level3.net (188.8.131.52) 25.716 ms 22.857 ms 32.941 ms
9 ae-3.ebr2.Denver1.Level3.net (184.108.40.206) 84.660 ms 49.091 ms 40.722 ms
10 ae-1-100.ebr1.Denver1.Level3.net (220.127.116.11) 79.996 ms 52.921 ms 52.897 ms
11 ae-3.ebr2.SanJose1.Level3.net (18.104.22.168) 71.323 ms 70.999 ms 71.651 ms
12 ae-82-82.csw3.SanJose1.Level3.net (22.214.171.124) 66.468 ms 70.863 ms 72.099 ms
13 ae-32-89.car2.SanJose1.Level3.net (126.96.36.199) 65.688 ms 62.794 ms 59.597 ms
14 EBAY-INC.car2.SanJose1.Level3.net (188.8.131.52) 60.576 ms 65.507 ms 58.566 ms
15 10.6.1.158 71.070 ms 59.413 ms 59.670 ms
16 10.6.1.146 61.397 ms 88.846 ms 68.280 ms
17 hp-core.ebay.com (184.108.40.206) [open] 84.341 ms 59.523 ms 62.286 ms
Now that looks a little more reasonable. Given that I'm 1 hop off from the gateway/firewall, and Linux uses a default TTL of 64, all of my packets generated by, say, Firefox, will come into $IF_INT with a TTL of 64. With 64 > 48, the DNAT rule matches, and the request gets routed through the Squid. Since tcptraceroute works like any other traceroute tool, except that it uses TCP SYN packets, its first packet will have a TTL of only 1. With 1 < 48, it does not match the DNAT rule, and passes through unchanged. The second packet will have a TTL of 2, with 2 < 48, and so on. As almost all destinations on the internet are reachable in 30 hops or less, this guarantees that my browser-generated requests are proxied, while my diagnostic requests pass through unchanged.
You can view/change your default TTL as such:
(root@desktop1) ~# cat /proc/sys/net/ipv4/ip_default_ttl
64
(root@desktop1) ~# echo 64 > /proc/sys/net/ipv4/ip_default_ttl
Needless to say, the IP addresses have been changed to protect the innocent. ;)
Sunday, November 18. 2007
The TV system is coming along. It now POSTs, displays to both monitors, and the OS has been upgraded to the latest packages out of Kubuntu Feisty (7.04).
It has a Radeon X1300 video card, which appears to have 512 MB of RAM.
The next steps will be to find a good resolution that will stretch across both displays, and configure the remote control software.
# cat /etc/*release
# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 79
model name : AMD Athlon(tm) 64 Processor 3500+
stepping : 2
cpu MHz : 2200.057
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow up pni cx16 lahf_lm svm cr8_legacy
bogomips : 4402.59
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc
Saturday, November 17. 2007
A Brand New TV!
There are plenty of pictures to look at here.
Basically, this setup consists of an AMD64-based PC. It has a 2.2 GHz AMD64, with 2 GB of RAM, a DVD burner, a Hauppauge WinTV 150, and a Radeon-based video card with dual output. I have 2 x 19" widescreen (16:9) Samsung MultiSync monitors hooked up to it for video. One display is connected by VGA; the other, HDMI. The audio is run out to a 100 watt Philips stereo and DVD player. I also have a wireless keyboard and mouse for navigating the directory tree of content.
I'm also considering adding my external USB PVR so that I can watch one channel while recording on another. Or record two channels at once. Using either cron(8) or at(1), I should be able to schedule recordings ahead of time and just let it go.
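Scheduled recording along those lines might look like this; the wrapper script and its options are hypothetical, but the cron(8)/at(1) usage is standard:

```
# hypothetical crontab(5) entry: record channel 5 every Friday
# at 20:00 for 60 minutes via a not-yet-written wrapper script
0 20 * * 5  /usr/local/bin/record-show --channel 5 --minutes 60

# or as a one-off job with at(1):
#   echo "/usr/local/bin/record-show --channel 5 --minutes 60" | at 20:00
```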
The system runs Kubuntu Linux with a KDE desktop. I will be using mplayer and Kaffeine to play video; and XMMS to play music.
The Hauppauge WinTV card comes with an infrared remote control, which I have not yet configured. I will be using the LIRC (Linux Infrared Remote Control) project to map remote control buttons to certain tasks and functions. Volume will be controlled via this remote, and certain tasks will be mapped to buttons, such as opening XMMS with a random playlist of music, or opening mplayer full-screen and reading from the S-Video line in.
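The button-to-task mapping would live in LIRC's per-user config, read by irexec. A minimal sketch, assuming button names that would actually come from the remote's lircd.conf:

```
# hypothetical ~/.lircrc: run a command when a remote button is pressed
begin
    prog   = irexec
    button = PLAY
    config = xmms --play
end
begin
    prog   = irexec
    button = PAUSE
    config = xmms --play-pause
end
```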
The system also has 2 x 250 GB SATA hard drives, and an IDE DVD burner. With this hardware, the system will function as a DVR. I will be able to record shows, movies, performances, etc, with the option to record it off to a DVD. There is also a machine with 1.1 TB of storage - available via NFS - that houses most of the content I already have. This basically allows me to watch what I want at my leisure. This will also let me fast-forward through commercials.
Once the third-party software is installed and configured, I will be contacting my cable provider for another digital cable box. Most/all of these boxes come with some type of line out, whether it be S-Video or composite video. From there, it should be trivial to record video straight to the hard drive.
With a hardware configuration including a digital cable box, I will be able to watch/record in digital as well.
Once the system is complete and in-use, I will post further updates with the software used and how it's configured. For now, I have all of the hardware together and it's time to get a little sleep.
Wednesday, August 29. 2007
Tuesday, May 8. 2007
Saturday, April 14. 2007
Thursday, June 29. 2006