It's hardly a first, but I did find some of the information out there a bit spread out. So, just in case I need to go through this again, I figured a blog post might be interesting; doubly so as I've not really got anything interesting from work that I can blog about at the moment! So, a bit of background. The AppleTV (ATV) is basically a dumb x86 PC: Pentium M 1GHz, 256MB of RAM, a 40 or 160GB PATA HDD, 1x USB 2, 1x IR receiver, 10/100Mb Ethernet, 802.
Following up to yesterday's post on LXC: Linux Containers, I had a quick play with two ULA subnets (aka RFC 4193 addresses; don't forget that site-local was deprecated), one subnet dedicated to the LXC containers and one for my normal LAN. Perhaps unsurprisingly, IPv6 appears to work perfectly well in this setup. I also altered the setup and bridged a container directly to eth0 on the host node, and watched the container assign itself a stateless address based on my prefix; again, everything appeared to work perfectly well out onto the public v6 network (courtesy of Hurricane Electric's Tunnel Broker service).
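For anyone wanting to mint their own ULA subnet: RFC 4193 carves the fd00::/8 space into /48s identified by 40 pseudo-random bits. A rough sketch of generating one (using /dev/urandom as a shortcut, rather than the SHA-1-of-timestamp-and-MAC method the RFC actually suggests):

```shell
# Generate an RFC 4193 ULA prefix: "fd" + 40 pseudo-random bits = a /48.
# Pull 5 random bytes and render them as 10 hex characters.
suffix=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')

# Slot the hex digits into the fdXX:XXXX:XXXX::/48 shape.
prefix="fd${suffix:0:2}:${suffix:2:4}:${suffix:6:4}::/48"
echo "$prefix"
```

Each run produces a different prefix, which is the point: random /48s make accidental collisions between sites (say, over a VPN) vanishingly unlikely.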
I've been toying with migrating my server into a containerized system, and almost bought a new server in preparation to migrate everything across. I'd chosen and tested my solution, OpenVZ. All was good with the world. Until I saw that OpenVZ was effectively being dropped from Ubuntu 10.04 (Lucid Lynx) and most likely Debian 6.0 (Squeeze). The reason for the drop is simply that the OpenVZ patches haven't been forward-ported to the current kernel.
I've been playing with Micromiser for a few days, and wanted to graph what it claims to be saving on one of the servers. Luckily this is pretty easy with Munin (which is already running on the box), since Micromiser logs to syslog occasionally. Below is the plugin I hacked together: it looks at syslog and uses sed to extract the percentage saving. It's not pretty, but it does work. Perhaps this'll save you a few minutes.
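As a reference point, a minimal Munin plugin along those lines can look like the sketch below. Note the Micromiser log format is an assumption on my part (a line containing something like "saving 23%"), so the sed pattern will need adjusting to whatever your syslog lines actually say:

```shell
#!/bin/sh
# Hypothetical Munin plugin sketch: graph the percentage saving that
# Micromiser reports to syslog. The "saving NN%" pattern is assumed;
# tweak the sed expression to match your real log lines.
LOG=${LOG:-/var/log/syslog}

case $1 in
    config)
        echo 'graph_title Micromiser power saving'
        echo 'graph_vlabel %'
        echo 'graph_category system'
        echo 'saving.label saving'
        exit 0;;
esac

# Take the most recent matching line and pull out the number before '%'.
printf 'saving.value %s\n' \
    "$(sed -n 's/.*[Ss]aving[^0-9]*\([0-9][0-9]*\)%.*/\1/p' "$LOG" | tail -n 1)"
```

Drop it into /etc/munin/plugins/, make it executable, and munin-node picks it up on the next restart.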
If you've noticed that the next Ubuntu Server version (10.04, Lucid Lynx) has the Hyper-V kernel modules packaged, albeit in drivers/staging, I'd suggest not dist-upgrading even your development servers for the moment. The reason is simply that you need to devote time to ensuring that the kernel modules will continue to work with each kernel version; right now you can't seem to rely on the modules actually loading successfully from the corresponding /lib/modules/2.
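A quick sanity check after each kernel upgrade is a dry-run modprobe of the staging Hyper-V modules. The module names below are assumptions based on what lives in drivers/staging/hv; check what your kernel actually ships before relying on this:

```shell
# Dry-run (-n) modprobe: succeeds only if the module can be resolved
# for the currently running kernel, without actually loading it.
for m in hv_vmbus hv_storvsc hv_netvsc; do
    if modprobe -n "$m" 2>/dev/null; then
        echo "$m: present for $(uname -r)"
    else
        echo "$m: MISSING for $(uname -r)"
    fi
done
```

Run it from the VM after booting the new kernel; any MISSING line means your storage or network driver didn't make the jump.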
In the past I've created custom live Unix distro CDs for myself, and although it worked I found it so time-consuming and generally such a pain in the bum that it just wasn't worth it. Yesterday I had the need (that nerd need: not because I had to, but because I wanted to) to create a customised CD for the house. I'd heard good things about RemasterSys and figured now was the time to try it.
If you've ever had to run Linux under Hyper-V you'll know that it runs, although it could be better. You'll also be aware that Microsoft supply drivers via Connect, in binary form, with official support for only a few distros. So you can imagine how I felt when I saw the announcement on the LKML. Drivers for Linux guests, in the kernel. OK, so it's not in the mainline yet, but it is the start of good and great things.
I've rambled on about Karmasphere in the past, but I've not actually done anything with it since mentioning it. Sadly, today was the day when spam started getting through my crazy system. This was clearly a signal from the gods themselves to take the next step: the dreaded DNSBL. You might be surprised, but I don't like DNSBLs. In the past they've made my life hard at work, especially when we've inherited an IP that was previously used by spammers in some way, shape or form.
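For illustration, wiring a DNSBL into an MTA is usually a one-liner. The post doesn't say which MTA is in play, so assuming Postfix (and using Spamhaus's zen zone purely as a well-known example), it looks something like:

```
# Fragment of /etc/postfix/main.cf (hypothetical example).
# Order matters: permit local clients and reject open-relay attempts
# before consulting the blocklist.
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org
```

The catch is exactly the one above: a DNSBL rejects on IP reputation alone, so an inherited address with a bad history gets you blocked through no fault of your own.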
Being the good little customer that I am, I tend to try and keep an eye on what upstream are doing. To that end I've been keeping an eye on VPS.net, UK2.net's newest business venture. I'm one of the lucky 80 or so who have gotten into the beta, and I've got to say, I'm really impressed with what the guys have put together so far. Basically, what they're providing is a fault-tolerant, brilliantly easy-to-use virtual server infrastructure, based on Linux and Xen.