It's hardly a first, but I did find some of the information out there a bit spread out. So, just in case I need to go through this again, I figured a blog post might be interesting; doubly so as I've not really got anything interesting from work that I can blog about at the moment! So, a bit of background. The AppleTV (ATV) is basically a dumb x86 PC: Pentium M 1GHz, 256MB of RAM, 40 or 160GB PATA HDD, 1x USB 2, 1x IR receiver, 10/100Mb ethernet, 801.
Over the last few days I've been conversing with Current Cost, a UK-based company which produces energy monitoring devices. After I made a bit of a cock-up with the order (I eventually wanted a data cable, after some testing with the unit, in order to graph our rough power consumption, and the Trec unit does not have a serial output), they promptly handled and corrected my mistake. When my unit arrived it unfortunately came with the wrong sort of clamp; within the day a replacement was on its way, with a little extra (a USB-to-serial data cable) to say sorry.
Following up on yesterday's post on LXC: Linux Containers, I had a quick play with two ULA subnets (aka RFC 4193 addresses; don't forget that site-local was deprecated): one subnet dedicated to the LXC containers, one for my normal LAN. Perhaps unsurprisingly, IPv6 appears to work perfectly well in this setup. I also altered the setup and bridged a container directly to eth0 on the host node, watched the container assign itself a stateless address based on my prefix, and again everything appeared to work perfectly well out onto the public v6 network (courtesy of Hurricane Electric's Tunnel Broker service).
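For reference, bridging a container straight onto the host's network with LXC comes down to a few lines in the container config. A minimal sketch, assuming a host bridge named br0 with eth0 enslaved to it (both names are mine, not from the original setup):

```
# Hypothetical excerpt from /var/lib/lxc/<container>/config
# br0 is assumed to be a host bridge with eth0 attached to it
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
```

With that in place the container's veth sits directly on the LAN, so stateless autoconfiguration from the router advertisements just works; no per-container v6 config needed.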
I've been toying with migrating my server into a containerised system, and almost bought a new server in preparation to migrate everything across. I'd chosen and tested my solution, OpenVZ. All was good with the world, until I saw that OpenVZ was effectively being dropped from Ubuntu 10.04 (Lucid Lynx) and most likely Debian 6.0 (Squeeze). The reason for the drop is simply that the OpenVZ patches haven't been forward-ported to the current kernel.
Mark Baggett over at PaulDotCom put together an interesting article on running a command on every machine in your domain from the command line. I genuinely hadn't considered tying dsquery and WMI together in this way. The best thing is that, with a little tweaking, you can easily run the same command against a subset of your domain. For instance, say you had a bunch of terminal/web/SQL servers that all lived in the same OU: just dsquery against that OU and you're laughing.
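A minimal sketch of that dsquery/wmic pairing, scoped to one OU; the OU path, domain, and the command being run are all made-up examples, not anything from the original article:

```
:: For each computer object in the given OU, spawn a command on it via WMI.
:: dsquery ... -o rdn prints each name quoted; %%~i strips the quotes.
for /f "usebackq delims=" %%i in (`dsquery computer "OU=WebServers,DC=example,DC=com" -o rdn`) do (
    wmic /node:%%~i process call create "cmd /c ipconfig /flushdns"
)
```

(That's the batch-file form; at an interactive prompt use single % signs. Drop the OU and use `dsquery computer -limit 0` to hit the whole domain.)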
I've been playing with Micromiser for a few days, and wanted to graph what it claims to be saving on one of the servers. Luckily this is pretty easy with Munin (which is already running on the box), since Micromiser logs to syslog occasionally. Below is the plugin I hacked together; it looks at syslog and uses sed to extract the percentage saving. It's not pretty, but it does work. Perhaps this'll save you a few minutes.
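Something along these lines; treat it as a rough sketch of the idea rather than the exact plugin, since the syslog line format it greps for and the log path are assumptions:

```shell
#!/bin/sh
# Munin plugin sketch: graph the percentage saving Micromiser reports to syslog.
# The log path and the "... NN% ..." message format are assumptions.
LOGFILE=${LOGFILE:-/var/log/syslog}

# Munin calls the plugin with "config" to learn how to draw the graph
config() {
    echo 'graph_title Micromiser claimed saving'
    echo 'graph_vlabel %'
    echo 'graph_category power'
    echo 'saving.label claimed saving'
}

# Pull the percentage out of the newest matching syslog line with sed
fetch() {
    printf 'saving.value '
    grep -i 'miser' "$LOGFILE" 2>/dev/null | tail -n 1 \
        | sed -n 's/.*[^0-9]\([0-9][0-9]*\)%.*/\1/p'
}

case "$1" in
    config) config ;;
    *) fetch ;;
esac
```

Symlink it into /etc/munin/plugins/ as usual and check it with `munin-run`.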
I've had a few people asking me, via various channels, about my “sudden” change in status on the LFS forum. Rather than deal with it individually again, I figured a quick post might help. It's correct that I'm no longer a moderator. It wasn't the result of anything I'd done, or any animosity between myself, any of the other mods, or the LFS devs. Quite the opposite, in fact, and I wish both the moderation and development teams all the luck in the world with LFS.
Over Christmas we had to do a bunch of VMware to Hyper-V conversions at work. Once you've sufficiently prepared the VM, there are a whole bunch of ways you can do this, ranging from raw-converting the vmdk, to mounting the vmdk and a blank vhd and copying the contents across. We took it as an opportunity to play with Disk2VHD from Sysinternals. If you're using SCSI disks in your VMware VM then you will first need to ensure that you add the IDE controller driver, to hopefully avoid a BSOD when you boot under Hyper-V for the first time.
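On the IDE driver point, the commonly cited trick (a sketch of the usual STOP 0x7B advice, not necessarily exactly what we ran) is to force the in-box IDE miniport driver to load at boot inside the guest before converting, since Hyper-V presents the boot VHD as an IDE device:

```
:: Run inside the guest, before conversion. Assumed fix: set the IDE
:: miniport service to start at boot (Start=0) so Windows can mount
:: the IDE-attached boot disk under Hyper-V.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\intelide" /v Start /t REG_DWORD /d 0 /f
```

Depending on the guest's chipset drivers you may need the same for atapi and friends; check Microsoft's 0x7B guidance for the full service list.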
If you've noticed that the next Ubuntu Server version (10.04, Lucid Lynx) has the Hyper-V kernel modules packaged, albeit in drivers/staging, I'd suggest not dist-upgrade'ing even your development servers for the moment. The reason is simply that you need to devote time to ensuring that the kernel modules will continue to work with each kernel version; right now you can't seem to rely on the modules actually loading successfully from the corresponding /lib/modules/2.
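If you do upgrade anyway, it's worth a dry-run check that the staging modules are even present and loadable for the running kernel before relying on them; a quick sketch (the hv_* names are what I'd expect from the staging tree, so verify them against your kernel first):

```
# Dry-run (-n) each Hyper-V staging module; complain if it can't be resolved.
for m in hv_vmbus hv_storvsc hv_netvsc hv_blkvsc; do
    modprobe -n "$m" 2>/dev/null || echo "$m: not loadable under $(uname -r)"
done
```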