I apologise to anyone depending on it. Speaking frankly, it should not be a surprise, as it has effectively been unloved since Logstash 2.0 and I have been asking for others to come forward and assist for some time.
logstash-output-jdbc is a very small (and quickly hacked together) project that is easy to understand, and is honestly ripe for rewriting or refactoring around the new features Logstash has implemented over the years. If you feel you can take over the project please reach out: I'll unarchive the project and make you a maintainer, or I will link your fork from the project.
Sadly, I no longer use this at work and I simply cannot dedicate the personal time required to provide support and keep up with Logstash itself. Over the last few years I've worked sporadically to upgrade, fix issues and provide support. I've realised that is not fair to the users of the plugin, nor myself. So I'm calling it.
Reflecting on the project I feel I made some mistakes. Many of them are not new to open source maintainers of any scale and I feel foolish I had not stepped back and thought about it critically sooner.
This was tested under AeroFS 1.0.1 and 1.0.0 and Hyper-V 2012 R2. It may not work for your environment. This is not a supported configuration. You're on your own.
Convert the appliance disk to a VHD with:
vboxmanage clonehd aerofs-appliance-1.0.1-disk1.vmdk --format VHD aerofs-disk_1.vhd
This is the simplest way to do it without System Center.
Add single to the end of the kernel line and press ctrl+x to boot. This boots us into single user mode. Then edit /etc/default/irqbalance and disable it by setting enabled=0.
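If you'd rather wire the converted disk up to a new VM from PowerShell rather than the GUI, something like this should work (a sketch only, assuming the Hyper-V module on 2012 R2; the VM name, memory figure and path are illustrative):

New-VM -Name "AeroFS" -MemoryStartupBytes 2GB -VHDPath "C:\VMs\aerofs-disk_1.vhd"
Start-VM -Name "AeroFS"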
This was tested under AeroFS 1.1.19 and a Hyper-V 2012 R2 cluster. It may not work for your environment. This is not a supported configuration. You're on your own.
For 1.1.19 AeroFS handily provide a VHD download and say that they support Hyper-V. Unfortunately for me the networking just flat out refused to work out of the box.
To get a shell, add init=/bin/bash after $linux_append. Once booted, run ls /etc/systemd/network and you should see 1 file. Edit this and you'll probably see that under the match stanza Name= is empty. The lack of any interface name basically means that setting doesn't get applied at all. Change this to match your network interface name. If you're not sure what your interface name is, exit your editor and run /bin/ip addr. For me it was eth0.
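The relevant stanza should end up looking something like this (a sketch; the file name and everything outside [Match] are whatever AeroFS shipped, and eth0 is whatever ip addr showed you):

# /etc/systemd/network/<shipped file>.network
[Match]
Name=eth0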
This post was written specifically whilst I was finishing up an Exchange 2010 installation. However, it should work verbatim with 2007, and some of the queries may require a little alteration for 2013. If you're still on 2003, I'm sorry.
So your Exchange server has a w3wp instance with high memory and cpu.
If you're on 2010, ensure that you're on a patch level that covers the issue described in KB2800133.
First step is to find out what the instance is running. Use task manager to show the full command line of the instance. Now check the Windows Event logs. Is there anything interesting? If not move on.
Try recycling that AppPool instance. If that doesn't help long term then we need to start analysing logs.
If you're not shipping your log files to a central location with something like logstash or nxlog, then logparser will be your friend.
If you find that it's the MSExchangePowerShellAppPool, there's probably just a console open somewhere doing a lot of talking, or recently having done a lot of talking. It'll sort itself out shortly.
If it's the MSExchangeSyncAppPool then the odds are good that you have a problem device. To figure out which, make sure that IIS is logging access. If it's not, enable it and wait a day, or at least a few hours if you can't.
Now, run the IIS logs through logparser with the following query -
SELECT
TOP 500
TO_TIMESTAMP(TO_DATE(date), TO_TIME(time)) as Time,
cs-username as User,
cs(user-agent) as DeviceID,
TO_INT(EXTRACT_PREFIX(EXTRACT_SUFFIX(cs-uri-query, 0, '_RpcC'), 0, '_')) As RPCCount,
sc-status as Status,
sc-substatus as SubStatus,
sc-bytes as Bytes,
DIV(sc-bytes, 1024) AS KBytes,
time-taken,
DIV(time-taken, 1000) as Seconds,
cs-uri-query
FROM 'path\to\log\files\*.log'
WHERE
RPCCount >= 1500
AND cs-uri-query LIKE '%Cmd=Sync%'
AND cs-uri-query LIKE '%Ty:Co%'
ORDER BY Bytes DESC
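To run it, save the query to a file (rpccount.sql is just an illustrative name) and point logparser at it with the IIS W3C input format - a sketch:

LogParser.exe -i:IISW3C file:rpccount.sql -o:DATAGRID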
If you find a user frequently popping up at the top, it's likely their device that's causing the problem. Disable their ActiveSync privileges, recycle the AppPool and see how things fare. Repeat as necessary.
If you find it's a specific user, but you cannot “fix” their device, throttle their device instead, using a throttling policy, as sketched below.
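A sketch of both remedies from the Exchange 2010 Management Shell (the mailbox, policy name and concurrency value are illustrative):

# Blunt option: turn ActiveSync off for the offender entirely
Set-CASMailbox "problem.user" -ActiveSyncEnabled $false

# Gentler option: cap their concurrent ActiveSync connections instead
New-ThrottlingPolicy "EASLimited" -EASMaxConcurrency 2
Set-Mailbox "problem.user" -ThrottlingPolicy "EASLimited"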
If you find you're not getting anywhere, then start looking for unusually high numbers of requests per device and user -
SELECT
TOP 500
cs-username AS User, cs(User-Agent) AS DeviceType,
COUNT(*) as Hits
FROM 'path\to\log\files\*.log'
WHERE cs-uri-stem LIKE '%Microsoft-Server-ActiveSync%'
GROUP BY User, DeviceType
ORDER BY Hits DESC
If it's the MSExchangeOWAAppPool then you may have someone attempting to log into an account. It should be locking out if they've found a real account.
SELECT
TOP 500
c-ip AS IP, cs(User-Agent) AS DeviceType,
COUNT(*) as Hits
FROM 'path\to\log\files\*.log'
WHERE cs-uri-stem LIKE '%/OWA%'
GROUP BY IP, DeviceType
ORDER BY Hits DESC
If you're still not getting anywhere, revisit the Windows Event Logs. Check that there's nothing showing up in there that's relevant. If there really isn't anything then start cutting down the problem.
Try to isolate your Exchange's CAS from the internet temporarily. Does it quieten down? If not isolate them/it from the LAN. Does it quieten down? Start looking at the logs in different ways.
In this particular scenario Public Folders were accessible internally, but not via Outlook Anywhere (or Outlook RPC over HTTPS if you're old).
This problem can manifest where the email address policy that applies to the Public Folder mailbox does not assign an email address that autodiscover can use, e.g. publicfolder1@ad.contoso.com
The fix is to set the default address on the public folder to one where Autodiscover will work correctly.
The easiest way to do this is either to manually set the primary SMTP address in the Active Directory attributes (email, and proxyaddresses), or alter your address list policies accordingly.
Now just wait for Outlook to pull the latest configuration. Or delete and recreate the profile if you're in a bit of a hurry.
More details are available under KB2788136.
# Example configuration to ensure that the Hyper-V feature is installed on hv01-hv03
# Basically a block that can be used to generate a file
Configuration HyperVNodeCfg
{
Node ("hv01", "hv02", "hv03")
{
WindowsFeature HyperVFeature
{
Ensure = "Present"
Name = "Hyper-V"
}
}
}
# Compile to MOF
HyperVNodeCfg -OutputPath HyperVNodeCfg
# Apply the MOF
Start-DscConfiguration -Path HyperVNodeCfg -Wait -Verbose
It ships with Windows 8.1 and Server 2012 R2 and allows you to:
If you've ever used Puppet the syntax will look familiar; however, as far as I'm aware there's currently no ability (other than via custom modules) to include configurations from other files, which could result in large configuration blocks.
Additionally, and importantly, DSC configuration blocks need to be compiled to a “MOF” file before they can be applied. Having looked at a MOF file, whilst readable, I'd suggest that the configurations are stored under version control, not the MOF files.
Whilst I'm sure there is a good reason for this, it adds an extra step between version control and deployment that may result in less experienced admins not bothering with version control, and frequently rewriting their DSC configurations. I would have preferred to see a syntax checker and, if compiling really is necessary, to have it compiled silently before being applied, just to skip this step.
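Something like this small wrapper (a sketch reusing the configuration block above; the temp path is my own choice) is all that silent step would need to be:

# Compile quietly, then apply in one go
HyperVNodeCfg -OutputPath "$env:TEMP\HyperVNodeCfg" | Out-Null
Start-DscConfiguration -Path "$env:TEMP\HyperVNodeCfg" -Wait -Verbose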
System Center covers a multitude of products, most of which won't be impacted by DSC. As for the rest I'm not clairvoyant, however I can only imagine some components of System Center realigning and simply getting better, not going away.
DSC simply doesn't fulfill all of the roles that Group Policy does, so I'd expect GPOs are here to stay.
Both of these tools have started making inroads into Windows CM and it would be a shame if they stopped. It appears that at least Opscode are adding compatibility to Chef, and I hope that Puppetlabs follow this trend as well. As much as I like the adage of the Best Tool For The Job(TM), sometimes One Tool To Rule Them All(TM) needs to win for political or training reasons.
I'm interested to see how DSC can be leveraged in an attack. Other than the obvious improperly secured MOF files scenario, I'm yet to look into push or pull mode and if/how DSCs are signed and transported, etc., and how they can be manipulated or how data can be retrieved. For example, can a similar/the same method that can be used to retrieve credentials from a GPO preference also be used to retrieve them from DSC?
Without looking I don't know, but it's the start of a series of questions you should look at before deploying.
As always, evaluate what you already use and see if PowerShell DSC is what you're looking for. The PowerShell 4 requirement may be a problem for many.
However, if the question is “should I be using some form of configuration management”, the answer is yes. It's not only enterprises, MSPs, or cloud scale start ups that can benefit from configuration management. The thing that you need is for all of your operations staff to have buy-in. The more little bits get changed manually, without being added to your configuration, the less useful your chosen CM system will be.
Under Ubuntu 13.10 (and Debian unstable as of the posting date) the digital device is missing out of the box.
The fix is to edit /usr/share/alsa/cards/USB-Audio.conf, find the "Plantronics GameCom 780" 999 entry under the USB-Audio.pcm.iec958_device stanza (line 46 on my Debian unstable laptop and Ubuntu 13.10 desktop), and comment it out. This entry tells alsa that this device does not have digital in/out, which in my experience is wrong and renders the device unusable.
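The edit ends up looking roughly like this (an approximate excerpt; only the commented line matters):

# /usr/share/alsa/cards/USB-Audio.conf (excerpt, approximate)
USB-Audio.pcm.iec958_device {
	# This line claimed the GameCom has no digital in/out; comment it out:
	# "Plantronics GameCom 780" 999
}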
See launchpad bug 1241449 for tracking this.
Normal sysadmin related bloggery will return shortly.
Whilst a Linux/BSD box is an option, I wanted something that I couldn't fiddle with too much. I'm ultimately providing a service to my housemates, so it should Just Work(TM).
This meant I was immediately looking at higher end Drayteks, lower end Juniper and Cisco boxes, etc. I ultimately chose the Cisco rv220w because:
I was half expecting to be disappointed - some of the Cisco SOHO devices I've used have frankly been shit. This little device is awesome. It's slow to boot, but you can forgive it that - after all, how often do you reboot your routers? It comes with an awesome web GUI, so should I need to talk someone non-technical through something remotely, I can reasonably easily. It's stable, and it's damn quick. The only issue I've had is that the Xbox is being a little fussy over it - but it's not something that's bothering me enough to look into yet.
I'm yet to use the built-in IPv6 functionality. I'm still using my Hurricane Electric tunnel on the work VLAN.
My only issues are:
For a small business, small satellite office, or a home that needs a cheap but slightly more advanced appliance than a standard home router, I would highly recommend the Cisco rv220w.
I've lived with 3 other guys, who I've known for a long time now, for several years. In that time frame we're yet to have a major disagreement over anything. I know they say ‘if you can't spot the crazy in the room, then you're the crazy’, but I'm pretty sure it's all good.
This Christmas I decided to surprise everyone with personalised gifts. Normally we do a house gift, but we honestly didn't need anything. So I did what any sane mid-twenties nerd would do: buy Nerf guns. The only hint I gave them when a large box arrived was that it was to aid conflict resolution. Confused them no end.
Stage #1 complete. Stage #2 is personalisation. I did want to paint them, but honestly I'm not really capable of doing that well enough, so I started looking at other options. Since we're all gamers, mostly, and we'd all played Borderlands 2 recently, what better than to make up and print personalised stats cards - like you'd get in-game? I spent an afternoon in Inkscape and came up with some in-jokes to put on them.
Now, I'm no artist, but I think they came out pretty well. As I mentioned they're full of in-jokes, however if you'd like to reproduce these yourself the template is available here. You'll need the following 2 fonts as well: “Compacta Bd Bt” and “Carnevalee Freakshow”.
I got them professionally printed on A5 280gsm silk paper and they look awesome. So if you're stuck for a present idea, for your gamer kid, or adult who wants to be a kid, why not a Nerf gun, a stats card and some in-jokes?
If you do get yours professionally printed I highly recommend exporting from Inkscape as a PDF, and converting the text to paths. This way your printer doesn't need to faff with fonts and you can ensure it comes out exactly as you want them.
Virtual Machine replication across sites is very attractive. You get a lot of flexibility with minimal effort. You don't need to learn about making individual services you run on a given virtual machine highly available. However users don't care about a server being highly available across multiple sites. They care about their email, their documents, the company database(s), etc. They care about the services.
With a single machine it's entirely possible for it to get pwned, suffer an accidental misconfiguration, or any other number of things that cause a service to become unavailable. As has always been the case, multiple servers providing a service help negate some of these issues.
None of this should be new information, but with the allure of “new” toys (in Microsoft's “free” virtualisation tech) it should not be forgotten. Having spoken to several clients, co-workers and peers in the Windows world I fear it's a lesson that some admins may be forgetting.
TL;DR As with any technology, use virtual machine replication wisely and, most importantly, use it appropriately. Don't forget about service/application level replication.
In my mind System Administration falls into 2 general families: Enterprise and Jack of All Trades. To me enterprise administrators manage large systems, and are generally focused on their specific trade - be it systems or network. I cover everything else as Jack of All Trades. This sort of role is either in a smaller outsourced IT house, or in a smaller business with fewer than 5 IT employees. Historically I'm a Jack of All Trades, and it's this line of work that I believe will come under threat over the next 10 years.
From what I've seen, in the UK, many people are becoming more IT literate. At the same time a lot of small business software is becoming more plug n’ play. Microsoft's SBS 2011 Essentials offering is such an example - you don't need to assign it static addresses, muck about with DHCP or really know what an Active Directory is. For a direct service it's plug n’ play and completely wizard-ified. Mail is integrated with Office 365, Microsoft's cloud based email offering.
And that's not a bad thing. After all, IT is there to help you work. That's the point of it.
You can take it a stage further. For the market that SBS 2011 Essentials is aimed at, you can argue “who needs a server at all?”. Is it acceptable to just drop in a NAS on the end of a home router? For many small businesses the answer is that it'll do.
The thing is that almost anyone can do this.
All companies grow and shrink according to their own ecosystem, and sometimes at completely unpredictable rates. For a business being looked after by the guy/girl that knows computers, the time may come when it turns out there are 16 NAS boxes spread around a couple of offices, at least a few copies of each document, no real backups (because RAID is good enough, right?!) and people putting up with whatever has been set up.
Until the next person comes along who knows a little bit about IT and starts setting up cloud storage for one particular office. The people in that office love their local expert, he/she even managed to get the company to pay for their cloud storage. The issue is though that the company didn't actually pay much attention to this new outgoing or what it meant.
Given enough time, the sprawl will become unmanageable. The company just won't see the danger of what may be occurring - they may be risking legal issues, increased likelihood of data theft and so forth.
During all this time occasionally a professional geek may be called in to help fix the odd computer, or laptop, but they'll never see the whole picture.
One, I don't see, or much more likely I don't want to see, where this ends. I've been right about most predictions I've made at work over the last 5 years. I fear that if you want to be a System Administrator then you need to work for a larger enterprise or a cloud services provider. The problem with cloud services is that they're done on such a massive scale, with commodity hardware that the number of System Administrators required is much, much lower than it would've been previously.
Assuming everything eventually goes to the cloud, this presents 2 problems in and of itself. One is that work will dry up for those in the industry who don't already have experience with larger scale, automated systems. The second is that the number of people who go into System Administration will likely fall.
Two, I worry about the future of many small companies who start off with perfectly reasonable solutions that don't scale with the company to suit their real needs. The issue here is sadly the people who aren't professionally tasked with, or don't have the necessary experience of, looking after computer systems.
So how should a small outsourced IT support shop deal with this issue? In my mind there are two ways. One is to shift into providing a cloud service. Realistically that's unlikely - it's a completely different skill set. The second option is to continue doing what you're doing. You should already be building a relationship with your customers to help steer their IT to match their business needs. You should also forge as many relationships with the people who will eventually become these ad-hoc IT workers. Ultimately you should also focus on becoming a broker of cloud services on the side - you won't reach everyone, and you can use this to create new relationships.
Incidentally you should really search Google images for “consumerisation of IT”. It's hilarious.
TL;DR: The Meebox is cheap because it's cheap. It runs an old Linux kernel; given time you could probably get your own/other distros running, but I called it a day due to the (lack of) hardware performance, the unlikelihood of increasing its performance with custom “firmware”, and the fact that the vendor site wasn't available to dismantle a firmware update file at the time.
The Meebox is basically a fanless SOC design, in an attractive looking case, running Linux 2.6.15, with support for a maximum of 2 x 3.5” SATA hard disks (by default mine came with a single WD 500GB drive), a gigabit network port and 2 USB ports. SSH is easily enabled and the default admin user account has root privileges, and you get into a busybox shell.
Physically it's one of the better cases for something like this - it's all metal and mine came in a gloss black. It wouldn't look out of place underneath a TV or on top of a hifi stack.
Unfortunately I'd pretty quickly determined that this wasn't quite what I was hoping for. I had really hoped that this was going to be capable of gigabit speeds, but it's just not. My next thought was that it could be made more useful by being a generic Debian box, rather than this concoction.
I set about with the shell and had a good poke about. The good news is that it's pretty open once you're logged in. /proc/mtd reveals some useful stuff, along with everything else. I've popped the output of /proc/cpuinfo and /proc/mtd below.
cat /proc/cpuinfo
Processor : FA526id(wb) rev 1 (v4l)
BogoMIPS : 230.19
-- snip --
Hardware : GeminiA
Revision : 0000
Serial : 0000000000000000
cat /proc/mtd
dev: size erasesize name
mtd0: 00120000 00020000 "RedBoot"
mtd1: 00200000 00020000 "Kernel"
mtd2: 00600000 00020000 "Ramdisk"
mtd3: 00600000 00020000 "Application"
mtd4: 00020000 00020000 "VCTL"
mtd5: 000a0000 00020000 "CurConf"
mtd6: 00020000 00020000 "FIS directory"
Poking at RedBoot (the bootloader) didn't yield much information. As far as I can tell it's not configured to listen on the network at boot. If you dump out /dev/mtd5 you can see references to 192.168.10.1/24, but probing that gets you nowhere. A port scan whilst it's booting and running yields nothing relating to RedBoot either, and sadly I did verify my crossover cable works.
The mtd utilities also do not appear to be shipped with the device, but that's a minor inconvenience.
The board does seem to have what is probably a JTAG header (which could be used with Redboot to replace kernel, etc.), but that's moving out of my realm of knowledge. With the vendor site inaccessible I couldn't get ahold of a copy of the latest “firmware” to dismantle, or to request any part of their build chain, and with that I decided to call it a day.
As I've got an existing solution that performs just as well as the Meebox ever could I decided to stop. The effort I was about to expend on attempting to get a more generic distro running on the Meebox was just too hard to swallow.
I'm pretty sure it's based on a common design that's shared between other low-cost NAS solutions; however, it currently doesn't seem like anyone has gone much further with it than me, and honestly I can see why. There are other solutions out there that are a little more expensive, just as quiet (fanless) and better supported by the vendor.
If you do get one of these to play with I would urge caution if you want to return or sell it - the factory reset option in the standard web gui does not wipe out any data.
Turns out that the standard AHCI driver supports it just fine; it just doesn't know the vendor and product PCI identifiers. The simple fix is to teach the kernel about them post-boot:
/bin/echo 1b4b 9192 > /sys/bus/pci/drivers/ahci/new_id
This basically tells the AHCI driver to load itself for the vendor (1b4b) and product (9192) ID. I've cheated and bunged this into /etc/rc.local because I'm lazy, but equally you could let module-init-tools take care of it.
I'm yet to see any issues thus far, but as always if you do this make sure you backup your data first and regularly, I'm not responsible for any data loss, yadda, yadda, yadda. As for RAID mode, it's just standard fakeraid/dmraid and again I'm yet to see any issues with the standard dmraid, kpartx and mount tools in detecting and mounting existing partitions.
I've got a bug logged on the kernel bugzilla about getting it into mainline, so this hopefully shouldn't be necessary for all time.
Due to this philosophy Microsoft have produced some pretty nice tools for developers and designers. I'm told (admittedly by a more Microsoft orientated HCI designer) that XAML is the dog's bollocks. I don't really mix with many other UI or HCI designers, so that may be very biased, and it's out of the scope of things I'm interested in. However, generally speaking, I believe that Microsoft are widely considered to be behind when it comes to producing what is considered cool and “bleeding edge” - not just in the circles that I roam.
Combine the developer orientated philosophy with what are most likely newer, and younger, employees and I believe you get the Microsoft that the world is seeing today. A Microsoft who is contributing patches to projects that have garnered a reasonable amount of developer attention in the last year - NodeJs and Redis are two that spring to mind immediately - as well as long established projects, including the Linux kernel (I was going to ramble on that one, but I'll save it for another time).
Paul Querna brought up a related anecdote whilst talking to Venture Beat a few weeks ago; he got into developing after installing Perl on Windows. Ultimately many people hacking together code must be starting from this point because of the sheer pervasiveness of Windows. The loss of these developers as they progress is the issue at hand for the Microsoft of today and the future. Microsoft needs to retain developers on their platforms.
With the world moving into a more cloud-y (bleurgh, I mean SaaS/IaaS/PaaS - delete as appropriate) environment, retaining control over the developer will become increasingly important to Microsoft. After all, if you're developing on Windows, using the latest and coolest tech, deploying on Windows is a logical, and safe, step for most professional developers. And if the developer wants to run cloud based Windows, where else would you go, other than with the developers of Windows itself? After all, who better understands and importantly has a hot line to the developers and source?
The reason for this, painfully obvious post? Apparently it's not painfully obvious to some.
Is this something to be worried about? Not yet, and here's to hoping that it will never be.
The future of my profession is one I'm very paranoid about. It's one I love, despite all my moanings, and as the world starts its seemingly inexorable move back to the mainframe^Wcloud, I fear that as time progresses we'll be in a world where there are few of us outside of large cloud companies.
How much longer will on-premises server hardware still be required? A lot of businesses can probably get by on the current generation of Google docs and a few apps from their marketplace. This leaves desktop administration and some simple networking in-house, and not much else really. Experience tells me that's either handled by an outside company, such as the one I work for, or the guy/girl who “knows” stuff about computers.
Let's face it, the world isn't going to lose all on-premises equipment overnight, but as technology from Apple and Google leads the way at home, users are starting to expect (and rightly so) a similar experience at work. It's because of this disparity that users get disenfranchised with their workplace infrastructure.
However this is not who I'm aiming this post at. Or that's what I'm telling myself so that I don't feel like I've wasted a few hours of my life over the course of 2 days. This post is aimed at those who do not have a large amount of experience with IPv6, be it system or network administrators, or just general passers by. Or people on facebook, where this will end up being syndicated.
If you want to skip this ridiculously long post, and check out the extremely quickly hacked together proof of concept code, head to github.com/theangryangel/deprecate-ipv6-slaac.
I'll start off with a quick bit on IPv6 address configuration. In comparison to IPv4, there are 2 ways to dynamically configure an interface with an address. Stateless Address AutoConfiguration (commonly called SLAAC), and DHCPv6. Both SLAAC and DHCPv6 require a router to be sending out multicast packets (called Router Advertisements, or RAs), stating what prefix(es) the router is capable of routing, what length these prefixes are, and a bunch of other details. For the purposes of the rest of this post I'm going to focus on SLAAC.
SLAAC addresses have 2 lifetimes attached to them; “preferred” and “valid”. Once an address has a preferred lifetime of 0, only pre-existing network traffic may use this address. Any new traffic MUST use a different one. The address is then marked as deprecated until the valid lifetime reaches 0, at which point the device must remove the address.
Now the clever guys and girls behind the relevant RFCs designed it so that any changed options between RAs for a prefix being used by a device, such as lifetime, must be honored when different from those previously collected. The only special case is that the valid lifetime cannot be set below 2 hours.
If you're following you should see that there's a very simple way of changing the values for many of the settings.
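As an illustration, forging such an RA takes only a few lines of Scapy (a minimal sketch, not the proof of concept linked below; the prefix and interface are made up, and you'll need root):

#!/usr/bin/env python
# Deprecate a prefix: preferred lifetime to 0, valid lifetime to the
# 2 hour floor that hosts will still honour.
from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo, send

ra = (IPv6(dst="ff02::1")            # all-nodes multicast
      / ICMPv6ND_RA()
      / ICMPv6NDOptPrefixInfo(
            prefix="2001:db8:1::",
            prefixlen=64,
            preferredlifetime=0,     # no new traffic may use the prefix
            validlifetime=7200))     # 2 hours: the minimum hosts accept

send(ra, iface="eth0")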
This isn't the biggest issue in my opinion - mostly because RAs are here to stay. That's been made abundantly clear in the last few years. There is a potential fix with SEcure Neighbor Discovery (SEND) (see RFC3971), but that in itself has other issues.
No, the big issue as I see it right now is down to how the current tools we use to manage IPs display this information on desktop, server and embedded operating systems. All of the more commonly used tools and places that most sysadmins interrogate state nothing about an IP being deprecated. Until an address has reached a valid lifetime of 0, the address is still displayed. It is not until you get down into lesser used components of some network tools that you can see the state of these addresses. All the major OS' that I played with have this problem. Windows ipconfig and the GUIs say nothing about addresses being deprecated, and ifconfig on all the unix-likes that I've played with also has the issue. However, if you get down into netsh or ip, for example, then you will see the state. Even then it's not displayed under the commands that you would generally use.
This means that for all intents and purposes, with the right administrative team who doesn't know IPv6 well enough, you can knock out parts of an IPv6 network, causing it to fall back to other prefixes, or IPv4. If you've managed to MITM, or otherwise exploit IPv4, then you're laughing.
Given the sheer number of administrators who are not knowledgeable enough about IPv6, and the number of networks that are driving to be dual stacked, it has to make you wonder - what's the opportunity here? For the naughty people, obviously…
What about RA guard (RA equivalent of DHCP guard)? Fragment the packets and you've basically negated RA guard. I've only had the opportunity to test this on a few implementations, but it looks like THC are way ahead of me on this one. Other than that, SEND (which has no support anywhere effectively), and MAC filtering on switch ports I can't immediately see any other methods of mitigation.
If you're interested I've published a simple proof of concept tool, using Scapy, on github.com/theangryangel/deprecate-ipv6-slaac. It has 2 modes of operation: manual and fully automatic. Manual, where you must provide all the details of your target network's router - MAC address, link local IPv6 address, prefix and so on. Automatic, where it will listen for the first RA, steal all the information from that packet, and then send RAs out on your behalf.
The proof of concept is very noddy and will fall over. It's written and tested under Linux only.
Next on my playlist is DHCPv6 and DHCPv6-PD. I'm thinking there's some fun to be had with the RECONFIGURE part of the protocol.
As someone who is running Debian and Ubuntu under Hyper-V I would heartily welcome this official support.
Sadly I suspect that if Gupta really does represent Microsoft's view, then the odds of getting on the official list are probably going to be quite low: "Gupta says Microsoft is drawing the line at ‘touching’ the Linux code. It won't provide patches."
We began the project by powering up some virtual machines and test importing the configuration from ISA 2006 to Forefront TMG 2010, and all appeared fine. The ruleset was there, the VPN configurations were there, and so on. Test data seemed to pass through nicely.
The migration went through and we put the box live, decommissioning the old ISA 2006 hardware. Everything seemed fine until larger quantities of traffic started passing through the box. The logging was showing a lot of packets getting dropped on the floor, but with no source, destination or protocol; active FTP and SIP traffic was also problematic, and the box would randomly decide to stop passing everything, as if the service had stopped. The irritating thing was that it simply wasn't consistent.
After poking into the configuration we started noticing that a lot of problems were evident:
- The domain controllers computer set had entries that were flat out wrong and not present in the ISA configuration
- The Web Proxy Auto-Discovery Protocol (WPAD) file was wrong
- DNS was starting to go down VPN tunnels, but there were no DNS addresses configured on the interfaces
- And a whole host of other niggly issues
After fixing these the box was still randomly dropping things, and as the data flow increased (and not to extreme levels - we're talking a 10Mbit/s leased line here) so did the drop outs. At this point it was becoming more than an irritation and more of a service affecting problem. I elected to rebuild it with non-R2 Windows Server 2008, and to manually create the configuration from documentation. Although I would've loved to have got to the bottom of the problem, rolling back would've been as much of a pain at this point, and the customer was rightly beginning to get fidgety.
So why non-R2 Windows Server 2008? A couple of reasons: all our other deployments of TMG 2010 are on non-R2 and are stable, we noticed our original test box for this project was non-R2, and there are also rumblings of other people having issues with R2 on a couple of technet threads. Although I'm not 100% convinced that R2 is to blame here, frankly we didn't need R2, and I only wanted to do this once as the whole job needed to be done out of working hours.
Since the OS rebuild and manual build of the configuration, touch wood, it seems to be a lot more stable. No more weird packets getting logged, no more weird FTP or SIP problems, no more random drop outs.
My thoughts on TMG 2010 aren't favourable at this point, but it's not just because of the problems. Ostensibly it feels like ISA 2006 with a few interesting bits bolted on, but unless you require ISA or TMG in your environment, I wouldn't recommend it. There's still no real IPv6 support, without SP1 it feels very wobbly, and for a few features that you might not need it's an expensive upgrade.
Realistically you can pull off the same feature set with a different combination of products; a “real” firewall, and an internal proxy server, for example. This isn't to say that you shouldn't put TMG 2010 in anywhere. It does have some very useful features, but just look at your options carefully. Perhaps you don't need to upgrade. Perhaps you may find a better fit solution.
If you want the final word on the quality of the training head straight to the final paragraph; otherwise strap in: this is a long post.
I'll have to be honest, I had never heard of TrainSignal until that point, and I was wondering if it was a bit of a scam. However, several days later a set of DVDs arrived via UPS: 3 DVDs in a standard DVD case, and a little shipping note. Having not actually ordered them myself I don't know if I should've got a little “this is your training” letter, or if that's just it. I would say that a little note would have been nice, especially pointing out the interesting bits about DVD 3. To me this DVD would be the one that would most interest a lot of the busier, and perhaps younger, generation. It has pre-converted versions of all the training videos for iPods/iPhones, and it also has audio-only versions. The README on DVD 1 and 2 didn't mention this at all, and it would've been nice.
In terms of the actual content of DVD 1 and 2, you get a DVD with a bunch of folders, one of which is a codec directory, a bunch of lesson directories, a notes directory that has a nice set of PDFs you can print to take notes on (very useful for a class environment) and another about the lab setup, along with the obligatory Windows autorun, and a small README. There are a few other files and folders, but you probably won't care too much about them.
The README itself says that the DVDs require Windows and Internet Explorer; however, you can just dive into the directories and open up the AVI files using your favourite video player. In my case I watched some of the videos on my desktop, under Windows using IE, and then I switched to using VLC under OS X and later Ubuntu. If you're a “power user” understanding this won't be an issue for you; however, on the off chance that a less experienced user receives these and has a non-Windows desktop, it would've been nice to say as much.
The actual content of the training videos is very professional, as you should expect. You have a voice over from J. Peter Bruzzese, and a video that has slides and screen capture, which is all clearly explained. The video starts off with an introduction and an explanation as to what you should expect from the series, and even better tells you that if you've got experience with Exchange 2007 you can just jump about a bit. I thought that was a nice touch. It could be argued that it's a bit redundant, however it's a nice nod to those who know the previous version well.
The videos will take you through the configuration, show how to build a similar lab setup, and outline a real world scenario. It's this scenario that the rest of the videos are based around. To me that's a very important thing to have done. A number of the other training videos I've had to sit through have been very abstract and forced. It prevents you from really connecting with the content, and you don't always learn.
Having set up 2 production Exchange 2010 organisations in the last year, one of which is using what many will consider an “advanced feature” (Database Availability Groups), and another that was running from the Beta, I found the pace to be very slow and I actually watched all of the videos at an accelerated rate. By the end I had managed to ramp up to 2.2x speed, only dipping slower to listen in on the bits that I've not yet used or was concerned may've been lacking. I'm not suggesting that you do this, but if you do know Exchange 2010 I'd suggest that you select the videos you want to watch carefully.
However, I have no doubt that it's extremely accessible if you're completely new to Exchange, or if you're coming to it from Exchange 2003 or prior.
The videos end with an outline of the Exchange 2010 certification exam. My concern is that some may rely on that a little too much. It would've been nice to hear a statement outlining that you should really check to see if there have been any amendments, and so forth.
The only other concerns that I've got are that it is pre-service pack 1, it brings up remote file servers (which I thought had been dropped from Exchange 2010, despite being left in the GUI), and I found the video on Database Availability Group to be a little lacking.
Now, in Peter's defence I've only recently set up DAG, and it is very much a feature that you should research before deploying. But it would've been nice to see a mention of running multiple networks, and more DAG customisation. In contrast, the other “advanced” section, on Unified Messaging, was detailed enough to bring you up to date on what you need to know, common issues, and what you may need from your phone guys.
Ultimately J. Peter Bruzzese is a knowledgeable, well spoken instructor. The training is good quality, although you certainly want to ensure that you know what you're buying. If you have been working with Exchange 2010 in production for some time, and have been playing with it since beta, you may want to look elsewhere. This is definitely training for those with little to no experience of Exchange since 2003, or prior, or none at all. However, if you have other staff who have little experience with Exchange 2010 then I heartily recommend TrainSignal's Exchange 2010 training. You won't be disappointed.
The first question I can understand. Sometimes you do need to see someone else, physically there in front of you, but to be frank, I've never been a great social animal, which probably helps massively. Second I speak with the other guys that I work with quite a lot during the day (unless I've got my head stuck in a particularly complex project or issue). We use Skype, our VoIP phone system and an internal IRC server to communicate. We joke, we talk movies, share stupid links occasionally, everything you'd get in a normal office - just with a bit of geographical distance. Often people find this quite hard to grasp when I explain this.
To a certain extent I can understand the view point of the second question; it can be hard sometimes. However (and this is the big secret to working at home) if you love your job, to the point that you'd probably be doing the same sort of things if you were unemployed, then working from home shouldn't be any harder for you than working in an office. If you're not in the same boat as me - which is loving your job - then you're right, getting motivated would be frakking hard work.
However, there is a bit of a downside with working from home, and that is simply balancing work and home life. I won't pretend that I have the answers to this one, because honestly I don't. I'm very bad at separating what I'm told should be 2 different ways of life. However, part of the problem is that I do have to do a reasonable amount of stuff outside of the normal 9-5 hours. Sometimes it's hard to perform maintenance on systems when the customers you're working for don't always have the financial, or other, capacity to build highly available systems.
So why the post? Partially I felt that I didn't really have anything interesting to write about from work. There's some stuff about the “new” IBM IMM (Integrated Management Module) that I've only just had the opportunity to play with, since we've not put any new servers in for some time. At the end of the day the standard IMM is nice, but you really need the Virtual Media Key to make the most of it (which provides remote presence and remote media features) - for about £200 it's totally worth it, and necessary if you've used other fully featured remote management/lights out cards in the past.
The other reason is that I was reading an old copy of .NET magazine that I've half-inched from Chris, where the 37Signals partner David Heinemeier Hansson has a one page (once you remove the images) article about the work ethic at 37Signals. One of his points is that he believes workaholics should be fired, and he explains why. Great article, and interesting to see how creative companies work. I just struggle to see many people in my line of work, and similar ones, that aren't workaholics, simply because they really love what they do. But does that make us workaholics?
To combat it as much as possible the EVE team have used some cool and interesting tech over the years, but earlier today they announced what might be the biggest change in their hardware (that I've noticed). Being a massive nerd it's an interesting read, especially with further promises of more information. Whilst CCP are generally quite open it's great to see them continuing - which for a MMO is mightily unusual.
So, a bit of background. The AppleTV (ATV) is basically a dumb x86 PC - Pentium M 1GHz, 256MB of RAM, 40 or 160GB PATA HDD, 1x USB 2, 1x IR receiver, 10/100Mb ethernet, 802.11n Broadcom WiFi and a Nvidia Geforce GO 7300 - all of which uses about 17W of power in a fairly compact and quiet form factor. What I hadn't banked on was the very dumb power supply. I knew that the ATV wasn't able to power off, but I assumed that was a software thing. Oh no. The actual power supply has no concept of switching. It's either on or off. Which is slightly annoying.
The whole “hacking” process is largely taken care of - a bunch of enterprising individuals have got it running over the course of several iterations. The most recent is the effort from the ATV-Bootloader team, who have basically built a small recovery image, which translates some of the EFI structures into BIOS compatible ones (allowing unpatched kernels to run) and has a very cut down Linux installation which then chainloads (using kexec) another Linux kernel. The really awesome thing is that these guys have made some nice tools to streamline the effort if you want the ATV OS and Linux to co-habit. I didn't particularly want this.
So first things first: the old hard disk was removed and a new one was prepared under my desktop install of Linux, via a USB to IDE converter, using the instructions on the ATV-Bootloader project wiki. Pay close attention to the requirements for a patched parted.
Next I used debootstrap to install a basic Debian (squeeze) system into /dev/sda4 (ATV-Bootloader sees all drives as /dev/sd*, whereas when you boot into a kexec'ed kernel they will be seen as /dev/hd* - this confused me for a few minutes - not something you want when you're scrabbling around at stupid o'clock in the morning). At this point I then chroot'ed into it and used apt to install a kernel, but no bootloader.

Since the ATV-Bootloader uses kexec, all you need to do is have a valid one of the following: mb_boot_tv.conf, menu.lst (grub), syslinux.cfg (ISOLinux), or kboot.conf. Having played with grub files quite a lot I thought that I knew the syntax well, but do you think I could get it to work with a grub file? No. I ended up wimping out and using a mb_boot_tv.conf (popped into the root of the Linux partition), which is a lot simpler[1] and is in fact the first file searched for (so it's slightly faster to boot). If you don't fancy that then check out boot_linux.sh from the ATV-Bootloader trunk to see all the options and example configs (in-line comments).

The only other things you need to remember are the usual when debootstrapping - create a valid /etc/network/interfaces (man 5 interfaces, if you're unsure), make sure udev numbers your network cards correctly (/etc/udev/rules.d/70-persistent-net.rules), and of course set your root password. Exit, reboot and hey presto, you should be into a very basic Debian install.
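For reference, the rough shape of those steps (a sketch only; paths and the kernel package name are illustrative, run from the desktop machine with the ATV disk on the USB to IDE converter):

mount /dev/sda4 /mnt
debootstrap squeeze /mnt
chroot /mnt /bin/bash             # then, inside the chroot:
apt-get install linux-image-686   # a kernel, but no bootloader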
My next annoyance was the flashing LED. By default it flashes orange to tell you that it's booting, and then the ATV OS would reset it to a white light. Thankfully a great chap by the name of Peter Korsgaard has written a tool (available from git and here - note it needs to be compiled) called atvtool to control the LED and the fan. It's a little basic and doesn't play well with lircd at the moment (you can set atvtool to release the controller back, at which point lircd needs to be restarted), although I'm hoping to have a poke, understand why, and hopefully fix this.
WiFi is also fairly important to me since I'm going to use the ATV to replace my WRT54G bridging 2 networks here. Sadly the only real options seem to be using ndiswrapper or the Broadcom-STA drivers. I opted for the Broadcom-STA and things are going well, with no issues at present - the only special thing I did, for my own brain, was to rename the adapter to wlan0 (again, udev persistent-net.rules).
From here on out, if you're running headless, everything should be working like a dream. At this point I elected to install the nvidia kernel module only to see if I could get anything useful from lm-sensors, but there wasn't much luck on that front. If you're planning on using the ATV as a media center or with a monitor/TV, then you'll definitely need it.
What does this leave you with? A low power, almost silent, fairly capable machine to run part of your network. The only sad thing, in my eyes, is the limitation of a single USB port. From here you could run forked-daapd to share your music, any of the several network file systems, DHCP, DNS, you name it. Just watch the memory usage - don't forget that there's only 256MB of RAM to play with.
[1] Vaguely what my mb_boot_tv.conf looks like - note the /dev/hda4 instead of /dev/sda4.
#try-net-boot
kernel /vmlinuz
append ro root=/dev/hda4
initrd /initrd.gz
After I made a bit of a cock up with the order (I eventually wanted a data cable, after some testing with the unit, in order to graph our rough power consumption, and the Trec unit does not have a serial output), they promptly handled and corrected my mistake.
When my unit arrived there was unfortunately the wrong sort of clamp; within the day a replacement was on its way with a little extra (a USB-to-Serial data cable) to say sorry. The next day the clamp arrived and my Current Cost Envi was monitoring our apparent power consumption.
During the whole process the Current Cost staff were courteous and basically pretty damn awesome, and I'd highly recommend doing business with them to anyone.
I've not decided how I want to grab output from the Envi yet: perhaps an Arduino, perhaps a slightly modified WRT54G. However, given that there's serial out, the fact that the units use Zigbee, and that there's a fair bit of documentation out there, it's going to get done one way or another.
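As a starting point, here's a sketch of reading the Envi over the serial cable with pyserial (the port name, and the 57600 baud rate commonly documented for Current Cost units, are assumptions on my part):

import serial

# The Envi emits one XML <msg>...</msg> blob per reading; just print them.
port = serial.Serial("/dev/ttyUSB0", 57600, timeout=10)
while True:
    line = port.readline().strip()
    if line:
        print(line)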
I also altered the setup and bridged a container directly to eth0 on the host node, and watched the container assign itself a stateless address based on my prefix, and again everything appeared to work perfectly well out onto the public v6 network (courtesy of Hurricane Electric's Tunnel Broker service).
So all in all I'd say that LXC is looking pretty good so far. There are a few other things I'd like to test, like how effective iptables are in the context of containers, and whether or not it is secure enough. Unfortunately I'm not going to have time to play with these things this weekend really. Answers on a postcard to the usual address if you already know though!
The reason for the drop is simply that the OpenVz patches haven't been forward ported to the current kernel. Now that's obviously a bit of a problem: since I prefer Debian, having no straight dist-upgrade path from Lenny to Squeeze would be slightly annoying. To put it mildly.
It turns out that a few people realised this was going to be equally annoying, should such a thing occur, and they've been working on LXC: Linux Containers, and got the implementation into the main kernel tree. However there are a few issues with LXC:
- Documentation: it's thin on the ground, but if you're capable there is enough to get you going, and if absolutely necessary you can dive into the source (although I will admit it can take a bit of time if you're not familiar with the basics of linux kernel development [like me]).
- It's not as mature as OpenVz (the userspace tools, for instance, can do odd things, and there are data leaks [for instance in Squeeze at the moment you can see the host mounts via /proc/mounts], although many seem to be resolved in git).
The OpenVz team seem to be updating the OpenVz patches for a more recent kernel, however the decision by a few distros to migrate to LXC might provoke the same sort of effect that migrating to KVM had on Xen's popularity.
Anyway, here's my quick start guide:
1. You need a relatively recent kernel with control groups (cgroups) and capabilities support. If you're not planning on rolling your own kernel, the current kernel in Squeeze or Lucid should be fine.
2. Use the package manager of your choice to install lxc, bridge-utils and debootstrap. That should pull in everything you need for a basic install (if you want to run a Debian based container).
3. Run lxc-checkconfig and you should see everything is enabled and ready to go, with the exception of the cgroup memory controller, which LXC doesn't appear to require.
4. Now we need the control groups pseudo-filesystem set up:
mkdir /cgroups
and mounted (add the following to /etc/fstab
cgroup /cgroups cgroup defaults 0 0
and run mount).
5. Next choose how you're going to network your containers. In both these examples br0 is the device that the containers will be attached to. Aside from these 2 options there are other ways you could do it, but honestly I can envisage these being the most common options. The first option is to attach the containers straight onto your network by creating a bridge including eth0 (for example), which your containers will attach to. In this example your /etc/network/interfaces might look something like:
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    max_wait 0
    bridge_ports eth0
    address 192.168.0.3
    netmask 255.255.255.0
    gateway 192.168.0.1
The second option is to keep eth0 as a normal interface and put the containers on their own bridged subnet (bridging on a dummy interface; routing or NAT between the two is up to you):

allow-hotplug eth0
iface eth0 inet static
    address 192.168.0.3
    netmask 255.255.255.0
    gateway 192.168.0.1

auto br0
iface br0 inet static
    max_wait 0
    bridge_ports dummy0
    address 10.10.10.1
    netmask 255.255.255.0
6. Now you need to create the files for the container. debootstrap or febootstrap, etc. are your friends. I chose to keep my containers in /var/lxc/guests. So for my first container (called "one", which is a Lenny basic install):
debootstrap lenny /var/lxc/guests/one
You'll need to ensure that /etc/resolv.conf is set up correctly under /var/lxc/guests/one, as is /etc/network/interfaces. For resolv.conf you can most likely copy from the host node. Your interfaces will just need eth0 and lo set up with the correct IPs.
7. Next you need to create an LXC config. This file is pretty much full of voodoo and dragons. I've saved this one under /var/lxc/guests/one.conf:
lxc.utsname = one
lxc.tty = 4
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.ipv4 = 192.168.0.3/24
lxc.rootfs = /var/lxc/guests/one
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rwm
8. Next create the container:

lxc-create -n one -f /var/lxc/guests/one.conf

9. And now you can start it. I've discovered that you need to set the container to be daemonised, otherwise lxc-start will never return:

lxc-start -n one -d

To attach to your container just run:

lxc-console -n one

And to stop it:

lxc-stop -n one

10. You will notice at this point that lxc-ls returns 2 lists. The top list is the available containers, and the second is the currently running containers.
This is obviously by no means a definitive guide, but it is just what I've done this evening to get stuff up and running. I've not yet tried getting IPv6 working into the containers, mostly because I wanted to try the v4 networking in a few different ways, and it's now bed time. However looking at the docs, it shouldn't be all that tricky working straight out of the box - something that OpenVz doesn't do in all circumstances (the same can be said for Linux Vservers).
As for whether or not it's worth it... Let's just say it's not been unpleasant.
If you're looking after any more than a handful of servers, without something like SMS/MOM/something you've rolled yourself, then this is a real time saver.
Perhaps this'll save you a few minutes.
#!/bin/sh
# Plugin to graph the savings made by micromiser
if [ "$1" = "autoconf" ]; then
echo yes
exit 0
fi
if [ "$1" = "config" ]; then
echo 'graph_title Micromiser Savings (percentage)'
echo 'graph_args --upper-limit 100 -l 0'
echo 'graph_vlabel savings'
echo 'graph_category system'
echo 'savings.label savings'
echo 'savings.draw AREA'
echo 'savings.min 0'
exit 0
fi
RES=`grep Estimated /var/log/syslog | tail -1 | sed 's/.*(\([0-9\.]*\)%)$/\1/'`
echo -n "savings.value $RES"
For these reasons I'm taking a break away from the forums, however I intend to return to assist with the programmers forum and various bits of support again at some [undetermined] point in the future.
If you're using SCSI disks in your VMWare VM then you will first need to ensure that you add the IDE controller driver, to hopefully avoid a BSOD when you boot under Hyper-V for the first time. Why don't you just set Hyper-V to use SCSI disks? Sadly because Hyper-V cannot boot from SCSI. Once we'd added the driver and rebooted to ensure that it stuck, we simply ran Disk2VHD and pumped the VHD off to a network share.
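One common way to do that driver step (a hedged aside, not necessarily what we did; the service name assumes Hyper-V's emulated PIIX IDE controller, per Microsoft's general STOP 0x7B guidance) is to force the IDE driver service to start at boot on the source VM before converting:

reg add HKLM\SYSTEM\CurrentControlSet\Services\intelide /v Start /t REG_DWORD /d 0 /f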
Interestingly Windows 2003 x64 and 2008 were a lot more resistant to the change in “hardware” than older Windows versions, which needed a Windows repair, however I can't fault Disk2VHD for that as it was something I was expecting anyway.
What worried me most was that on our first run Disk2VHD produced a mangled VHD, which I managed to repair and get working by doing the following:
Fortunately all other conversions didn't seem to have this issue, and as much as I would've loved to investigate why this happened, I just didn't have the time.
The long and short of it is that if you're currently looking to use any flavour of Linux under Hyper-V, the “old” rules still apply:
- Use the legacy network adapter
- Set static MAC addresses under the VM settings (unless you want to faff with udev)
- Learn to live with the performance penalty
]]>The Virtual PC Guy (Ben Armstrong) has a few more details on the problem.
]]>To disable this you can set your receive connector not to use this feature:
Set-ReceiveConnector "Connector Name" -MaxAcknowledgementDelay 0
Further details are available on technet.
]]>Two options;
Set-RpcClientAccess -identity SERVERNAME -EncryptionRequired $false
]]>I loved using your beta interface, but it's been in beta for quite some time, and getting less reliable; the SSL issues that happened during June/July/August and the endless “Server Communication Error”s. Switching back to the normal interface just made me sad. Then I met Google Reader. I'd heard about her a few times in the past, but just brushed her off as some supermodel that I'd hear all about, but never meet. After the breaking of Google's services several times fairly recently I certainly didn't want to meet her - I wasn't sure that I could cope with breakage when I most needed it.
But whilst out this week I couldn't load your mobile interface, and I had no choice. Like some sort of rabid, sickly WoW user needing his daily heroic dungeon grinding fix (I'm over that now, honestly. I only think about it sometimes), I needed my precious news. That's when Google Reader came around the corner and I bumped into her. She picked me up, casually imported my OPML file and everything just worked. The organisation stayed the same, the mark-as-read-on-scroll, all those things that I loved about your beta interface were there. Then I realised what you were and what you had become.
You might be thinking, “but what about all that stuff you've got pinned with me?”. I realise that it's a shame, but at the end of the day I'll just have to treat it like I've forgotten to do a recent backup prior to migrating hardware. I could go back and grab the pinned stuff as and when I need it, or maybe transfer it to a wiki or bookmarks, but that would be too much.
I wish you a good life Bloglines, and I hope that you can understand. Maybe we'll see each other again soon at the shopping counter and make a little awkward conversation.
]]>To the KarmaSphere team, I wish you well with your new endeavours, and I hope that you continue to come up with innovative products!
]]>Zend_Form is pretty handy, and takes care of a lot of the hard work in producing and validating forms. Unfortunately the default decorators aren't quite sane in my opinion, which becomes obvious if you start using fieldsets - or display groups, as ZF refers to them - you'll see your fieldsets getting wrapped in an additional definition list, which is basically crap if you ask me. You can get rid of them with CSS, but it's tricky.
To avoid this, until today I've been using a set of custom decorators and getting increasingly frustrated at having to add more and more to support ZendX_JQuery elements, Zend_Form_Element_File, etc. Feeling somewhat defeated as I was hitting the limits of the decorators documentation, I started taking another look at the default decorators and what could be done with the minimal amount of tedious work, in terms of drop-in replacements and slight CSS alteration. I came up with a reasonably good solution that I cannot believe I didn't see before.
I've plonked the code below (sanitised from anything specific to my setup - such as HTMLPurifier integration, etc. - with a test form to help you see how it works) in case anyone happens to be interested. Given the unusual increase in comments I've been getting about using Zend Framework and the PHP SqlSrv extension, I figured it might be worth posting (I'm sure I'm not the only person out there who has missed the obvious).
What I've basically done is tell the form to render form elements, wrapped in a form. Display groups should then only display the elements, wrapped in a dl, wrapped in the fieldset. I've added some “plain” fieldsets which I use for control elements, such as submit or reset buttons. I've also given these items classes which allows me to remove the fieldset through styling, and move the positioning easily.
class My_Form extends Zend_Form
{
// Form
protected $_formDecorator = array('FormElements', 'Form');
// Display Groups
protected $_groupDecoratorStd = array('FormElements', array('HtmlTag', array('tag'=> 'dl')), 'Fieldset');
protected $_groupDecoratorCtl = array('FormElements', 'Fieldset');
// Ctls and hidden elements
protected $_elementDecoratorCtl = array('ViewHelper');
public function __construct($options = null)
{
parent::__construct($options);
// Add our HTMLPurifier filter(s)
$this->addElementPrefixPath('My_Filter', 'my/Filter/', 'filter');
// Add our Confirmation validator
$this->addElementPrefixPath('My_Validate', 'my/Validate/', 'validate');
// Add our custom elements path
$this->addPrefixPath('My_Form_Element', 'my/Form/Element/', Zend_Form::ELEMENT);
$this->setAttrib('accept-charset', 'UTF-8');
$this->setMethod('post');
$this->setDecorators($this->_formDecorator);
$this->setDisplayGroupDecorators($this->_groupDecoratorStd);
}
public function addAntiCSRF($salt, $timeout = 300, $name = 'no_csrf_foo')
{
$this->addElement('hash', $name, array(
'decorators' => $this->_elementDecoratorCtl,
'salt' => $salt,
'timeout' => $timeout
));
}
public function addSubmit($labelSubmit = 'Submit')
{
$this->addElement('submit', 'submit', array(
'label' => $labelSubmit,
'decorators' => $this->_elementDecoratorCtl,
'class' => 'submit'
));
$this->addDisplayGroup(
array('submit'), $this->getGroupName('controls'),
array(
'legend' => 'Controls',
'class' => $this->getControlsClass(),
'decorators' => $this->_groupDecoratorCtl
)
);
}
public function addSubmitReset($labelSubmit = 'Submit', $labelReset = 'Reset')
{
$this->addElement('submit', 'submit', array(
'label' => $labelSubmit,
'decorators' => $this->_elementDecoratorCtl,
'class' => 'submit'
));
$this->addElement('reset', 'reset', array(
'label' => $labelReset,
'decorators' => $this->_elementDecoratorCtl,
'class' => 'reset'
));
$this->addDisplayGroup(
array('submit', 'reset'), $this->getGroupName('controls'),
array(
'legend' => 'Controls',
'class' => $this->getControlsClass(),
'decorators' => $this->_groupDecoratorCtl
)
);
}
protected function getGroupName($name)
{
return get_class($this).'-'.$name;
}
protected function getControlsClass()
{
return array('controls', get_class($this).'-controls');
}
protected function getInputsClass()
{
return array('inputs', get_class($this).'-inputs');
}
}
class Test_Form extends My_Form
{
public function init()
{
$this->addElement('text', 'title', array(
'label' => 'Title',
'validators' => array(
array('StringLength', false, array(2,50))
),
));
$this->addElement('file', 'file', array(
'label' => 'file',
));
$elem = new ZendX_JQuery_Form_Element_DatePicker(
'dp',
array(
"label" => "Date Picker",
'validators' => array('Date'),
'required' => true
)
);
$this->addElement($elem);
$this->addElement(
'hidden', 'shush', array(
'value' => 'its quiet',
'decorators' => $this->_elementDecoratorCtl
)
);
$this->addElement('text', 'comments', array(
'label' => 'Comments',
'validators' => array(
array('StringLength', false, array(2,50))
),
));
$this->addDisplayGroup(
array('title', 'file', 'dp', 'comments'), 'inputs',
array(
'legend' => 'Inputs'
)
);
$this->addSubmitReset();
}
}
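Using it is no different to any other Zend_Form - something like this (the controller and view property names are illustrative):
// In a controller action:
$this->view->form = new Test_Form();
// In the view script:
echo $this->form;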
Granted there are other solutions, but this works rather well for what I needed - a drop in replacement and the (valid) HTML structure that I wanted.
You'll probably want to alter this if you're starting without any legacy stuff hanging over your head.
It should be noted this was written against Zend Framework 1.9.2, there's no guarantee that this will still be valid against future versions or older versions (although it's pretty likely and I've been using Zend_Form with few changes since early 1.x).
]]>The fact that SqlSrv will return PHP objects is rather nice, unless you already have existing code that assumes strings are returned, like almost all other database extensions available for PHP. The easiest “fix” to allow your code to work across as many systems as possible is to ensure that you pass in ReturnDatesAsStrings as an option.
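For example (a minimal sketch - the server, database and credentials are placeholders):
// Dates now come back as strings, not PHP DateTime objects
$conn = sqlsrv_connect('SERVER\SQLINSTANCE', array(
    'Database' => 'mydb',
    'UID' => 'user',
    'PWD' => 'pass',
    'ReturnDatesAsStrings' => true,
));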
]]>Holy crap it's awesome. Customise your setup as you want it, make sure you do the necessary system wide alterations (/etc/skel, and so on) and then just fire and forget. A few minutes later you either have an ISO or the CDFS ready to be altered before you turn it into an ISO. I didn't have to worry about clearing down rubbish, then mashing it all into squashfs. It Just Worked.
If you need to create a custom Debian based live CD quickly, then I encourage you to look at RemasterSys. It's quick and works like a dream.
[1] We have an old laptop with a mostly dead hard disk, and it's been running an Ubuntu live CD for months. Sadly it means that every time you shut it down certain things need to be reinstalled, such as Flash support. No longer is this an issue thanks to Redcatch Linux Five Thousand.
]]>Last week we had to get a 1500 series HP PSC working on a home workers terminal server session, and it turns out that the “proper” driver isn't correct and doesn't install. Luckily it seems that a lot of the HP PSC's use the same internals as the HP Deskjet series. As most of the Deskjet series work with the Deskjet 550C driver we tried the 550C for the HP PSC 1500, and it works like a dream.
Just thought the world might be interested in knowing.
]]>To break myself into the CouchDB world I started poking around at the capabilities, and mostly trying to not think of SQL-isms. Understanding map/reduce and getting your brain out of the SQL world is worth it, if for no other reason than to get a different perspective on data storage.
Unfortunately I decided, rather than write what I really thought would be interesting (a Thunderbird provider for couchdb, so that I can replicate my lightning calendar and contacts to my server, laptop and desktop, without using SyncKolab[1]), that it would be best if I started simple.
I chose to develop a pure CouchDB application, using the rather nice couchapp. I write web apps and this should be a gentle introduction, and fairly quick (which is what I wanted primarily). Really, how hard could it be? So I pulled down couchapp, did a bit of reading and built a VM to run couchdb 0.9, as several newer features are required than the current version in Debian stable. What I'd failed to realise at this point was that the “API” for pure couch applications is a bit in flux. After a few hours, over the course of a few evenings, I'd become somewhat frustrated, until I noticed a page entitled Formatting with Show and List on the CouchDB wiki tonight. A lot of the available code out there uses the new API, which explained why a hell of a lot made bugger all sense and why I'd almost started pulling down the couchdb 0.9 source to have a butchers.
Now this isn't a slur on anyone except me. I was so blinkered after the joy of understanding why non-SQL databases do have a place in the universe that I failed to search the wiki correctly.
To give the whole post a sysadmin-style slant, during this I'd started noticing the CouchDB database growing quickly. Now granted I was doing a lot of pushing of attachments, shows and lists, but the growth seemed rather disproportionate. After doing some tests of my own I hit the web and found that Joan Touzet has some interesting thoughts on the subject as well, which you should go and read now. Naturally, without putting my tests into the real world don't take it as gospel, but if you're using couchdb you might want to ensure that you're doing things the right way, lest your sysadmin smite you with the +2 damage hammer of server resources.
I'm trying very hard not to paint a bad picture of CouchDB here. Genuinely I think it's a good project, and although it certainly has a lot of competition, it does seem to have a lot of mindshare. There are also good points too; for one, backups are easy.
[1] Not that I have a problem with SyncKolab, it's serving me well, and probably will continue to do so.
]]>So you can imagine how I felt when I saw the announcement on the LKML. Drivers for Linux guests, in the kernel. Ok, so it's not in the mainline yet, but it is the start of good and great things.
To all those involved, I salute you!
]]>Sadly today was the day when spam started getting through my crazy system. This clearly was a signal from the gods themselves; to take the next step. The dreaded DNSBL. You might be surprised, but I don't like DNSBLs. In the past they've made my life hard at work - especially when we've inherited an IP that was previously used by spammers, in some way, shape or form. You see the thing is that some are actually on the ball and will check out and change their lists easily. Others won't. The problem is that maintaining your knowledge of the good ‘uns and the bad ‘uns is… well boring.
Karmasphere is supposed to take this hassle and make it easier; you can aggregate results from multiple sources, not just DNSBLs, and let Karmasphere decide for you. So if one or two don't keep up.. well that won't matter so much. In theory.
At the moment I'm adding headers to my mails and not rejecting, just to see how things fare, however in the future (assuming all goes well) I'd actually like to see this in real use. Perhaps even at work, in conjunction with our other spam filtering solutions.
So how did I do it (remember this is just logging, not actively rejecting)? Pretty easily, it must be said (I'm not going into detail). Installed the Karmasphere client:
perl -MCPAN -e 'install Mail::Karmasphere::Client'
Created an init script for the daemon at /etc/init.d/karmad-exim:
#! /bin/sh
#
# Starts KarmaSphere Daemon
#
set -e
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
NAME=karmad-exim
DAEMON=/usr/local/bin/$NAME
DESC="KarmaSphere Daemon"
KUSR=INSERT YOUR USERNAME HERE
KPWD=INSERT YOUR PASSWORD HERE
SCRIPTNAME=/etc/init.d/$NAME
# Gracefully exit if the package has been removed.
test -x $DAEMON || exit 0
case "$1" in
start)
echo -n "Starting $DESC: $NAME"
nohup $DAEMON --feedset=karmasphere.email-sender-ip --username=$KUSR --password=$KPWD --socketuser=Debian-exim 2>/dev/null 1>/dev/null &
echo "."
;;
stop)
echo "Stopping $DESC: $NAME."
killall -9 $NAME > /dev/null 2>&1
;;
restart)
echo "Restarting $DESC: $NAME."
$0 stop && sleep 1
$0 start
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|restart}" >&2
exit 1
;;
esac
exit 0
(and then created the relevant symlinks using update-rc.d)
Created and added the following to /etc/exim4/conf.d/acl/10_karmad-exim (which is pretty much line for line what you can find on Karmasphere's website):
karma_rcpt_acl:
# Check envelope sender
warn set acl_m9 = ${readsocket{/tmp/karmad}\
{ip=$sender_host_address\nhelo=$sender_helo_name\
\nsender=$sender_address\n\n}{20s}{\n}{socket failure}}
# Continue quietly on socket error
accept condition = ${if eq{$acl_m9}{socket failure}{yes}{no}}
message = Cannot connect to karmad
# Prepare answer and get results
warn set acl_m9 = ${sg{$acl_m9}{\N=(.*)\n\N}{=\"\$1\" }}
set acl_m8 = ${extract{value}{$acl_m9}{$value}{unknown}}
set acl_m7 = ${extract{data}{$acl_m9}{$value}{}}
# Check for fail
# Once happy with testing, replace with deny & condition - or perhaps do filtering into Junk as part of ~/.forward?
warn message = X-Karma: $acl_m8: $acl_m7
accept
deny !acl = karma_rcpt_acl
If you're not aware of what's going on here, then I'd suggest reading the Debian Exim4 split config documentation, and also the Karmasphere docs - you could break your mail server.
What this will now do is mark all of my mails, if it can talk to karmad, with X-Karma: 0: Comment. The lower the number, the more likely it is to be spam. -1000 to 1000 is the range, with it defaulting to 0 for neutral/unknown.
The only thing that would need to be altered is to add my whitelisting acl, if I do go live with it, and to decide whether or not I do filtering in ~/.forward or during SMTP rejection.
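If I do end up filtering in ~/.forward, an Exim filter along these lines should do it (a sketch - it assumes a Maildir setup and simply treats any negative karma value as junk):
# Exim filter
# File any message whose X-Karma header starts with a minus sign into .Junk
if $h_X-Karma: begins "-" then
    save $home/Maildir/.Junk/
endif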
]]>How exactly does the virtual machine get to a state where it knows that it's being backed up? After all, the last thing you want is a restore and the VM to believe it's the time when the backup occurred (which would leave it in an inconsistent/messy state).
I went looking and came across a video from TechNet which, about half way through, has a high level technical explanation of what happens when you use VSS to backup VMs, and what happens when the guest OS isn't VSS aware. Certainly interesting and potentially worrying stuff.
]]>HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters\MaxCacheTtl=dword:2A300
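As an importable .reg file that would be (0x2A300 being 172,800 seconds, i.e. 48 hours):
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters]
"MaxCacheTtl"=dword:0002a300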
]]>import-csv C:\Path\To\Users.csv | foreach-object {
New-QADUser -FirstName $_.givenname `
-LastName $_.surname `
-SamAccountName $_.samaccountname `
-ParentContainer 'my.domain/Path/To/OU/Users' `
-displayname $_.displayname `
-name $_.displayname `
| `
Set-QADUser -UserPassword $_.password | Enable-QADUser
}
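For reference, Users.csv needs columns matching the properties referenced above - along these lines (headers inferred from the script; the values are illustrative):
givenname,surname,samaccountname,displayname,password
Jane,Doe,jdoe,Jane Doe,P@ssw0rd1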
Just to add an SBS 2008 twist to it, if you create your users in this manner you'll find that they show up in Active Directory Users and Computers, but not the SBS Console. The reason for this is a special attribute on the object which doesn't get set. There's a nice article over at the SBS Blog which explains it for groups, but it's also applicable to users.
What it doesn't tell you is that for users, you can simply head into the SBS Console > Users, run the Change User Role for Accounts wizard, select Standard User, select your users (you'll need to select the checkbox to show all users), and then let it do its magic. It'll set up your users' Exchange mailboxes and shared folders in a jiffy.
]]>Basically what they're providing is a fault tolerant, brilliantly easy to use virtual server infrastructure, based on Linux and Xen. Unlike most other (more traditional) VPS providers you don't simply buy a VPS and then manage it; you actually buy “nodes” or “slices” (they're actually referred to by both terms in the web interface at present), which you then assign to various virtual servers in any fashion you want, with the ability to reassign at a later date should things change.
There are a few cool things about the whole setup:
I'd love to see how things fare as the product launches, and it gets busier, but so far I'd be pretty happy to convert my physical server to a virtual VPS.net instance, should the circumstances arise.
]]>What will you be doing?
]]>#!/usr/bin/perl
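# Munin plugin to graph machine uptime in days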
if ($ARGV[0] and $ARGV[0] eq "config")
{
print "graph_args --base 1000 -l 0\n";
print "graph_title Uptime in days\n";
print "graph_category system\n";
print "graph_vlabel uptime\n";
print "uptime.label days\n";
print "uptime.draw AREA\n";
exit 0;
}
$uptime = `uptime`;
# uptime reports no "day" field until the first full day; default to 0
($up) = $uptime =~ /up (\d+) day/;
$up = int($up || 0);
print "uptime.value $up\n";
]]>I have to say that I'm pleasantly surprised and also taken aback by a few things. I'm also a little disappointed about a few bits and bobs, but I'll get to that later.
Having tested Xen previously, and having got a book on it for quick reference, getting it set up was painless, quick and easy, once I'd upgraded my dom0 to Lenny to sort a few networking and Xen related issues. Whilst I may be a glutton for punishment for running without a GUI, I actually found setting up both Linux and Windows virtual machines painless from the command line.
One thing I was worried about was the performance of the Windows machines I'd set up - they were awful. I really do mean it. It was like being back in front of the AMD box that we were using for virtual machine testing at my old job 3 or 4 years ago. Turns out that Windows XP/2003 and the Xen ACPI implementation don't quite play nicely, and it's a case of making Windows use the “standard pc” “drivers”. Once I'd done this it was a hell of a lot better.
Continuing on the subject, I was also a little apprehensive of using LVM2 in my setup, as last time I'd used it I'd ended up with mangled data. Since it's now been quite a while since that happened (years now), the fact that it's very widely used, and a mention from Andy that he'd been using it at work for super secret projects, I went for it again. Happily there have been no problems and the disk performance of the domU's is excellent.
Xen has also given me my first real opportunity to use and play with Vyatta, a fully pre-packaged, commercially supported, open source alternative to Cisco, Juniper, etc. It's actually pretty sweet, it must be said. I like the JunOS-like interface for setting it up, how the config needs to be saved and committed, and how it's all accessible from the command line or web GUI (if you're that sort of person). If you're already using your own rolled Linux boxes as routers, then you might not be able to see the point behind Vyatta, and I must admit before trying it I was one of those people. However, it's simply that it's all there already, with support should you need it - which in the real world can sometimes be very useful (as I'm sure everyone knows, since people like Redhat, Canonical, IBM, etc. exist and thrive). As time goes on I hope to not only use it in testing, but also for creating a quarantined portion of my personal network here, and to connect the other “main house” network. Hopefully that'll work out nicely, and if it does I may well end up using Vyatta on my next externally hosted server (which may end up being set up in a xen-hosting style - let me know if you may be interested!).
The only shame with the Vyatta system, is the price of the hardware appliances from the commercial company. In comparison with Cisco's offering they are cheaper, but when you're looking at the small end of the companies we support at work, it's sometimes hard to justify using anything more than a really cheap Draytek or “worse”.
So on the whole things are going well with Xen. Right now I don't know if I'd suggest a Windows 2008 Hyper-V Core server or Xen at work, next time it comes up - I suspect that I'd suggest different solutions for different circumstances (i.e. Hyper-V Core for a Windows network with non-virtual AD boxes, and Xen for a colocation setup), but I can't really explain why when you exclude the obvious (such as manageability in each situation). Food for thought, perhaps.
Apparently I need to say Happy New Year also.
]]>I'm unsure if his patches have been merged into the main Xen source, but it's still an interesting read and useful if you're wanting to secure Xen domU's, or experiment with IPv6.
]]>Now I'm not saying that ‘Running Xen’ is a bad book. It's not. It's just missing that “something more”, that “je ne sais quoi” (I cannot believe I've just typed that and not removed it). If you're going somewhere without man pages or online docs, then it's an invaluable reference.
Worth £21 (its current cost on Amazon)? Without reading more books on Xen, I honestly don't know, but despite feeling as I do, I am glad that I've got it.
Mostly because I've had some awesome ideas, whilst on the bog with it.
]]>Thankfully, provided the redirection policy is no longer active, fixing it is pretty easy (although potentially time consuming depending on your setup): change the relevant entry under HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders, log off and back on, and then migrate the redirected files back into the local profile directory.
As far as I'm aware the affected products are;
If you're not using redirected app data directories any more then this is obviously a handy fix. If for whatever reason you're still using redirected directories and not roaming profiles, then you're screwed as it appears that Adobe aren't planning on fixing this.
Things like this really piss me off and just make me feel like the majority of my work is working around bugs, flaws or oversights, and it's just why I prefer open solutions and platforms; at least I'd have the possibility of trying to fix it in-house.
]]>Under a DFS share, any linked shares are created as junctions. It appears that the permissions on these junctions do affect the permissions of the data within the linked share. Whilst this is logical, given how junction points work, what really threw me was that the wonderful, wonderful GUI didn't reflect this and the permissions on the junction point had been inadvertently changed.
It's not like you ever need another reason to chalk one up for the command line, but there we go!
]]>KB239088 details the process. I found that the wizard wasn't much use at all - but it's not like the process is particularly complicated. Doing it manually also demonstrates that deploying the “fix” over multiple servers is child's play.
As 64 bit Terminal Servers become more common, and clients with printers stay at 32 bit (i.e. home or remote workers) I can see this becoming more relevant over the next few years.
]]>There are various reasons for this, and if you're not already aware of them I'd suggest listening to S5E21 yourself. To have a hovis moment, I discovered LUGRadio at the start of season 2, which culminated in LRL 2005. I never made that LRL, nor the following 2 UK events due to various reasons, despite my best intentions and plans of going. So here I am, stuck with a dilemma - do I follow my own little personal tradition or should I say “sod it all” and head up to Wolverhampton for the last big bash from the 4 large gents? As the gents would say; answers on a postcard!
If, like me, you're looking to fill the void soon to be left by LUGRadio do not despair, for there are alternatives (but never replacements) which I shall try over the coming weeks:
- Linux Outlaws
- Ubuntu UK Podcast
- Linux Action Show
- The Linux Link Tech Show (TLLTS)
If I don't make it to LRL 2008 let me take this opportunity to thank the presenters (Jono Bacon, Stuart Langridge, Stephen Parkes, Matthew Revell, Ade Bradshaw, Adam Sweet and Chris Procter) who over the years have given me many laughs and much enjoyment.
]]>There's all sorts of news on this, but little in the way of unix and unix-like related info on the web. Despite having 2 customers with it at work, I've not had the opportunity to try any of the unix-like systems on it either.
Sean on the other hand has had the time and opportunity, and has posted a nice round up of Linux distros which work and a few work arounds for known issues, all in his entry entitled “Linux on Hyper-V”.
]]>To cut a long story short the disk space was regained, but any logon attempts to the terminal server yielded a completely black screen, with the exception of the Microsoft logo. We figured it was a client side caching problem, but it was not so.
Turns out that when the disk space on the primary partition (C:) fills up the default colours can be overwritten, which results in the black logon screen.
KB906510 details the fix, but not so much that it's caused by the disk space issue. If you're looking for a quick fix to the default colours, then just save the following as a reg file and import it.
Windows Registry Editor Version 5.00
[HKEY_USERS\.DEFAULT\Control Panel\Colors]
"ActiveBorder"="212 208 200"
"ActiveTitle"="10 36 106"
"AppWorkSpace"="128 128 128"
"Background"="102 111 116"
"ButtonAlternateFace"="181 181 181"
"ButtonDkShadow"="64 64 64"
"ButtonFace"="212 208 200"
"ButtonHilight"="255 255 255"
"ButtonLight"="212 208 200"
"ButtonShadow"="128 128 128"
"ButtonText"="0 0 0"
"GradientActiveTitle"="166 202 240"
"GradientInactiveTitle"="192 192 192"
"GrayText"="128 128 128"
"Hilight"="10 36 106"
"HilightText"="255 255 255"
"HotTrackingColor"="0 0 128"
"InactiveBorder"="212 208 200"
"InactiveTitle"="128 128 128"
"InactiveTitleText"="212 208 200"
"InfoText"="0 0 0"
"InfoWindow"="255 255 225"
"Menu"="212 208 200"
"MenuText"="0 0 0"
"Scrollbar"="212 208 200"
"TitleText"="255 255 255"
"Window"="255 255 255"
"WindowFrame"="0 0 0"
"WindowText"="0 0 0"
Or if you don't trust me just export HKU\.DEFAULT\Control Panel\Colors from a “working” Windows server. The effects are instant.
]]>If you don't know what that means then a more apt description would be simple-to-create rules that allow you to do anything from appending text to the bottom of an email, to applying filters on messages between both internal and external users. If you use the GUI, think of an Outlook rules style interface that generates rules which are actioned at the server.
This allows you to do all sorts of cool things. For example even if you don't have an Edge Transport server you can block incoming emails from certain recipients. You can prevent two internal users from mailing each other if a message contains certain strings. You can append a disclaimer to all outgoing emails.
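As a concrete example, the disclaimer case looks something like this from the Exchange Management Shell (a sketch using the Exchange 2010 cmdlet parameters; the 2007 shell differs slightly):
# Append an HTML disclaimer to mail leaving the organisation
New-TransportRule -Name "Outbound disclaimer" `
    -FromScope InOrganization -SentToScope NotInOrganization `
    -ApplyHtmlDisclaimerText "<p>This email is confidential.</p>"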
If this has tickled your fancy there are 2 ways you can check it out;
The cool thing about transport rules is that you can have a bit of fun with them, as things like bounce messages are completely configurable. So if you're wondering why you're receiving weird error messages, from an Exchange server, you might have a new potential answer:
#5.7.1 smtp;550 5.7.1: Computer says no…
]]>I wish Shannon all the luck in the world in whatever he decides to do, and I thank him for what he's done for the modification culture. His work has certainly helped to shape a significant part of my beliefs over recent years. The people I have met have been amongst the kindest, the politest and the most social people I've had the good fortune to deal with, and I only hope that in his absence things continue that way.
]]>Like Reiser[3|4], ZFS is one I'd heard about, did some research on, but never considered using at all. The fact that it currently only runs on Solaris or via FUSE under Linux (which in itself can be considered to be a benefit, as the filesystem is recoverable and separate from the kernel - performance supposedly sucks though) had kind of put me off a bit.
If you're unfamiliar with ZFS and its features, then may I suggest taking a quick look-see at the ZFS Wikipedia article. There are many pretty cool features in ZFS, such as the concept of pools (and everything that comes with them, such as growing pools with the file systems mounted - very slick), the sheer capacity, RAID-Z, etc., all of which help it to sustain multiple disk failures in a RAID-Z2 array, much like you'd see in RAID-6, except this is achieved within the filesystem itself. Granted you might not see someone attacking your drives with a sledge, but you never know what might happen some days…
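To give a flavour of the admin side, creating a double-parity pool and a filesystem within it is about this simple (a sketch - the pool name and device names are illustrative):
# Create a RAID-Z2 pool from four disks, then a filesystem inside it
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0
zfs create tank/home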
The video is certainly aimed at managers or some sort of technical head, but you cannot deny it. That. Is. Awesome. I've been considering creating a small box, with multiple SATA hard disks in a separate enclosure (possibly attaching the enclosure to a mini, pico or nano ITX box) to create a home-grown NAS box, and ZFS certainly seems interesting enough to consider as an option, considering that iSCSI, NFS and CIFS (aka SMB or Windows sharing) support is now built into the kernel (interesting decision perhaps?), plus Samba is running on Nexenta as well. My only hesitation is the work done on Nexenta - GNU tools sat on top of the OpenSolaris kernel. I'm familiar with the various tools used by this distro and it would speed up my understanding of what I'd be using, however the rate of packaging and development seems to fluctuate. Playing with it in a virtual environment is going to be limiting at the end of the day, and my spares box won't cover something of this scale, so maybe I'll have to jump in with both feet Real Soon(TM)…
Does anyone have any practical experience with ZFS? Is it mature enough to trust my files and believe that I won't have to go through the pain of restorations?
]]>Astonishingly they captured 10,000 unique devices (supposedly) over 6 months, from various locations, including “the pub”, which appears unnamed. Now whilst I usually disable my bluetooth when I'm not using it (one, because the battery life on my k800i is slowly going the way of the electron fairies, and two, because I don't want stuff like this happening, or my phone being subjected to anything that it shouldn't), it makes me wonder if the people who were being tracked were informed. Granted it's interesting work nonetheless, although the fact that they used a pub strikes me as a good excuse for other activities; “Err… yes we're in the pub. But don't worry, it's all in the name of science!”
However this could serve as a practical wake up call for those who object to lack of privacy but aren't technologically aware.
]]>In the past we've always manually configured the thin clients, as they've always been rolled out over a long period, typically in very small quantities (no more than one or two), slowly replacing aging computers. However, this project is effectively a new cluster of terminal servers (replacing some aging hardware, it was decided that it would be a good opportunity to do things properly). Personally I didn't really like the prospect of going to each one in the building and manually fixing the config, or using a CNAME in the local DNS, as many probably need firmware updating and altering in other ways to bring them in line with the rest.
In the past I knew that the Wyse S10's (which is the model we've mostly got deployed with this client) were administratively configurable (by this I mean en masse from a single point, à la group policy), but I had never really gone hunting for some decent docs until three or four weeks ago.
I stumbled across freewysemonkeys.com, an absolute gold mine of documentation for anything Wyse thin client related without having to wade through all the other “user guides” from Wyse themselves. Whilst the docs weren't always 100% accurate for our model they did give me enough of a leg up to get everything we needed working perfectly on all but the oldest S10's (unfortunately getting a firmware update from Wyse currently requires a support contract, according to their site).
A few days prior to this project I had tested it on another, much smaller client, as I was on-site virtualising part of their systems, and it worked awesomely well. Based on the configuration we will now be able to tell if a thin client has a proper network connection by simply asking what colour the background of the thin client is, set up various remote connections, wallpapers and icons, automatically reflash the firmware if there's a newer version available, etc., all of which is picked up from some basic DHCP scope options and an FTP server.
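The DHCP side is only a couple of scope options. With ISC dhcpd it would look something like this (a sketch - options 161/162 are what WTOS-based Wyse clients conventionally read for the FTP server and root path, but check the docs for your firmware; the address and path are illustrative):
# Declare the Wyse options, then set them for the scope
option wyse-ftp-server code 161 = text;
option wyse-ftp-path code 162 = text;
option wyse-ftp-server "192.168.0.10";
option wyse-ftp-path "/wyse";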
There are other solutions for the various thin clients out there, so if you're still manually configuring stuff by hand before shipping out to a customer, I recommend investigating for your chosen make and model, no matter how small your deployment is. It'll make your life as an administrator, and the life of your users, much easier.
]]>First thing I did was to head into the ISA console and setup an IPSec tunnel, using almost all of the defaults (this is important as the settings for the Draytek must match the ISA/Windows defaults). If you're not familiar with ISA, then the process is roughly as follows;
The Vigor then needs to be configured, to match the ISA server;
Set the Profile Name to anything you like, it's just a name - I used the same name that I gave the network in ISA. Tick “Enable this profile” and select both for Call Direction.
Dial-Out Settings
Select IPSec tunnel, set the “Server IP/Host Name for VPN” to the external IP of your ISA server (or whatever you selected in your IPSec tunnel setup in ISA). Set the IKE Pre-Shared Key to the same as in ISA, or if you used a certificate set the Digital Signature. Under IPSec Security Method set “High (ESP) 3DES with Authentication”. Click advanced to open a new window and check “Main mode”, set IKE phase 1 proposal to 3DES_SHA1_G2, IKE phase 2 proposal to 3DES_SHA1/DES_MD5, IKE phase 1 key lifetime to 28800, IKE phase 2 key lifetime to 3600, enable PFS and click OK.
TCP/IP Network Settings
Set the WAN IP and the Remote Gateway IP to 0.0.0.0. Set the Remote Network IP to your internal subnet host address, and the Remote Network Mask to your internal subnet mask (by internal I mean the subnet protected by ISA). Disable RIP (unless you want to use it), and set the NAT operation to Private IP. We didn't need to set this as the default route, but this is obviously your own design decision.
You should now be good to go, and your Vigor and ISA box will negotiate and encrypt all traffic that travels between the 2 subnets, as it should. To check on the Vigor you can head to connection management and check out whether or not the tunnel is currently up, and that it's encrypted.
There are various reasons for opting for an IPSec tunnel, however the major one is that it's one of the easier tunnels to create, and it's secure. You could of course opt for a site-to-site PPTP or L2TP/IPSec VPN, however these come with their own complications and security issues.
If the Vigor 2800 is anything to go by, I'd say that the Draytek routers are a nice bit of kit which work well for SOHO deployments, although there are a number of things in the GUI which really started to cheese me off after a while. There's no denying that they're not as flexible as other solutions, but they're nowhere near as simple as other routers which you could use. I'm still not entirely convinced that they're the perfect solution, although Chris has fallen in love with them, but at the end of the day for this sort of job you're rarely going to be touching them once they're up and working, so the cost savings over an equivalent Cisco box may pay off at the end of the day.
]]>What this feature allows you to do is configure membership of groups within Active Directory, or in the local groups of domain computers. It's also available in the local security policy (naturally), so you can also use it on a standalone machine (although I'd imagine that in this situation it would be rather less useful).
Why do I now consider this setting important? Because it allows you to set up a GPO for an OU to allow users to be a member of a given local group, such as Remote Desktop Users, for instance. This first example is useful to me as I didn't want users to be a member of the AD Remote Desktop Users group and have RDP access all over the network by default. This allows me to add a group of users to the local RDU group, and now set up a Terminal Server entirely automatically once it's been added to the correct OU.
The second example is forcing membership to the local administrators group. This is useful in stopping fiddlers (who “require” local administrator rights on laptops) from removing Domain Admins, or other groups and users, from the local admin group. Whilst I've only ever been locked out of a user's laptop once because of this, I'd rather not go through that again.
Another benefit of using the setting is that it will automatically remove any local user accounts that should not be a member of the local admins group. I'm sure you can imagine why this is useful!
]]>In true Symantec style the documentation for the workaround is reportedly sketchy and the available fix doesn't entirely sort the problem, at this stage.
Thanks to the SBS blog, and Mike Lieser, for bringing this to my attention.
]]>So, the news that IBM is due to be handing off at least part of its x86 server tech to Lenovo doesn't fill me with hope. Prior to the Lenovo desktop and laptop takeover we put many IBM desktops and laptops in for customers. Once the switchover occurred we saw the speed of responses for hardware faults change drastically overnight, and not for the better in my opinion. No longer was it cost effective to actually use them. It's this that worries me. When you get a server and build it with a RAID array, and the customer can't stretch the budget to an additional hotspare, you can occasionally see multiple hard drive failures in a very short time. Granted this is a very sweeping statement, so feel free to shoot me. Even so, it's a possibility. If we have to wait more than a few days to get a replacement, I'd be itchy all over. The last thing you want to have to do is a restore. And we all know how fun they can be, no matter how well planned and practised you are.
Whilst we'll all have to wait and see what they come out with at the end of this, I honestly can't see any value added features from IBM for resellers or the SME any more.
The question then becomes, where do you go? Dell? HP? Fujitsu Siemens? I can foresee problems with each, sadly.
]]>Interestingly it appears that IBM, and Dell, are still distributing installation aids with the older drivers, which cause issues in this situation.
To be relatively technical, it appears that ARP packets aren't responded to or sent out correctly. [1]
[1] I say this, but I have no network dumps to back this hypothesis up. At present all I have is the experience of debugging the problem and watching the ARP cache of each machine affected fill with invalid entries.
]]>I've not discovered why this happens, but this has fixed it on my main desktop and on my virtual machines. I've only had a moment to test this on a PPTP VPN at the moment, but you may find it works for L2TP/IPSec.
]]>Today I've done just that, using Clonezilla, an open source clone of Symantec / Norton Ghost based on Debian. I'm sorry Redhat-lovers, but I think the world is trying to tell you something when awesome tools are created from a Debian base ;) Now the obligatory distro bashing is out of the way, we can continue.
This is the first time I've used Clonezilla, and I've got to say that I'm very impressed. You can send and receive an image to and from a variety of filesystems, both network and local, and is capable of compressing the files quite nicely. If you're running a server you can also do some clever multi-cast magic for casting a number of machines simultaneously. Nice.
I won't bore you further with how awesome Clonezilla is, and I'll get to the nitty gritty. I saved and restored my image to and from an SMB (Windows) share, which you may find a bit tricky if you're not up together with the ins and outs of running Linux on MSVS2005SP1R2. I've written about this before, but rather than point you at a previous article and tell you what to ignore, I've popped my steps together below. The reason for this is that all you need to do is get the networking up and running, as the “GUI” is simply a curses interface, and the clock issues shouldn't matter as you're doing this over a very short space of time.
- Burn the Clonezilla CD, boot it and make an image of your machine to your chosen SMB share. There are plenty of tutorials for this if you get stuck. I won't give you a step by step guide, I'm afraid.
- Create a virtual machine and set it to boot from the Clonezilla iso or CD. Turn it on.
- Select whatever boot option you need to, and once into the GUI drop to a terminal (Ctrl+Alt+Fx, where Fx is an F key. F1 is the GUI, so I'd suggest F2).
- Login as root:
sudo su
ifconfig -a
You should see eth0, unconfigured. If you don't then you need to load the driver for the emulated NIC:
modprobe tulip
Then edit the interfaces file:
vim /etc/network/interfaces
and add the following at the bottom, to get it running on DHCP:
auto eth0
iface eth0 inet dhcp
Save and close. Restart the networking stack:
/etc/init.d/networking restart
Check that you've got an IP assigned to eth0, by using
ifconfig
Thanks to mypapit gnu/linux blog, via PlanetSysadmin, for bringing this awesome tool to my attention.
]]>Be aware that LUGRadio is known for bad language and a very British sense of humour, so if either of these things offend you have been warned.
]]>However, Microsoft have just opened up a new hotfix ‘blog here with the following schedule.
You might find this a bit more palatable, if you're deep into a feed aggregation addiction.
Go on. Add it to the rest of your Microsoft security and update related feeds. I dare you.
]]>However, something else has been playing on my mind, and that's the Logitech G25. It's an affordable steering wheel from Logitech, with a half decent H-gate and sequential shifter, 3 pedals, a full 900 degree rotation wheel with 2 paddle shifters, all with hand stitched leather. This is a wheel of dreams for a sim racing enthusiast, which I have been for some time. It's cheaper. Much, much cheaper than the alternatives, which weigh in at several hundred quid, and a far superior spec than the lesser models.
The G25 isn't new, it's been doing the rounds for a few months and I've not heard many horror stories yet (aside from there being a few problems with reverse when the gears are in H-gate mode and some arriving and breaking very quickly). Granted there are probably going to be the usual problems with the Logitech wheels, but considering I've been using a rather knackered generation 1 Saitek R440 Force for the last several years (which I maintain is partially responsible for driving me away from simracing) anything would be an improvement.
Unfortunately I've not had much time to play, what with all my other projects and interests. With the newest patch of Live For Speed coming soon, and the promise of stallable engines, a brand new, real, car based on a BMW single seater, and a host of other improvements, I've decided I'm going to dedicate more time to get back into the seat. Hopefully it will help with the cost of driving in real life as well.
So what this all means is that the laptop project is going to go on the back burner for a month or two, as I've just shelled out for a G25. From Dell of all people. Usually this is the sort of thing I'd order on a next day delivery, however I'm out Tuesday and Wednesday, and with Dell offering free 3-5 day delivery and a 15% deduction on its RRP, it was practically a no-brainer.
Now I've never personally ordered from Dell before, and I've only heard bad things. But, if they can get this wheel to me, on time, without me having to reorder from somewhere else, then I might have to look into their server kit with a bit more gusto.
]]>i8042.noloop clock=pit
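On a Debian guest these get appended to the kernel line in /boot/grub/menu.lst, something like this (a sketch - the kernel version and root device will differ on your system):
kernel /boot/vmlinuz-2.6.18-6-686 root=/dev/hda1 ro i8042.noloop clock=pit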
If you want to know why this works then carry on reading.. Still here? This is probably quite boring, so I don't blame you if you give up half way through!
Generally an OS gets the current time from the CMOS during startup, and then sets up a timer to generate periodic interrupts. The OS keeps track of time by counting these interrupts. However, when a virtual machine generates a timer interrupt, the guest OS may not be running, therefore the guest OS may not immediately account for some of these interrupts and thus “lose” time. To work around this issue, the virtual machine keeps a backlog of these interrupts. Additionally, the virtual machine increases the frequency of timer interrupts when it is running. The increased frequency of timer interrupts is intended to help the guest operating system maintain the correct time. However, the increased frequency of these interrupts could cause the guest to miss some of the interrupts. These missed interrupts are known as “lost ticks”. Lost ticks cause the time on the guest OS to lag behind.
The Linux 2.4 kernel doesn't have any way of accounting for or dealing with these lost ticks, and this can cause the time to lag. However, the Linux 2.6 kernel has a number of clever algorithms to deal with them. Unfortunately this causes some other problems, by the look of it. I don't know exactly why, but I assume it's to do with how it's dealing with the time and internal scheduling, perhaps (I am not a kernel developer, this is a bit of a stab in the dark)? What I can tell you is that by telling the kernel to use the pit algorithm you are telling it to not use any lost tick correction.
I'm assuming that this is probably going to become a staple thing to do when running Linux 2.6 kernels under VS 2005R2SP1 - at least for a little while.
]]>About 2 weeks ago I got the results from my interviews with Blizzard. I received several phone calls from them whilst I was out and about for my current employer. Eventually I got an email from my contact asking what number was best to get me on, and I responded, apologising and giving my details.
They phoned in due course and they offered me the job. I was unbelievably gobsmacked. I seriously didn't think that I had any chance of even going to see them, let alone being offered the job. I said my thank yous and asked for a few days to think it over.
Unfortunately I then spent the next few evenings out at the cinema, or at the pub, and then the rest of the day asleep (it was the weekend, before you panic). It was the first time I'd seen my mates in a few weeks, so whilst it was cool to see them and chat about stuff, it was weighing on my mind a fair bit. My best friend was also out of contact during this time, so I couldn't chat to her about stuff. Fortunately I have a small number of really great friends that I can talk to about this sort of stuff. To those guys and gals I apologise, as all I did was talk at you, but I really, really appreciated it as it just let me get my thoughts out.
Eventually I reached my decision and informed Blizzard that I could not accept the position. I couldn't quite believe that I was telling them this, for a start. I was giving up the opportunity to play with some really, really cool technology (sorry, I can't really tell you what little I learnt about their setup - please stop asking :P), and I was giving up the opportunity of working with some undeniably cool people that I had met. However, I just wasn't sure about the prospect - the fact I'd be moving to France, a country with different customs and a language I didn't speak, was really daunting, and I personally found the pay to be a little less than expected based on similar roles I've been offered in the UK in the past. Now, before anyone gets the wrong idea it wasn't bad, and money is certainly not the most important thing to me - happiness is - and there were other benefits. However, with the risk of moving to a country that I might not enjoy living in for some time, I just could not do it. So based on this decision I declined.
A day or so later they got back in touch with me regarding my decision, and I had another phone conversation with the fantastic HR guy that I'd been dealing with. He took me through their offer, in detail, which I'll admit was a fantastic offer that I've never seen another company match (it does offset the money thing quite a bit, but that's not something I can go into), and then asked if I was still sure. I asked if I could have another few days and said I would get back to them this Monday. Over the weekend I thought about it all. The geek factor and the prestige of having Blizzard on your CV is undoubtedly unbelievably cool. Sadly my feelings didn't change, so I declined again. I have the belief that if you're really not sure then you shouldn't go for a job in your chosen career; it's just not fair to the employer or yourself.
So there it ends, at least for now. Who knows, maybe things will be massively different in a few years' time, and maybe I'll get a similar opportunity. The whole experience has taught me a few things about myself though. I'm clearly either good in interviews, or technically capable. I hope it's both, but I'd settle for technically capable over good interview skills any day. I'm also perfectly capable of dealing with obstacles in my way in life (see this post for details on the actual in person interview and the “nightmare”), which is something that I'd never have been able to do a few years ago. So, on those grounds, thank you Blizzard.
I'd almost say, that I'm happy in life at the moment. Well, maybe a begrudging indifference at least.
Now, all I need to do is gently nudge a certain maths teacher (Mark Rawlinson) who really introduced me to computers, and Blizzard games, in the first place. He'd really freak out if he knew I'd turned this down! I'd also like to say thank you - for various reasons.
]]>As always there's a petition against this, which can also be found online here. If you're in the area, or have ever been to Castlecoombe and had a good time, then you might want to spend a minute or two of your time signing it.
Personally I've always had a good time, every time I've been to Castlecoombe - both in the rain and in the sun. It's a brilliant place and it would be a shame to see it close.
]]>Luckily I still had this stashed away for just such an eventuality as the one that precipitated my requirements today. But that's not the point.
]]>Maybe I should step back. Not many people are familiar with Centro. I mean EBS. Once upon a time (way back in 1997) there was BackOffice Small Business Server, a product developed by Microsoft for the small business, bundling many of the commonly used features into a nice little product. Over time this became Windows Small Business Server (SBS)*, and all was good (subjective to opinion, of course - but lets not get political).
SBS is the bastard child of the “regular” Windows domain, as it provides a single CAL for everything, and is effectively cheaper. Great, except that it has limits - 75 users or devices (dependent on CAL used), only one computer in a domain can be running SBS, SBS must be the root of the Active Directory forest, trusts cannot be set up with any other domains, and it cannot have any child domains. Which is a good thing really, as 75 users on a single server is getting a bit iffy, especially as companies of this size are starting to use SQL heavily, and you can't use the SBS SQL licence on another machine.
Enter EBS - a product designed to split the roles, much like you would in a “regular” windows domain, across multiple servers. In the case of EBS, it's actually 3. Now here's the problem Microsoft; Essential Business Server. This sounds like a single product, for a single server. Having spoken to some “lesser” technical guys online, who actually weren't aware of Centro, they came to that conclusion. If even average technical users make the assumption that it's a single product for a single server, not a bundle, then how are managers supposed to make this call? Granted Centro, I mean EBS, is still in the realm of having an external IT support team make decisions for you, and no dedicated internal IT roles, but how many SBS deployments are there around that have been deployed internally by someone who has inherited the “part time IT part time something-else” post? Quite a lot I'd imagine, since this is who SBS is aimed at.
Eventually the question will become “How do I transition from SBS to EBS?”. Right now it doesn't look like there's been anything announced (or if there has then I've missed it), but I'd imagine that the SBS transition pack will be refactored into transitioning from SBS to EBS, and then we'll see a new transition pack that will take EBS to standard domain licensing. Hopefully they'll retain a transition pack to go from SBS to a standard domain, but the pessimist in me thinks otherwise.
This is all well and good, unless the virtual machines are running in a production environment and you need to do this at 2am. Like hell do I want to stay up for that, especially as each stage can take some time, depending on the load on the virtual server host.
Also thankfully it's possible to automate this behaviour (thanks to David Wang, this script is heavily based on his existing work)!
- Copy or make the precompactor available to each virtual guest that you wish to compact.
- Schedule the precompact to run using Scheduled Tasks, within each virtual server guest. Allow for at least 2 hours, depending on the size of the disk(s). “precompact.exe -Silent -SetDisks:cde” would automatically, and silently, compact drives C, D and E. Amend or script as appropriate.
- On the virtual server host you then set up a Scheduled Task to run the script below: “cscript path\to\server\filename.js”, ensuring that you edit strServer and arrVMNames as appropriate, and that you leave enough time for all the precompacting to complete. Ensure that the script is run by a user account with the correct privileges.
// Automated VHD compacting
// Heavily based on a script by David Wang, http://blogs.msdn.com/david.wang/archive/2006/04/17/HOWTO-Perform-VHD-Maintenance-Automatically.aspx
// Usage:
// Edit the strServer and arrVMNames variables below, as appropriate
// Schedule "precompact.exe -Silent -SetDisks:cde", where "cde" are the drives to run the precompact against,
// to run a few hours before this script, inside the virtual machines
// Then on the host machine, schedule this script to run (command below), ensuring that there's enough
// time for the precompact to have finished
// "cscript automated-vhd-compact-custom-multiple.js"
// definitions
// amount of time, in milliseconds, that the script should sleep
var GUEST_OS_SLEEP_RESOLUTION = 250;
var ERROR_FILE_NOT_FOUND = 2;
var CLEAR_LINE = String.fromCharCode( 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8 );
var VM_STATE_OFF = 1;
var VM_STATE_SAVED = 2;
var VM_STATE_RUNNING = 5;
var VM_STATE_PAUSED = 6;
// Config
// server that Microsoft Virtual Server 2005 is running on
var strServer = "localhost";
// array of names - each virtual machine you wish to compact
var arrVMNames = ["Standalone - Xenon - XP", "Webserver - Tungsten - 2003 Std"];
var objVS = new ActiveXObject("VirtualServer.Application", strServer);
for (var i = 0; i < arrVMNames.length; i++)
{
var objVM = objVS.FindVirtualMachine(arrVMNames[i]);
var task;
if (objVM == null)
{
LogEcho("Virtual Machine " + arrVMNames[i] + " was not found on server " + strServer);
Quit(ERROR_FILE_NOT_FOUND);
}
LogEcho("Selected Virtual Machine " + arrVMNames[i] + " on server " + strServer);
if (objVM.State != VM_STATE_RUNNING)
{
// if the VM wasn't running, then the precompact didn't run,
// therefore there is no point in even running the compact
LogEcho(arrVMNames[i] + " is not running");
continue;
}
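// save the VM rather than shutting it down: its state (and the precompact's
// work) is preserved, and a saved VM's VHDs are no longer in use, so they
// can be compacted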
LogEcho("Saving VM...");
task = objVM.Save();
WaitForTask(task);
LogEcho("Compacting VHDs");
var enumHardDiskConnection = new Enumerator(objVM.HardDiskConnections);
var objHardDiskConnection;
var objHardDisk;
while (!enumHardDiskConnection.atEnd())
{
try
{
objHardDiskConnection = enumHardDiskConnection.item();
objHardDisk = objHardDiskConnection.HardDisk;
LogEcho("Compacting " + objHardDisk.File);
task = objHardDisk.Compact();
WaitForTask( task );
}
catch (e)
{
LogEcho(FormatErrorString(e));
}
enumHardDiskConnection.moveNext();
}
LogEcho("Compact done!");
LogEcho("Starting up " + arrVMNames[i]);
task = objVM.StartUp();
WaitForTask(task);
LogEcho("Startup done!");
}
LogEcho("Done!");
function Quit(errorNumber)
{
WScript.Quit(errorNumber);
}
function LogEcho(str)
{
WScript.Echo(str);
}
function FormatErrorString(e)
{
return e.number + ": " + e.description;
}
function WaitForTask(task)
{
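// poll the task, backspacing over the previous output so the percentage
// updates in place on a single line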
var complete;
var strLine = "";
var cchLine = 0;
while ((complete = task.PercentCompleted) <= 100)
{
strLine = CLEAR_LINE.substring( 0, cchLine ) + complete + "% ";
// this should not exceed CLEAR_LINE
cchLine = strLine.length;
WScript.Stdout.Write(strLine);
if (complete >= 100)
{
// delete the % display so that next line is clean.
WScript.Stdout.Write(CLEAR_LINE);
break;
}
WScript.Sleep(GUEST_OS_SLEEP_RESOLUTION);
}
}
Randy talks about the lessons he's learnt and gives advice on how to achieve your dreams, by recounting his experiences in life. I honestly found this upsetting, as I don't really remember what my childhood dreams were. I think at one point I wanted to be a super-hero, but what boy doesn't? Unfortunately, as I'm not into cosplay and I'm not an actor, I don't think this is entirely achievable.
Maybe it's time to get some new dreams…
My thoughts go out to Randy's family, and I appreciate the time the Carnegie Mellon staff, and Randy himself, put into making this lecture publicly available / possible. It's really made me consider things very carefully again.
This also got me thinking about a solution we've been using ourselves at work (very successfully, I might add) for over a year now, whereby we've virtualised our entire Windows, and partially our non-Windows (phone system), infrastructure. We've rolled out cut-down versions of this to other clients, whereby we've virtualised one server (such as a terminal server). We're now going for the whole shebang, with one key difference: we're also virtualising SBS, within a multi-terminal-server environment. SBS is the one thing that's worried me, but this morning I was happy to see that a mad Canadian has also done it (although they actually deployed it on real customers while we were still playing with the idea on our own network). Hurrah!
For those concerned about virtual environments I recommend doing some play/testing. It certainly works, and makes backup and restoration a practical doddle once you've got a new host box. Granted it does increase the strain on your backups, but with the cost of a decent LTO3 drive and tapes within reach of even small customers, it's a no-brainer. Just make sure those backups go off-site.
However, for the upcoming year, here's a good list of how you can make the life of your resident IT expert easier. Honest.
I've simply patched the executables to look at the new master address, ensuring the executable size remained the same, and with a little bit of luck discovered that the master server is still serving unlock requests from older clients.
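For the curious, the patch itself is nothing clever. Here's a minimal sketch of the idea in Python; the filename, hostnames, and the assumption that the address sits in the binary as a plain NUL-terminated string are purely illustrative, not the real values:
# A same-size string patch: the replacement is padded with NULs so the
# executable size, and every offset within it, stays exactly the same.
old = b"master.old-address.example"
new = b"master.new-address.example"
assert len(new) <= len(old), "replacement must not be longer than the original"
new += b"\x00" * (len(old) - len(new))
data = open("client.exe", "rb").read()
assert old in data, "original string not found"
open("client-patched.exe", "wb").write(data.replace(old, new))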
I'd recommend using 0.3H6, the last official S1 release. For a while you should find a 0.3H dedicated server running with 15 slots, voting and select map enabled.
* I say old days, this is of course not as far back as you can go. It is the most requested, non-current, client for unlock over the last few weeks though.
When this respect is broken it can often lead to ill feeling within the community. The question of scale is often small, but naturally, if it reoccurs often, it can kill a good project for no “real” reason. Within the Open Source community this arguably isn't a big issue, as people pick up a project because it does something they want or need. A commercial project, on the other hand, is better likened to a band without any fans. Without them, they're just weirdos on a stage making cocks out of themselves.
Investing personal time in a commercial project is noble, but I'm personally finding it harder to continue to do this due to my peers and others involved in a certain project.
Respect is such a little thing and simply restructuring a phrase can convey it. No real effort required, but the rewards can be astonishing.
Common sense perhaps? Then why is it increasingly disappearing? Or is it?
As good as this guide is (obviously, as I wrote it :P), there are a few issues that I feel need to be addressed, predominantly;
“Oh”, I hear you say. Many experienced sysadmins dislike running Wine on a Linux server because of the associated bloat (i.e. an X server), which is unnecessary. To avoid this it's possible to compile your own version of Wine without X support. This does restrict you to running programs under wineconsole, but in the circumstance of running a dedicated Live For Speed server, this does not matter.
Simply download and compile the latest version of Wine;
./configure
make depend
make
make install
As usual, make install will need to be performed as root (or equivalent). If your build machine has the X development headers installed, you may need to tell configure to skip X explicitly (recent versions accept --without-x). You will also need to ensure that the latest version of the ncurses development package is installed. Under Debian this is libncurses-dev.
Now simply invoke your server, making sure that /dedicated=invisible is set within your configuration file (in this instance, setup.cfg);
wineconsole --backend=curses LFS.exe /cfg=setup.cfg
This will cause Wine to complain about the lack of an X server, but it will run without issue. To ensure the server continues to run after you disconnect, start it under a screen session, or nohup.
Overall it felt like quite a sad occasion, as the mini-ITX box had served me well and saved my bacon in a number of situations. It's also somewhat disturbing to find yourself missing the low hum of a computer, and getting increasingly pissed off at the ticking sound of your analog clock.
For those who are interested, the various boxes had previously run (in order) Gentoo, Win2000 (briefly), and then, from the latest hardware: FreeBSD (very briefly), Debian stable, Ubuntu 4.10 (yes, that was before the “server edition”), Debian unstable and finally Debian testing. Out of all of them I have to say that Debian testing was the most enjoyable to work with. Nice and stable, despite its name, and reasonably up to date. I've only ever had a problem when a kernel upgrade went a little wonky.
So now that Zinc has taken over (yes, a brand new naming scheme from now on), I'm somewhat perplexed as to what to do with the innards of Ezra. My primary thought was to turn it into a MythTV or Freevo box, but I fear that the onboard MPEG decoder, even paired with a hardware MPEG encoder, won't be enough. My other thought is to turn her into a CarPC, or an all-in-one unit (complete with screen), and use her to take from site to site. As usual, any suggestions are welcome :) Either way, I'd dread having to give her up after all we've been through (sad, isn't it?).
My current level of fitness is also something that scares me. Don't get me wrong, I'm not fat; in fact I'm still underweight for my height (probably about right for my build), but my fitness is questionable, to say the least. Late summer this year I started going swimming with my girlfriend, but we slowly stopped going together, and going by myself was rather boring. Actually, that's a complete and utter lie: I was intimidated by my fear of getting into that pool alone. So there's something I want to work on. Perhaps jogging early each morning?
I'm also going to try to stop pushing my friends away. Over the last few months I've recognised that I'm increasingly alone, and that without my girlfriend I wouldn't have any one else, outside of work, that I regularly see. Sometimes I enjoy it, and even crave it. But other times, I just cannot bear it. Working from home has put this into perspective. Granted the guys and gals I work with really go out of their way to stop these sort of problems, and we've got meetings / outings booked up well in advance for the coming year; but sometimes it's just not enough.
As sad as it may seem, I'm going to try and spend a little less time with some members of my family over the next few months. I've realised I'm trying to fill in for my Dad, but it's only making me unhappy as I just can't do it without thinking that he'd be disappointed in me. His perspective radically changed when my Mum passed away and I truly didn't understand, until this evening, just how much he was right. You need to live life as you want to, and make sure you enjoy it. He spent his life looking after people to the point of not living for himself, which he just couldn't sustain over the years; which only caused disappointment to others in the end. I can only hope that I can be half as kind, but not take it to extremes.
So there are my resolutions. I was hoping to pop a [lame] screen resolution gag in there as well, but considering the max my 2 screens can get to is 2560x1024, and that I'm currently running at that, it would be rather tricky.
I was talking to a friend in a similar situation from another company, and it appears he didn't know how to resolve the issue. As such I present the no-reboot solution:
1. Open a command prompt and change directory to %WINDIR%\pchealth\helpctr\binaries
2. Run the following commands:
start /w helpsvc /svchost netsvcs /regserver /install
net start helpsvc
And people wonder why I want to move out of this country. The question is: would it be worth it? Are we likely to see the rest of the world following in our collective apathetic footsteps?
Anyway, I decided tonight would be a good time to fix it. Amy's off dancing with her mum, and other than wanting to play some games, it seemed like an ideal time. Having never actually used Exim4, Courier, and SpamAssassin, it seemed like an ideal thing to waffle on about.
Install what you need. As usual, apt-get is king.
apt-get install exim4-daemon-heavy courier-imap spamassassin spamc sa-exim
Obviously accept the various dependencies; for exim4, select split (multiple file) configuration and “internet site”. Otherwise leave everything as default.
Make your Maildir (I prefer this over mbox), using Courier's maildirmake command:
maildirmake ~/Maildir/
Append the following to /etc/exim4/update-exim4.conf.conf:
dc_localdelivery='maildir_home'
Edit /etc/default/spamassassin and set ENABLED=1.
Generate the new config:
update-exim4.conf
and if that went through without any errors, check exim:
exim4 -bV
and then (re)start it:
/etc/init.d/exim4 start
Test Exim is working, just to make sure:
exim4 -bt user@localhost
exim4 -v AnExternalMailAccount@Domain.TLD
From: user@localhost
To: AnExternalMailAccount@Domain.TLD
Subject: Test

Test
.
If you want to be particularly shitty and reject exe's, com's, bat's, etc. then add the following to /etc/exim4/conf.d/acl/40_exim4-config_check_data, before the final “accept” line:
deny message = Serious MIME defect detected ($demime_reason)
     demime = *
     condition = ${if >{$demime_errorlevel}{2}{1}{0}}
deny message = This server will not accept certain file attachments. Please resend it as a compressed archive.
     demime = bat:btm:cmd:com:cpl:dll:exe:lnk:msi:pif:prf:reg:scr:vbs
Edit /etc/exim4/sa-exim.conf, and change the line
SAEximRunCond: 0
to
SAEximRunCond: 1
Now let's add support for virtual domains, the old-fashioned way.
mkdir /etc/exim4/virtual
Create a set of files, one for each of your domains:
touch /etc/exim4/virtual/yourdomain.tld
In each file, add the various aliases. The format is localpart: localuser@localhost, and wildcards are accepted:
postmaster : user@localhost
(Note that for Exim to actually consult these files you'll also need a router that looks them up; that part is beyond what I'm covering here.)
One final thing to remember is to create the Maildir in /etc/skel, and possibly a .forward, which can apparently contain user-defined filtering rules (and can be surprisingly powerful). Note that an Exim filter file needs to begin with the line "# Exim filter":
# Exim filter
if $h_X-Spam-Status: CONTAINS "Yes" or $h_X-Spam-Flag: CONTAINS "Yes"
then
  save Maildir/.Junk/
  finish
endif
Having gotten past the hard bit, it was time to play with RoundCube. As with most PHP scripts (yes, I know I've been trying to get rid of them, not add more), all you need to do is read INSTALL. It's very straightforward. As for how RoundCube works… I'm not entirely sure if I like it. The interface is pretty good, but I have to say it's not quite as slick as Outlook Web Access (which, if you ignore the whole IE-only proprietary experience thing, isn't too bad), Zimbra or Hula, and it's missing some features; but it's definitely getting there. Most certainly much better than Horde IMP or Squirrelmail, by a long shot. I'll give it a few weeks, and see how things go.
So what's brought all this to the forefront of my mind? Getting caught, face on (whilst sitting in a static queue of traffic), as some leather-clad biker in the other lane was speeding in a 30mph zone. Granted, this isn't a big issue, but can we trust those who monitor us?
Fear ECHELON. I have little doubt that it exists, in some form. I know for a fact that the constant video speed cameras send their data to a central repository. But does it get intercepted and are our current levels of encryption sufficient for the near future, or even now?
Paranoid yet? Or are you still annoyed about the speed cameras?
php.net/cunt -> php.net/count
php.net/fuck -> php.net/faq
php.net/dildo -> php.net/delete
php.net/nude -> php.net/newt
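Behind the scenes this is presumably some sort of fuzzy match against the list of documented names. A rough sketch of the idea in Python (difflib here purely for illustration; I've no idea what php.net actually uses):
import difflib

# a tiny, illustrative slice of the names php.net might match against
slugs = ["count", "sort", "delete", "newt", "faq", "str_replace"]

def suggest(word, cutoff=0.4):
    # return the closest-looking slug, or None if nothing is close enough
    matches = difflib.get_close_matches(word, slugs, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suggest("cunt"))  # count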
I'm so tempted to google bomb at least one of those, the childish person that I am.
I apologise to anyone I've offended with rude language, and hereby incriminate Theo as the source of said childishness.
The name and concept seem inspired by Qt, or perhaps wxWidgets, and it's quite robust. Hardly pretty at this stage, and I've not yet hacked anything together, but quite cool. Especially if you insist on using C++ to develop your websites.
Edit: Paul / Chip has a very good point on using Wt;
Given my frustrations in the past, in dealing with ultimatums from designers (despite implementing templated systems), I can fully understand Chip's point… Things never look as good as they initially do. Arses.
On the plus side, Chip appears to be trying his hand at a web toolkit of his own. It'll be interesting to see what he throws together.
This article, by Adrian Sutton, perhaps states the obvious, but it's a good read and probably essential if you're thinking about refactoring any projects in the near future.
First off I met a really nice social worker from Africa who was getting her clutch done, and shortly after, whilst I was struggling with my stuff in the Ikea car park, a really nice Frenchman helped me get my remaining purchases into the car. So not only did I get a warm-fuzzy-feeling, but the second Frenchman I've “met” was also nice (for reference, the other is kNo (Bruno Bord), from #lugradio on freenode). Perhaps the reputation the English have given them is somewhat unfair.
In fact, make that Sir Gnarly Hat Gene ;)
So download away and piss yourselves laughing at the classic British comedy.
It goes into a really rough overview of how proprietary software KeyGens are created; beware, it assumes a reasonable amount of knowledge of ASM, and some limited C. If you don't know ASM, or even what it looks like, you could be confused. It's certainly one of the more interesting articles out there; definitely more interesting than the usual “how to rip a DVD” fare.