LXC: Linux Containers

I’ve been toying with migrating my server into a containerized system, and almost bought a new server in preparation for migrating everything across. I’d chosen and tested my solution, OpenVz. All was good with the world. Until I saw that OpenVz was effectively being dropped from Ubuntu 10.04 (Lucid Lynx) and most likely Debian 6.0 (Squeeze).

The reason for the drop is simply that the OpenVz patches haven’t been forward-ported to the current kernel. Now that’s obviously a bit of a problem: since I prefer Debian, having no straight dist-upgrade path from Lenny to Squeeze would be slightly annoying. To put it mildly.

It turns out that a few people realised this was going to be equally annoying should such a thing occur, and they’ve been working on LXC (Linux Containers) and got the implementation into the mainline kernel tree. However, there are a few issues with LXC.

The OpenVz team seem to be updating the OpenVz patches for a more recent kernel; however, the decision by a few distros to migrate to LXC might have the same sort of effect on OpenVz’s popularity that the migration to KVM had on Xen’s.

Anyway, here’s my quick start guide:

  1. You need a relatively recent kernel with control groups (cgroups) and capabilities support. If you’re not planning on rolling your own kernel, the current kernel in Squeeze or Lucid should be fine.
  2. Use the package manager of your choice to install lxc, bridge-utils and debootstrap. That should pull in everything you need for a basic install (if you want to run a Debian-based container).
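
    On Debian or Ubuntu, for instance, that’s something along the lines of

    apt-get install lxc bridge-utils debootstrap
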
  3. Run lxc-checkconfig and you should see that everything is enabled and ready to go, with the exception of the cgroup memory controller, which LXC doesn’t appear to require.
  4. So now we need the control groups pseudo-filesystem set up

    mkdir /cgroups

    and mounted (add the following to /etc/fstab

    cgroup /cgroups cgroup defaults 0 0

    and run mount).
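
    Once it’s mounted you can sanity-check it with something like

    grep cgroup /proc/mounts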

  5. Next, choose how you’re going to network your containers. In both of these examples br0 is the device that the containers will be attached to. Aside from these two options there are other ways you could do it, but honestly I can envisage these being the most common.
    • You could attach them straight onto your network by creating a bridge that includes eth0 (for example), which your containers will then attach to. In this example your /etc/network/interfaces might look something like

      auto lo
      iface lo inet loopback
      
      auto br0
      iface br0 inet static
          bridge_ports eth0
          bridge_maxwait 0
          address 192.168.0.3
          netmask 255.255.255.0
          gateway 192.168.0.1

    • Alternatively you could create a dummy network device, add that to the bridge, and then attach your containers to that, and either NAT (using iptables) or route to your containers. Or just keep them on their own; it really depends on what you want to do. I chose to add a dummy network device and route between the two (there’s a rough sketch of the routing side after the interfaces example below). The host node’s interfaces in this example could look like

      auto lo
      iface lo inet loopback
      
      allow-hotplug eth0
      iface eth0 inet static
          address 192.168.0.3
          netmask 255.255.255.0
          gateway 192.168.0.1
      
      auto br0
      iface br0 inet static
          bridge_ports dummy0
          bridge_maxwait 0
          address 10.10.10.1
          netmask 255.255.255.0
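
      For completeness, here’s a rough sketch of the routing/NAT side of that, assuming the bridge above and the 10.10.10.0/24 network. The iptables rule is only needed if you want to NAT rather than route.

      # Load the dummy module if dummy0 doesn't already exist
      modprobe dummy
      # Let the host node forward packets between br0 and eth0
      echo 1 > /proc/sys/net/ipv4/ip_forward
      # Only if NATing rather than routing: masquerade container traffic
      # as it leaves via eth0
      iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE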

  6. Now you need to create the root filesystem for the container. debootstrap or febootstrap, etc. are your friends. I chose to keep my containers in /var/lxc/guests. So for my first container (called “one”, which is a basic Lenny install)

    debootstrap lenny /var/lxc/guests/one

    You’ll need to ensure that /etc/resolv.conf is set up correctly under /var/lxc/guests/one, as is /etc/network/interfaces. For resolv.conf you can most likely just copy the host node’s. The interfaces file just needs lo and eth0 set up with the correct IPs; something like the sketch below.
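
    For example, if you’re routing via the dummy bridge from earlier, the container’s /etc/network/interfaces might look something like this (10.10.10.2 is just an arbitrary pick from that subnet)

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1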

  7. Next you need to create an LXC config. This file is pretty much full of voodoo and dragons. I’ve saved this one under /var/lxc/guests/one.conf

    # Hostname
    lxc.utsname = one
    # Number of TTYs to allocate to the container
    # Relies on some lxc.cgroup.devices settings
    lxc.tty = 4
    # Networking type
    lxc.network.type = veth
    # State of networking at boot
    lxc.network.flags = up
    # Bridge you want to attach to
    lxc.network.link = br0
    # Internal container network interface name
    lxc.network.name = eth0
    lxc.network.mtu = 1500
    # Address you intend to give the container
    # It doesn't seem to matter much as far as I can tell
    lxc.network.ipv4 = 192.168.0.3/24
    # Location of the root, from within the host node
    lxc.rootfs = /var/lxc/guests/one
    # Lots of stuff I've not fully yet looked into, but attempted to make
    # some intelligent guesses about
    lxc.cgroup.devices.deny = a
    # /dev/null, /dev/zero
    lxc.cgroup.devices.allow = c 1:3 rwm
    lxc.cgroup.devices.allow = c 1:5 rwm
    # TTYs
    lxc.cgroup.devices.allow = c 5:1 rwm
    lxc.cgroup.devices.allow = c 5:0 rwm
    lxc.cgroup.devices.allow = c 4:0 rwm
    lxc.cgroup.devices.allow = c 4:1 rwm
    # /dev/urandom, /dev/random
    lxc.cgroup.devices.allow = c 1:9 rwm
    lxc.cgroup.devices.allow = c 1:8 rwm
    # /dev/pts/* - Seems to be unused at this point?
    lxc.cgroup.devices.allow = c 136:* rwm
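    # /dev/ptmx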
    lxc.cgroup.devices.allow = c 5:2 rwm
    # No idea
    lxc.cgroup.devices.allow = c 254:0 rwm

  8. Next, create the container

    lxc-create -n one -f /var/lxc/guests/one.conf

  9. And now you can start it. I’ve discovered that you need to set the container to be daemonised, otherwise lxc-start will never return:

    lxc-start -n one -d

    To attach to your container just run

    lxc-console -n one

    To stop it

    lxc-stop -n one

  10. You will notice at this point that lxc-ls returns two lists. The first is the list of available containers, and the second the list of currently running containers.

This is obviously by no means a definitive guide; it’s just what I’ve done this evening to get things up and running. I’ve not yet tried getting IPv6 working into the containers, mostly because I wanted to try the v4 networking in a few different ways, and it’s now bedtime. Looking at the docs, though, it shouldn’t be all that tricky and should work straight out of the box - something that OpenVz doesn’t manage in all circumstances (the same can be said for Linux Vservers).

As for whether or not it’s worth it... Let’s just say it’s not been unpleasant.