Sunday, November 25, 2012

File transfer performance on Synology NAS

Since I've seen a bunch of questions on various NAS forums about data transfer performance, here's a quick post on the throughput I observe transferring data from a Win 7 box (with the source data on SATA3 external storage) to a Synology DS1511+ NAS. Below are a couple of simple screen grabs taken about half an hour into a full back-up using Synology DataReplicator 3. The Win7 machine and NAS are connected via gigabit ethernet to the same router. No jumbo frames or trunking.

Fig 1.  DSM 4.1 monitoring during backup
 
Fig 2. Windows 7 monitoring during backup.
 
I found that enabling jumbo frames on my Win7 machine killed any connection to the NAS, despite the NAS having jumbo frames enabled. I think (but haven't confirmed) that the router doesn't support jumbo frames, which may therefore be the cause of the issue. Having said that, it's not clear how much additional performance you could milk out of such a configuration anyway.
 
The DSM Resource Monitor's sampling frequency appears lower than the Windows monitor's, which may go some way to explaining why the transfer looks choppier as seen from the NAS. Nevertheless, sustained transfers average about 80MB/s, which will get through most home network transfers pretty quickly. I have noticed on some lengthier transfers that throughput can (infrequently) drop off markedly for a time before returning to about 80MB/s, but I haven't investigated why.
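For context, the observed figure can be sanity-checked against the wire speed with some back-of-envelope arithmetic (a rough sketch only; real-world throughput also loses a little to TCP and protocol overhead):

```shell
# Back-of-envelope: how close is 80 MB/s to gigabit ethernet's ceiling?
# 1 Gbit/s = 1000 Mbit/s; divide by 8 bits/byte for the MB/s ceiling.
LINK_MBIT=1000
CEILING=$((LINK_MBIT / 8))          # 125 MB/s, before protocol overhead
OBSERVED=80                         # sustained rate seen in the graphs
UTIL=$((OBSERVED * 100 / CEILING))
echo "ceiling ${CEILING} MB/s, link utilisation ${UTIL}%"
```

So 80MB/s is roughly two-thirds of the theoretical ceiling - respectable for a single backup stream without jumbo frames.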
 

Sunday, November 18, 2012

Backing up Ubuntu using Deja-Dup (inc. system files)

When I got the NAS set up on the network, I started playing around with Deja-Dup, the backup utility for Ubuntu. Previously I'd never backed up my Ubuntu installation, but with the NAS now online it made sense to try it out. Note that Deja-Dup is a file-based back-up utility; if you want low-level backups of your partitions, consider using dd together with the relevant device files as described here. Be cautious with these low-level techniques: you really need to know what you're doing, and you can do untold damage if you don't.
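To make the dd mechanics concrete, here's a harmless sketch that images a temporary file instead of a real disk. For an actual partition backup you would substitute a device file (e.g. /dev/sda1, run as root) for the source and point the output at the NAS - the paths below are deliberately fake so nothing real gets touched:

```shell
# dd copies raw bytes from if= (input) to of= (output); a low-level
# partition backup is the same operation with a device file as input.
SRC=$(mktemp)     # stand-in for a device file such as /dev/sda1
IMG=$(mktemp)     # stand-in for an image file on the NAS
printf 'raw partition bytes' > "$SRC"
dd if="$SRC" of="$IMG" bs=4M 2>/dev/null
cmp -s "$SRC" "$IMG" && echo "image is a byte-for-byte copy"
```

Getting if= and of= the wrong way round is exactly how the untold damage happens, so triple-check them before running against real devices.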

Initially I ran the utility under my own account but noticed that a number of (system) files didn't back up successfully due to a lack of permissions, so I determined that running the backup in two parts was the way to go: one as root for purely system files (NB: see the edit at the end of this post), and the other under my regular user account for anything of personal import.

Deja-Dup needs to be set up for the root user. You can do this by logging in as root, if you've enabled such a login in GNOME. More likely you haven't (Ubuntu doesn't enable this by default), so use gksu as follows (either should work):

user@host: gksu gnome-control-center deja-dup
or
user@host: gksu deja-dup-preferences

Then configure the storage location. Mine is NFS mounted like this:

/mnt/nas/Backup/DejaDup.hostname.user

Then add/remove directories as required. To keep this as a system-specific backup I've pruned a few directories from the generic file system layout, but you can always refine this further:

Included:
/
/root


Excluded (this list includes the kernel pseudo-filesystems that expose kernel data structures and shouldn't be backed up, as well as locations that would never need to be restored or are covered by the non-system user backups):
/dev
/home
/mnt
/media
/proc 
/run
/sys
/tmp


A frequency of weekly and retention period of 6 months should be fine considering how often changes occur to the OS through the Ubuntu software updater and LTS releases. On my vanilla Ubuntu 12.10 desktop installation with NFS mounted backup dir running over gigabit ethernet, the backup took about 7 minutes and generated 1.2GB of backup files.

You can now run the Backup utility under your regular user account to cover off any personal files that you care about.

A final note regarding NFS mounts of your backup location: I encountered permission issues writing to the backup target. You may need to investigate how the NFS server sets permissions at the target end to ensure everything works smoothly. In my case, root on the NFS client host was mapped to a different id on the NFS server, which you may need to override on the server side.
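This behaviour is NFS root squashing: many NFS servers map a client's root to an unprivileged user by default. On a stock Linux NFS server the export option that disables it is no_root_squash - an illustrative /etc/exports line (the path and subnet here are assumptions for your own layout) would be:

```
/volume1/Backup   192.168.1.0/24(rw,sync,no_root_squash)
```

Synology's UI exposes an equivalent root-mapping ('squash') choice under the shared folder's NFS privileges, which is the safer place to change it on the NAS. Note that no_root_squash has security implications, so scope the export to as narrow a client range as you can.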

CORRECTION: This doesn't automate backups for root or other users. Backups only seem to run for a user while that user is logged in.

I logged in to double-check that the back-ups were working as expected for root (per the description above). Whilst the configuration I had created was all still there, the backups weren't actually running automatically - presumably because you need to be logged in for the backup to get triggered. The Deja-Dup dialogue basically said something like "Last backup: 57 days ago". Anyway, the back-end to Deja-Dup is duplicity, a command line utility. Depending on the selected options, Deja-Dup's auto-generated duplicity command line could look like this:

/usr/bin/python /usr/bin/duplicity --exclude=/mnt/nas/Backup/DejaDup.goat-lin.root --include=/root/.cache/deja-dup/metadata --exclude=/proc --exclude=/sys --exclude=/run --exclude=/dev --exclude=/home --exclude=/media --exclude=/tmp --exclude=/mnt --exclude=/sys --exclude=/proc --exclude=/tmp --exclude=/root/.gvfs --exclude=/root/.cache/deja-dup --exclude=/root/.cache --include=/ --include=/root --exclude=** --gio --volsize=50 / file:///mnt/nas/Backup/DejaDup.goat-lin.root --verbosity=4 --gpg-options=--no-use-agent --archive-dir=/root/.cache/deja-dup

Now you can put something similar to the above in cron. (Edited 7/7 to remove some unnecessary options and reduce verbosity.)
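As an illustration, a weekly root crontab entry could look like the following - the schedule, wrapper script name and log path are all my own invented choices, and the wrapper script is assumed to contain the duplicity invocation above:

```
# /etc/crontab fields: minute hour day-of-month month day-of-week user command
0 3 * * 0   root   /root/bin/dejadup-root-backup.sh >> /var/log/dejadup-root.log 2>&1
```

Wrapping the long duplicity command line in a script also sidesteps cron's awkward handling of % characters and long lines.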

Monday, November 05, 2012

DHCP on Synology NAS (dhcpd and dnsmasq)

As an adjunct to my home network DNS configuration, I decided to move DHCP duties across to my always-on Synology NAS (DSM 4.1) and disable the DHCP services on my routers. This was driven by a number of (not particularly compelling) factors; my experience follows. This is as much a story about making mistakes as getting it right, so read through to the end before you start cutting and pasting commands into your live environment.

As Synology offers a DHCP server package, I installed it on the base installation via the DSM Package Center utility. A new icon appears in your apps list and the Control Panel->Network applet gets a new 'DHCP Server' button. A form asks you to configure the key details for your DHCP server, including primary and secondary DNS, domain name, lease time and so forth. You can also configure subnets, static IP mappings and other details. Neither of my routers supports a particularly sophisticated DHCP management interface, so this alone is a fairly compelling reason to move away from letting a vanilla consumer router provide DHCP services on your network.

So the moment of truth arrives. Knowing that my DNS configuration on the NAS works, I switch on DHCP on the NAS and disable it on the routers. Needless to say, there were issues. The DHCP server on the NAS was broken, but it wasn't obvious why, and now my devices were having trouble getting onto the network (including the admin interfaces to my router and NAS...).

Running 'ps | grep dhcpd' shows that dhcpd is not running. This is despite no warnings or errors being issued by the Synology UI. The first stop is /var/log/messages for any clues that might have been left.

Nov  3 21:42:18 dhcpserver.cgi: net_get_dhcp_server_conf.c:164 File [/etc/dhcpd/dhcpd-bond0-bond00.conf] does not exist.
Nov  3 21:42:18 dhcpserver.cgi: dhcpserver.cpp:130 Cannot read Global setting on bond0 bond00
Nov  3 21:42:19 dhcpserver.cgi: dhcpserver.cpp:317 Can not open DHCP static file
Nov  3 21:42:19 dhcpserver.cgi: dhcpserver.cpp:410 Can not open DHCP lease file
All the files that are claimed not to exist are clearly extant:
dns> ls /etc/dhcpd
dhcpd-bond0-bond00.conf  dhcpd-static.conf        dhcpd.conf               dhcpd.info

I then tried to hunt down the init.d management scripts and dhcpd binary, but these were nowhere to be found on the system: I had led myself up the garden path by looking for the wrong thing. You can install regular old dhcpd via ipkg install dhcp if you like, but it won't work with Synology's web management UI - at least not without some hackery. It didn't initially twig for me that under the hood Synology's DHCP Server uses dnsmasq ("a lightweight, easy to configure DNS forwarder and DHCP server"). Whilst I had seen references to dnsmasq, I didn't know precisely what it was, and it wasn't until I dug around in places like /etc/rc.network and recalled error entries in /var/log/messages like the below that it made sense that dnsmasq was in use - and why it wouldn't run.

dnsmasq[22230]: failed to create listening socket for port 53: Address already in use
dnsmasq[22230]: FAILED to start up

If you had previously installed the DNS package, it will be listening on port 53 and will prevent dnsmasq from starting. You may also have noticed that although Package Center reports the DHCP service as running, every time you navigate back to Control Panel->Network->Network Interface->DHCP Server, the 'Enable DHCP Server' check box under the General tab is unchecked - even if you saw the 'Settings applied' message after clicking Apply - implying (to me at least) that it wasn't actually running.

If you're running named, kill it:
/opt/etc/init.d/S09named stop

Under /var/log/messages you may also see something like:
dhcpserver.cgi: dhcpserver.cpp:410 Can not open DHCP lease file

There is a post on the Synology site that has a simple remedy for this:
touch /var/packages/DHCPServer/target/etc/dhcpd.conf.leases
touch /var/packages/DHCPServer/target/etc/dhcpd-leases.log

On my NAS only dhcpd-leases.log was missing, so I ran just the second command, and dnsmasq came up cleanly via the Synology UI. Check that it's running:

dns> ps | grep dnsmasq
20315 root      1620 S    dnsmasq --user=root --cache-size=200 --conf-file=/etc/dhcpd/dhcpd.conf --dhcp-lease-max=2147483648
20316 root      1620 S    dnsmasq --user=root --cache-size=200 --conf-file=/etc/dhcpd/dhcpd.conf --dhcp-lease-max=2147483648

A few more tips here:
  • within the Synology UI, ensure that the row(s) you have entered in the table headed 'Subnet list' have green check marks next to them. The check box doesn't seem to be ticked by default, and the server will not work while it's unticked.
  • (obviously) you need to turn off other DHCP server(s) on your network as appropriate. Chances are your router is running DHCP which will interfere with things.
  • consider how to manage addresses on your network. There are going to be a number of hosts that you will probably want to have static IP addresses (NAS, routers, other servers), but the rest can be dynamic. Use start/end addresses and reserved addresses to ensure you have enough addresses of each type and don't overlap between the static and dynamic address pools.
  • testing out the dnsmasq DNS server shows that it works much like the old named set-up; only a few minor updates to /etc/hosts are needed to capture the static addresses and hosts on your network. This is easier than mucking about with forward and reverse lookup files in BIND, IMHO.
  • after config changes, restart dnsmasq (or the DHCP Server under Package Center)
  • the Synology implementation looks to regenerate /etc/dhcpd/dhcpd.conf when you make changes in the DSM UI. dnsmasq supports a lot of options, so you may need to look into how to preserve any extended config you intend to remain persistent.
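To illustrate the /etc/hosts approach (the names and addresses below are examples only): dnsmasq reads /etc/hosts by default and serves both forward (A) and reverse (PTR) records from each entry, so a couple of lines like

```
192.168.1.1   router.mydomain.net   router
192.168.1.2   dns.mydomain.net      dns
```

replace the corresponding forward and reverse zone-file records you would otherwise maintain by hand in BIND.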
[Edit 25/11/2012]: I upgraded to DSM 4.1-2661 which caused some silly issues.
  • I hadn't disabled my named package. When the NAS upgrade completed and the system rebooted, my old named config was still lying around, which meant /opt/etc/init.d/S09named was called and prevented dnsmasq from starting (both need port 53). All DNS and DHCP services were down as a result - and therefore no internet access until this got fixed - made all the more annoying because none of my computers could get on the network without manual intervention.
  • /etc/hosts looks like it got touched during the upgrade. I have no proof, but some static host/IP entries configured in this file appear to have disappeared. When I re-added them and restarted dnsmasq, these hosts resolved properly on the network again.

Sunday, November 04, 2012

DNS configuration for your home network

Installing BIND on the NAS

 
I found some documentation on how to set up BIND (DNS) on a Synology NAS running Linux, but as I ran into some problems I thought I'd document them here on the off-chance someone finds it a useful reference. [Edit]: Other (simpler and in many ways better) ways of doing this exist.
 
My starting point was here, but I quickly found that the more comprehensive documentation here was also useful. BIND configuration notwithstanding, the installation of the BIND package on the Synology NAS (DSM 4.1) was not without issues.
 
You start by installing the BIND package:

DiskStation> ipkg install bind
Installing bind (9.6.1.3-4) to root...
Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/bind_9.6.1.3-4_i686.ipk
Installing openssl (0.9.8v-2) to root...
Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/openssl_0.9.8v-2_i686.ipk
Installing psmisc (22.17-1) to root...
Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/psmisc_22.17-1_i686.ipk
Installing ncurses (5.7-1) to root...
Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/ncurses_5.7-1_i686.ipk
Configuring bind
Running post-install
You must now create your named.conf file
When it is installed in /opt/etc/named/named.conf, execute
/opt/etc/init.d/S09named start
to start service
You will probably also want to create rndc.conf by running
rndc-confgen. Of course, you may want to set your name server
in the GUI to 127.0.0.1 or your local ip-address
Configuring ncurses
update-alternatives: Linking //opt/bin/clear to /opt/bin/ncurses-clear
Configuring openssl
Configuring psmisc
update-alternatives: Linking //opt/bin/killall to /opt/bin/psmisc-killall
update-alternatives: Linking //opt/bin/pidof to /opt/bin/psmisc-killall
Successfully terminated.

I created /opt/etc/named/named.conf and related files per my desired set-up and tried to start the daemon:

DiskStation> /opt/etc/init.d/S09named start
Starting DNS Services: /opt/bin/pidof: error while loading shared libraries: libssp.so.0: cannot open shared object file: No such file or directory
started

The forums will tell you that you need gcc installed to get access to this library, so go ahead and install it. Before you do, however, make sure root's PATH environment variable has /opt/bin and /opt/sbin at the START (that is, edit and source ~/.profile):

PATH=/opt/bin:/opt/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin

Otherwise you will encounter errors like the ones below:

DiskStation> /opt/bin/ipkg install gcc
Installing gcc (4.2.1-5) to root...
Downloading
http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/gcc_4.2.1-5_i686.ipk
file_move: ERROR: failed to rename /opt/ipkg-Ys4oOA/gcc_4.2.1-5_i686.ipk to /opt/ipkg-Ys4oOA/gcc_4.2.1-5_i686.ipk: No such file or directory
Nothing to be done
An error ocurred, return value: -1.
Collected errors:
Failed to download gcc. Perhaps you need to run 'ipkg update'?

It appears that there are at least two wget binaries installed on the system and the Synology version doesn't work with ipkg.
/usr/syno/bin/wget (GNU Wget 1.10.1)
/opt/bin/wget (GNU Wget 1.12)

DiskStation> ipkg install gcc
Installing gcc (4.2.1-5) to root...
Downloading
http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/gcc_4.2.1-5_i686.ipk
Installing binutils (2.19.1-1) to root...
Downloading
http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/binutils_2.19.1-1_i686.ipk
Installing libc-dev (2.3.6-5) to root...
Downloading
http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/libc-dev_2.3.6-5_i686.ipk
Installing libnsl (2.3.6-4) to root...
Downloading
http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/libnsl_2.3.6-4_i686.ipk
Configuring binutils
update-alternatives: Linking //opt/bin/strings to /opt/bin/binutils-strings
Configuring gcc
Configuring libc-dev
Configuring libnsl
Successfully terminated.

Now let's start the daemon again:
DiskStation> /opt/etc/init.d/S09named start
Starting DNS Services: started

Well this is a lie.
DiskStation> ps | grep named
12730 root      2540 S    grep named

Check /var/log/messages to see what happened:
[...]
Nov  1 15:55:25 named[8642]: dns_master_load: root.servers:40: unexpected end of file
Nov  1 15:55:25 named[8642]: dns_master_load: root.servers:40: unexpected end of input
Nov  1 15:55:25 named[8642]: could not configure root hints from 'root.servers': unexpected end of input
Nov  1 15:55:25 named[8642]: loading configuration: unexpected end of input
Nov  1 15:55:25 named[8642]: exiting (due to fatal error)

Ok, my fault for incorrectly editing the config files (a couple of the issues look related to a possible bug in which vi's 'open' (o) command incorrectly splits the last character of the line onto a new line), so clean this up and see what else was missed by looking at /var/log/messages again:
[...]
Nov  1 16:21:02 named[12572]: none:0: open: /opt/etc/named/rndc.key: file not found
Nov  1 16:21:02 named[12572]: /opt/etc/named/named.conf:19: couldn't install keys for command channel 127.0.0.1#953: file not found
Nov  1 16:21:02 named[12572]: /opt/etc/named/named.conf:19: couldn't add command channel 127.0.0.1#953: file not found
Nov  1 16:21:02 named[12572]: logging channel 'dns_log' file '/opt/var/log/dns.log': file not found
Nov  1 16:21:02 named[12572]: isc_log_open '/opt/var/log/dns.log' failed: file not found

With those silly mistakes corrected, it is now at least running (some non-fatal complaints notwithstanding):
DiskStation> ps | grep named
12572 root      7324 S    /opt/sbin/named -c /opt/etc/named/named.conf
12730 root      2540 S    grep named

Thank goodness for that. Now let's check if the config works.

dns> nslookup
> server 192.168.1.2
Default server: 192.168.1.2
Address: 192.168.1.2#53
> dns.mydomain.net
[...]
Name:   dns.mydomain.net
Address: 192.168.1.2
> bogus.mydomain.net
[...]
** server can't find bogus.mydomain.net: NXDOMAIN
> router.mydomain.net
[...]
Name:   router.mydomain.net
Address: 192.168.1.1
> r6300.mydomain.net
[...]
r6300.mydomain.net        canonical name = router.mydomain.net.
Name:   router.mydomain.net
Address: 192.168.1.1
> www.google.com
[...]
Non-authoritative answer:
Name:   www.google.com
Address: 74.125.237.82
Name:   www.google.com
Address: 74.125.237.83
Name:   www.google.com
Address: 74.125.237.84
Name:   www.google.com
Address: 74.125.237.80
Name:   www.google.com
Address: 74.125.237.81
> 192.168.1.2
[...]
2.1.168.192.in-addr.arpa        name = dns.mydomain.net.
Cool. All the locally configured hosts are there, non-existent local hosts are not found, and hosts on the internet are resolved. A reverse lookup also works. Same results on the NAS (127.0.0.1) and a Win-7 host (querying 192.168.1.2).
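For reference, the overall shape of a named.conf that produces behaviour like the above is roughly as follows - a sketch only, with the zone names, file names and forwarder address as placeholders rather than my actual config:

```
options {
    directory "/opt/etc/named";
    forwarders { 8.8.8.8; };      // upstream resolver(s)
};

zone "mydomain.net" IN {
    type master;
    file "mydomain.net.zone";     // forward records (A, CNAME)
};

zone "1.168.192.in-addr.arpa" IN {
    type master;
    file "192.168.1.rev";         // reverse (PTR) records
};
```

The root hints file ('root.servers' in the errors above) and the logging/rndc stanzas from the referenced guides slot in alongside these.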
 

Friday, October 26, 2012

Modding your Synology NAS to support DNS and DHCP to manage your home/office network

Intro


I have been hunting around for a way to manage the growing number of devices on my home network. For the most part, I don't care about the host names and IP addresses of my devices. I'm quite happy for DHCP to manage all of this via my router and get on with life. However, I would like to be able to contact core bits of my home network infrastructure via fully qualified names (eg nas.homenetwork.com) rather than remembering a series of IP addresses. I think that I probably need to set up a local DNS server, but where to run it? To be useful, it would need to be always on. Other than my routers, the only other device that fits the bill is my NAS.

Synology NAS DS1511+


There are many things to like about the Synology NAS (I have a DS1511+). I'm not going to go on about DSM or any of the other packages here; the point is that it's effectively just a Linux box, which got me thinking that someone must have attempted to set up just this sort of thing already. Which, of course, they had.

Once you log into the NAS's admin console (I have DSM 4.1 installed), you can turn on SSH access by checking Control Panel->Terminal->Enable SSH Service. Once logged in using admin credentials, you can see what's running under the hood:

DiskStation> uname -a
Linux DiskStation 3.2.11 #2647 SMP Wed Sep 26 03:17:57 CST 2012 x86_64 GNU/Linux synology_x86_1511+


According to this Synology forum reference, the DS1511+ has the following hardware:

Intel Atom D525 Dualcore (2C/4T) 1.8GHz x86 Processor

I've monitored the NAS under all sorts of workloads and it barely gets taxed, so I'm not too concerned about adding a few extra daemon processes.

There is also a Synology site on how to mod your NAS which is a recommended read before proceeding. The key points are that:
  • you need to get the ipkg utility installed
  • modding may invalidate your warranty
  • back up your data first

Bootstrapping the NAS

Bootstrapping was pretty straightforward and involves running a shell script which installs a few packages that are eventually required to get ipkg up and running:

DiskStation> sh syno-i686-bootstrap_1.2-7_i686.xsh
Optware Bootstrap for syno-i686.
Extracting archive... please wait
bootstrap/
bootstrap/bootstrap.sh
bootstrap/ipkg-opt.ipk
bootstrap/ipkg.sh
bootstrap/optware-bootstrap.ipk
bootstrap/wget.ipk
1216+1 records in
1216+1 records out
Creating temporary ipkg repository...
Installing optware-bootstrap package...
Unpacking optware-bootstrap.ipk...Done.
Configuring optware-bootstrap.ipk...Modifying /etc/rc.local
Done.
Installing ipkg...
Unpacking ipkg-opt.ipk...Done.
Configuring ipkg-opt.ipk...WARNING: can't open config file: /usr/syno/ssl/openssl.cnf
Done.
Removing temporary ipkg repository...
Installing wget...
Installing wget (1.12-2) to root...
Configuring wget
Successfully terminated.
Creating /opt/etc/ipkg/cross-feed.conf...
Setup complete.

Installing BIND on the NAS

[Edit]: I've moved the BIND configuration steps into a new post here since I encountered further issues. If you don't specifically need BIND, just check out the DHCP link below: it covers dnsmasq, which, as I've discovered, is simpler and will probably do what you want.

Installing DHCP server on the NAS

[Edit]: I've moved the DHCP configuration steps into a new post here since I encountered further issues.

Wednesday, October 24, 2012

NFS mounting Synology NAS shares on (fresh) Ubuntu 12.10

An accidental 'make install' in an incorrectly configured dev environment blatted part of my Ubuntu installation. I thought removing and re-adding the affected packages - and, later, an upgrade from 12.04 to 12.10 - might fix it, but I ended up having to do a fresh install. Annoying, but it let me rejig a few things on my dual-boot (Win7/Ubuntu) PC in the process.

This is a minor note: I was trying to mount some shares on my Synology NAS via NFS, but the mount command kept failing, telling me the cause was a 'wrong fs type, bad option, bad superblock' or possibly one of a variety of other reasons. Somewhat surprisingly, it turns out that the desktop version of Ubuntu 12.10 is missing the NFS packages required for this. To correct it:

sudo apt-get install nfs-common

Then you can run something like the following to see your files on the NAS:

sudo mount yournashost.home.net:/volume1/music /mnt/music

Don't forget to update /etc/fstab if you want these mounts brought up each time the system boots. Fields are whitespace-separated:

yournashost.home.net:/volume1/music /mnt/music nfs rw 0 0

Monday, August 13, 2012

Setting up Ubuntu on Ubuntu under KVM

This is an initial installment on setting up and running multiple virtualised operating systems under Ubuntu on my home AMD-based desktop machine.
Why do I want to do this?
  • I've used VMWare and Solaris 10 Zones (and some storage virtualisation on NetApp arrays) over the years but wanted to try out the state of the art in virtualisation in the x86 space
  • I have a project in mind that will likely be deployed into a multi-tier environment and I'd like an easy way to emulate this without buying more hardware.
  • There are other things that can be tried out in a virtualised multi-host environment, including automated deployment in distributed environments and other fun activities.
  • Testing on multiple platforms/distributions should also be possible if the right guest operating systems are created.
  • To get experience in managing virtualised environments - ie creating golden images, managing clones and overlays, starting up and shutting down groups of virtual hosts comprising an integrated system.
  • To determine how to perform configuration management of virtual OS images.
  • I wonder whether there is general value in running the majority of my computing out of a virtualised environment? If I blow something up, or something blows my system up, it's always nice to be able to revert to a previous snapshot of your environment without the time and hassle of going through a full reinstall. Since you can migrate running virtualised systems between hosts, it makes sense that you get a portable set of host(s) to put on a USB stick and take with you wherever you go. What would the limitations be, if any?
Presumably I'll get to some, all or more than the above points in separate posts.

Also, there are a number of different technologies to investigate (Amazon EC2, VMware, Xen, Parallels, LXC, VirtualBox, KVM and others). I won't get through all of them, but will rather focus on those that are non-commercial/FOSS, most relevant to my project, and available on my home hardware and OSes. So I'll start with KVM.


There are some official instructions for KVM here, but I've sourced information from a range of pages on the 'net.

To determine hardware support we need to see if the CPU on the machine supports the required instructions via:


egrep "flags.*:.*(svm|vmx)" /proc/cpuinfo

If anything prints from this command then the CPU provides the necessary support - but this doesn't mean virtualisation is enabled in your BIOS. Check this now (probably under "Advanced CMOS Settings" or similar), or you will see strange results when creating and starting images in the steps that follow.
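As a small refinement of the check above, you can count the capable hardware threads instead of eyeballing raw flag lines (and, if you have it installed, the cpu-checker package's kvm-ok command will also verify the BIOS side):

```shell
# svm = AMD-V, vmx = Intel VT-x; grep -c counts matching lines, i.e.
# the number of hardware threads advertising virtualisation support.
COUNT=$(grep -Ec 'flags.*:.*(svm|vmx)' /proc/cpuinfo || true)
COUNT=${COUNT:-0}   # grep prints nothing at all if /proc/cpuinfo is absent
echo "virtualisation-capable threads: ${COUNT}"
```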


There are also some software prerequisites. KVM has been in the Linux kernel mainline since 2.6.20, and my desktop machine is on 3.2.x, so the kernel-mode support should be fine.

We also need the user space components. You can get these via the Ubuntu Software Centre or:

sudo apt-get install qemu-kvm

This installs a number of packages, and gives you access to a number of command line tools:

kvm, kvm_stat, qemu-ga, qemu-i386, qemu-io, qemu-system-i386, qemu-system-x86_64, qemu-x86_64


From this point there are two ways to create the image:
  1. Build one from scratch (using ubuntu-vm-builder); or
  2. Build one from an ISO


Building an Ubuntu guest OS using ubuntu-vm-builder

Building an Ubuntu guest OS image from scratch using ubuntu-vm-builder requires a package to be installed on the host where you will do the build.

sudo apt-get install ubuntu-vm-builder

Then you run ubuntu-vm-builder to do the work for you. This is great, because it means you can keep the creation scripts under a source code control system and recreate an image exactly - although note that the process takes some time, since the tool pulls packages across the network to build the image. I had a few attempts at building images before I got the options right.


There are some advanced options which I've reproduced from here to show how you can provide quite detailed image build instructions:

ubuntu-vm-builder kvm hardy \
                  --domain newvm \
                  --dest newvm \
                  --arch i386 \
                  --hostname hostnameformyvm \
                  --mem 256 \
                  --user john \
                  --pass doe \
                  --ip 192.168.0.12 \
                  --mask 255.255.255.0 \
                  --net 192.168.0.0 \
                  --bcast 192.168.0.255 \
                  --gw 192.168.0.1 \
                  --dns 192.168.0.1 \
                  --mirror http://archive.localubuntumirror.net/ubuntu \
                  --components main,universe \
                  --addpkg acpid \
                  --addpkg vim \
                  --addpkg openssh-server \
                  --addpkg avahi-daemon \
                  --libvirt qemu:///system ;

For my image, I eventually went with the below to get a headless server. If you want a desktop, you can add --addpkg ubuntu-desktop to the line below. Alternatively, you can connect to the guest OS later and run the following from within the guest, followed by a restart (which you can do with shutdown from within the guest):

sudo apt-get install ubuntu-desktop

The above worked out of the box for me.

gsw@goat:/media/Data/KVM$ time sudo ubuntu-vm-builder kvm precise --mem 256 --domain vm-test1 --dest vm-test1 --hostname vm-test1 --user test1 --pass test1 --components main,universe,restricted --addpkg acpid --addpkg vim --addpkg openssh-server --addpkg avahi-daemon --libvirt qemu:///system
[sudo] password for gsw:
2012-08-13 01:12:08,664 INFO    : Calling hook: preflight_check
2012-08-13 01:12:08,675 INFO    : Calling hook: set_defaults
2012-08-13 01:12:08,676 INFO    : Calling hook: bootstrap
2012-08-13 01:19:38,737 INFO    : Calling hook: configure_os
Extracting templates from packages: 100%
2012-08-13 01:22:19,034 INFO    : Updating certificates in /etc/ssl/certs... 152 added, 0 removed; done.
2012-08-13 01:22:19,036 INFO    : Running hooks in /etc/ca-certificates/update.d....done.
2012-08-13 01:22:19,225 INFO    : invoke-rc.d: policy-rc.d denied execution of start.
2012-08-13 01:22:19,781 INFO    : invoke-rc.d: policy-rc.d denied execution of start.
2012-08-13 01:22:20,008 INFO    : invoke-rc.d: policy-rc.d denied execution of force-reload.
2012-08-13 01:22:20,034 INFO    : invoke-rc.d: policy-rc.d denied execution of start.
2012-08-13 01:22:20,461 INFO    : Creating SSH2 RSA key; this may take some time ...
2012-08-13 01:22:20,589 INFO    : Creating SSH2 DSA key; this may take some time ...
2012-08-13 01:22:20,600 INFO    : Creating SSH2 ECDSA key; this may take some time ...
2012-08-13 01:22:20,748 INFO    : invoke-rc.d: policy-rc.d denied execution of stop.
2012-08-13 01:22:20,750 INFO    :
2012-08-13 01:22:20,750 INFO    : Warning: Fake initctl called, doing nothing
2012-08-13 01:22:20,751 INFO    :
2012-08-13 01:22:20,752 INFO    : Warning: Fake initctl called, doing nothing
2012-08-13 01:22:22,407 INFO    :
2012-08-13 01:22:22,408 INFO    : Current default time zone: 'Etc/UTC'
2012-08-13 01:22:22,414 INFO    : Local time is now:      Sun Aug 12 15:22:22 UTC 2012.
2012-08-13 01:22:22,414 INFO    : Universal Time is now:  Sun Aug 12 15:22:22 UTC 2012.
2012-08-13 01:22:22,415 INFO    :
2012-08-13 01:22:52,282 INFO    : gpg: key 437D05B5: "Ubuntu Archive Automatic Signing Key " not changed
2012-08-13 01:22:52,288 INFO    : gpg: key FBB75451: "Ubuntu CD Image Automatic Signing Key " not changed
2012-08-13 01:22:52,288 INFO    : gpg: Total number processed: 2
2012-08-13 01:22:52,289 INFO    : gpg:              unchanged: 2
2012-08-13 01:22:53,104 INFO    : invoke-rc.d: policy-rc.d denied execution of stop.
2012-08-13 01:22:54,835 INFO    : invoke-rc.d: policy-rc.d denied execution of stop.
2012-08-13 01:22:55,164 INFO    : invoke-rc.d: policy-rc.d denied execution of start.
2012-08-13 01:22:55,411 INFO    : invoke-rc.d: policy-rc.d denied execution of start.
2012-08-13 01:22:55,578 INFO    : invoke-rc.d: policy-rc.d denied execution of restart.
2012-08-13 01:22:56,133 INFO    : invoke-rc.d: policy-rc.d denied execution of start.
2012-08-13 01:23:04,897 INFO    : Cleaning up
2012-08-13 01:23:04,898 INFO    : Calling hook: preflight_check
2012-08-13 01:23:05,743 INFO    : Calling hook: configure_networking
2012-08-13 01:23:05,775 INFO    : Calling hook: configure_mounting
2012-08-13 01:23:05,780 INFO    : Calling hook: mount_partitions
2012-08-13 01:23:05,780 INFO    : Mounting target filesystems
2012-08-13 01:23:05,780 INFO    : Creating disk image: "/tmp/tmpaHg8oh" of size: 5120MB
2012-08-13 01:23:05,829 INFO    : Adding partition table to disk image: /tmp/tmpaHg8oh
2012-08-13 01:23:06,260 INFO    : Adding type 4 partition to disk image: /tmp/tmpaHg8oh
2012-08-13 01:23:06,260 INFO    : Partition at beginning of disk - reserving first cylinder
2012-08-13 01:23:06,603 INFO    : Adding type 3 partition to disk image: /tmp/tmpaHg8oh
2012-08-13 01:23:06,614 INFO    : [0] ../../libparted/filesys.c:148 (ped_file_system_type_get): File system alias linux-swap(new) is deprecated
2012-08-13 01:23:06,947 INFO    : Creating loop devices corresponding to the created partitions
2012-08-13 01:23:06,963 INFO    : Creating file systems
2012-08-13 01:23:06,972 INFO    : mke2fs 1.42 (29-Nov-2011)
2012-08-13 01:23:07,607 INFO    : mkswap: /dev/mapper/loop0p2: warning: don't erase bootbits sectors
2012-08-13 01:23:07,608 INFO    :         on whole disk. Use -f to force.
2012-08-13 01:23:11,430 INFO    : Calling hook: install_bootloader
2012-08-13 01:23:32,284 INFO    : Removing update-grub hooks from /etc/kernel-img.conf in favour of
2012-08-13 01:23:32,284 INFO    : /etc/kernel/ hooks.
2012-08-13 01:23:32,400 INFO    : Searching for GRUB installation directory ... found: /boot/grub
2012-08-13 01:23:32,474 INFO    : Searching for default file ... Generating /boot/grub/default file and setting the default boot entry to 0
2012-08-13 01:23:32,477 INFO    : Searching for GRUB installation directory ... found: /boot/grub
2012-08-13 01:23:32,487 INFO    : Testing for an existing GRUB menu.lst file ...
2012-08-13 01:23:32,488 INFO    :
2012-08-13 01:23:32,488 INFO    : Could not find /boot/grub/menu.lst file. Would you like /boot/grub/menu.lst generated for you? (y/N) /usr/sbin/update-grub: line 1094: read: read error: 0: Bad file descriptor
2012-08-13 01:23:33,224 INFO    : Searching for GRUB installation directory ... found: /boot/grub
2012-08-13 01:23:33,298 INFO    : Searching for default file ... found: /boot/grub/default
2012-08-13 01:23:33,302 INFO    : Testing for an existing GRUB menu.lst file ...
2012-08-13 01:23:33,302 INFO    :
2012-08-13 01:23:33,303 INFO    : Could not find /boot/grub/menu.lst file.
2012-08-13 01:23:33,303 INFO    : Generating /boot/grub/menu.lst
2012-08-13 01:23:33,426 INFO    : Searching for splash image ... none found, skipping ...
2012-08-13 01:23:33,688 INFO    : grep: /boot/config*: No such file or directory
2012-08-13 01:23:33,851 INFO    : Updating /boot/grub/menu.lst ... done
2012-08-13 01:23:33,851 INFO    :
2012-08-13 01:23:34,060 INFO    : Searching for GRUB installation directory ... found: /boot/grub
2012-08-13 01:23:34,130 INFO    : Searching for default file ... found: /boot/grub/default
2012-08-13 01:23:34,140 INFO    : Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
2012-08-13 01:23:34,366 INFO    : Searching for splash image ... none found, skipping ...
2012-08-13 01:23:34,405 INFO    : grep: /boot/config*: No such file or directory
2012-08-13 01:23:34,584 INFO    : Updating /boot/grub/menu.lst ... done
2012-08-13 01:23:34,584 INFO    :
2012-08-13 01:23:34,638 INFO    : Searching for GRUB installation directory ... found: /boot/grub
2012-08-13 01:23:34,653 INFO    : Calling hook: install_kernel
2012-08-13 01:25:56,510 INFO    : Done.
2012-08-13 01:26:00,092 INFO    : Running depmod.
2012-08-13 01:26:00,152 INFO    : update-initramfs: deferring update (hook will be called later)
2012-08-13 01:26:00,160 INFO    : Examining /etc/kernel/postinst.d.
2012-08-13 01:26:00,161 INFO    : run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-29-virtual /boot/vmlinuz-3.2.0-29-virtual
2012-08-13 01:26:00,163 INFO    : update-initramfs: Generating /boot/initrd.img-3.2.0-29-virtual
2012-08-13 01:26:03,765 INFO    : run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.2.0-29-virtual /boot/vmlinuz-3.2.0-29-virtual
2012-08-13 01:26:03,888 INFO    : Searching for GRUB installation directory ... found: /boot/grub
2012-08-13 01:26:03,952 INFO    : Searching for default file ... found: /boot/grub/default
2012-08-13 01:26:03,963 INFO    : Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
2012-08-13 01:26:04,184 INFO    : Searching for splash image ... none found, skipping ...
2012-08-13 01:26:04,296 INFO    : Found kernel: /boot/vmlinuz-3.2.0-29-virtual
2012-08-13 01:26:04,514 INFO    : Replacing config file /run/grub/menu.lst with new version
2012-08-13 01:26:04,564 INFO    : Updating /boot/grub/menu.lst ... done
2012-08-13 01:26:04,564 INFO    :
2012-08-13 01:26:05,059 INFO    : Calling hook: post_install
2012-08-13 01:26:05,060 INFO    : Calling hook: unmount_partitions
2012-08-13 01:26:05,062 INFO    : Unmounting target filesystem
2012-08-13 01:26:08,404 INFO    : Calling hook: convert
2012-08-13 01:26:08,404 INFO    : Converting /tmp/tmpaHg8oh to qcow2, format vm-test1/tmpaHg8oh.qcow2
2012-08-13 01:26:36,008 INFO    : Calling hook: fix_ownership
2012-08-13 01:26:36,010 INFO    : Calling hook: deploy

real    14m32.066s
user    1m4.068s
sys    0m36.362s
From the output of the time command above you can see that a fair amount of time goes into creating each image. If building images is something you'll come to do a lot, creating a local mirror (or cache) of the relevant bits of Precise should yield much faster build times.
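One low-effort way to get most of the benefit of a local mirror is an apt caching proxy. The sketch below assumes apt-cacher-ng on its default port (3142) and that vmbuilder's --mirror option is pointed at it; adjust the suite and destination to taste — this is an illustration, not the exact command I ran.

```shell
# Install the caching proxy on the build host (listens on port 3142 by default).
sudo apt-get install apt-cacher-ng

# Build the guest, fetching packages through the local cache.
# Subsequent builds are served from the cache rather than the network.
sudo vmbuilder kvm ubuntu --suite precise \
    --mirror http://localhost:3142/archive.ubuntu.com/ubuntu \
    --dest vm-test1
```

The first build still downloads everything once, but repeat builds should be limited mainly by local disk and CPU.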

After including the desktop package here's what you end up with:

gsw@goat:/media/Data/KVM$ ls -al vm-test1/
total 3453704
drwx------ 1 gsw gsw          0 Aug 13 01:26 .
drwx------ 1 gsw gsw          0 Aug 13 01:26 ..
-rw------- 1 gsw gsw 3536715776 Aug 13 21:00 tmpaHg8oh.qcow2

Starting it is as easy as:

kvm -m 256 -smp 1 -drive file=vm-test1/tmpaHg8oh.qcow2 "$@"
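For anything beyond a quick smoke test, a few extra flags make the guest more pleasant to work with. The variant below is a sketch: it assumes the same image file, uses virtio for disk and network (paravirtualised drivers, generally faster than the emulated defaults), and forwards host port 2222 to the guest's SSH port.

```shell
# Same guest, with paravirtualised I/O and SSH reachable via
# "ssh -p 2222 test1@localhost" from the host.
kvm -m 512 -smp 2 \
    -drive file=vm-test1/tmpaHg8oh.qcow2,if=virtio \
    -net nic,model=virtio \
    -net user,hostfwd=tcp::2222-:22
```

The memory and CPU bumps are arbitrary; the point is that the qcow2 image boots identically under either invocation.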

The screen shot at the top of this post shows the end result after logging in using the test1 credentials.

Hardware specs

Some posts will reference this info, so I've put it in one spot. The machine is a couple of years old now, but here are the hardware and host OS specs:

gsw@goat:~$ uname -a
Linux goat 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Memory: 8GB
Processor: AMD Phenom II X6 1090T (hex core), 64bit
OS: Ubuntu 12.04 LTS/Windows 7 Home Premium (Dual boot w/ GRUB)
There is also an SSD boot drive and 1TB DAS SATA storage.

Saturday, August 11, 2012

Hardwood table

In January 2010 I set myself a summer project. Having spent a lot of time working in IT, I felt I needed something a little more hands-on, so I determined to do some woodworking. This is the result, posted about two years after the fact.

Like any good project, it started with a plan. Or rather two rough sketches.



I then went down to a local timber recycler (Urban Salvage) to see what stock might work. All I really knew was that I wanted a nice native hardwood, but beyond that I knew I'd just have to work with whatever the recyclers had on hand. I ended up picking up a load of old Blackbutt floorboards and joists, which I had them cut onsite to fit into my station wagon. This is the raw timber at home.


In an apartment there isn't much space to work, so I made use of our small outdoor patio area for a few weekends until the project was completed. Construction adhesive ("Liquid Nails") was used to fix the pieces of the table top. A couple of 4m tension straps (the sort you use on a trailer to hold down a load) were used to apply pressure to the boards whilst the glue cured. The pine battens and G-clamps prevented the boards from concertinaing whilst the load was applied by the straps. The boards were fixed and set in twos before assembling and gluing all four together.

 

The large tabletop was difficult to work on in a small space, so an improvised workbench was created on the floor from some structural pine.

Masking tape (see leading corner) helped to prevent the power saw from damaging the external edges of the final piece. Note also that the table top is upside down at this point.
This shot shows the legs being prepared for cutting. The masking tape assists in keeping the pieces aligned and importantly the lengths even.
Getting the straight edge (or in this case the spirit level) at right angles is critical and can take some time to finesse. Given the age of this timber, one can also see that the pieces are not straight with respect to one another.
  

 The underside of the table top after trimming and the first two legs.
 
Preparing the second legs. More clamps are better; G-clamps are time-consuming to operate but provide a robust grip. Chocks are used here to compensate for my under-investment in G-clamps.



 
Again the batten alignment and masking tape help ensure a nice, clean cut.





A mock-up of the final piece can be useful to visualise the end result. In this case, the orientation of the rail (thin-edge horizontal) was being assessed.


G-clamps again came in handy to mock-up the final table configuration despite none of the critical joins having been cut. The chair provided proof positive that the dimensions were indeed correct.

 

A mix of the recycled timber and some spare pine provided a realistic idea of how the final piece might come to look since the dimensions more or less matched.


I had no experience with a router before I purchased this one, so I made some simple practice cuts on some pine scraps that were lying around. It quickly became apparent that a simple jig was required to ensure the router could only remove material from the required locations. This jig has the obvious shortcoming that it needs to be recalibrated for each workpiece, but it nevertheless served its purpose for getting some practice in before moving on to the final hardwood pieces.

 
As a step up from the simple jig in the previous slide I constructed the above to ensure the router didn't wander from the marked area on the hardwood offcut I was using to practise preparing the mortise end of the joint.

The marking gauge lines can be seen in the wood here, and the jig has done a pretty good job of keeping the router in the required area. It could have used a little extra length, and about half a millimetre down one side, to reduce manual adjustments with a hammer and chisel later on.

  
This is the final mortise and tenon joint, made from offcuts of Blackbutt. Again, this was a practice piece, but made from the final material, since the pine used during the first trial is by contrast extremely soft.

Some manual work with the chisel got these pieces to fit snugly after the bulk of the work was done with the router. I was relieved that they had come out square and flush, but I'd also invested quite some time in setting things up correctly so this would happen.
 

A macro shot showing the spacing between the tenon cheek and mortise.


Legs all marked and ready for the router.


Legs all routed and ready for chiselling. As you can see from the floor in the background this process creates a lot of sawdust, seemingly more than you might expect from the volume removed from the legs.
 

A simple jig was made to prepare mortise and tenon joints with a router. Close inspection will show some faint markings in the wood made using a traditional marking gauge. These marks indicate where the cuts are to be made.


Part of the table frame after the final cuts had been made.


This shot shows the completed pieces of the table: four legs, two long rails, and two short rails.


Each leg was routed out using a home-made jig and hand-chiselled to make it square. Note again the faint etchings in the wood showing where the marking gauge had been applied to mark out the cutting area. In this shot the square end had been chiselled out; the rounded edge is the result of routing alone.


Finishing was performed with this orbital (finishing) sander. The machine was used with 80 then 120 grit paper, followed by a final hand sanding with 600 and 800 grit wet-and-dry silicon carbide paper.


A mock-up of a mortise and tenon joint was prepared prior to cutting up the 'real' timber. This practice piece was finished using French polish, mixed by hand from garnet shellac flakes. The holes were filled with wax. A piece of the real, unfinished timber sits alongside for contrast.


More French polish on a practice piece, against a fully sanded but otherwise untreated piece of Blackbutt.


A practice piece (left) treated with Danish/Scandinavian Oil, with the piece on the right sanded down to 800 grit and a final wet cloth applied to remove dust and raise the grain.


My father's hand plane was used to remove the machined groove in the timber which was there at the time of purchase. Disassembly and blade sharpening were required before this tool operated efficiently. The sheen of the Danish/Scandinavian-oiled surface can be seen in the table top.


This is so old-school, but I couldn't help taking this shot! It will probably remind many of their fathers' and grandfathers' workshops.

The finished tabletop after two coats of Scandinavian Oil. The product used here has a small amount of urethane added to achieve a satin finish. The major knot and other holes in the final surface were filled using wax prior to oiling.


The table required indoor assembly due to the constrained dimensions of the doorways in our abode. The ends are glued but the brackets can be unscrewed to allow the long rails of the frame to be removed and the table taken outside again.


The frame and tabletop united in the same room for the first time.


The final piece assembled in the dining room.


Blackbutt is a surprisingly hard hardwood. My basic home-handyman drill shredded a drill bit trying to drill the small holes required for the frame. I would encourage punters to use a drill with a manual chuck, as this provides far better grip on the drill bit itself and won't leave bits stuck in the wood.
 

The finished product in place with some very outdated computer equipment.