Thursday, 7 July 2016

Old PC: RAM and CD Drive

My next job for this PC is to get a modern operating system installed. This was hampered by both of the CD drives being non-functional. To compound the problem, the motherboard does not support USB booting, so that avenue for booting an OS installer was not an option.

So, on to repairing the CD burner drive. When I pressed the eject button, the tray would not slide out and would only move slightly. Apparently this is a fairly common fault with older drives, and cleaning a little belt will fix it. There are loads of YouTube videos out there for this problem; here's the one that I used. After doing my cleaning, the drive was back to normal.

The second thing that I looked into was increasing the RAM, which was 1GB of PC2700 non-ECC RAM. More RAM is always better, especially with applications getting more and more RAM hungry as time goes by. According to the specs for my ASUS A7V333 motherboard, the maximum RAM that I can install is 3GB - this is a 32-bit machine after all, and I have three RAM slots. I went through a friend's spare RAM collection and found a spare 1GB PC2700 memory module. Feeling lucky, I installed it in the motherboard and booted up. The RAM was detected without any drama, and the machine now has two 512MB and one 1GB memory modules for a healthy enough total of 2GB RAM.

Choosing an operating system is next on my list; the front runners are Debian and Arch. In the next post I'll hopefully have an operating system installed and will be testing system performance.

Saturday, 25 June 2016

Old PC: Resurrection

I want to get my old PC going again; it has been in storage for the past 6 years gathering dust. It has a 32-bit AMD 2.1GHz processor (AMD rated 3000+) with 1GB RAM and a 160GB hard disk. I think I bought it at the end of 2003 or the beginning of 2004. It's not ancient, but still old enough.

Up to now I have been using a modern and powerful multicore i7 laptop on a daily basis. It suits my needs, but I still miss a desktop for various reasons - I want a large monitor with a high resolution, I want a proper keyboard, and I want a dual monitor setup for coding. But the biggest reason for getting it up and running is just for the hell of it :)

Here's the specs:

CPU: AMD Athlon XP 3000+ (Barton), 32-bit
RAM: 1GB PC2700 DDR-333MHz
Motherboard: ASUS A7V333
Graphics: Nvidia GeForce FX 5950 Ultra
Hard Disk: Seagate Barracuda 160GB, 7200 RPM
Sound Card: Sound Blaster Audigy 5.1 (SB0090)
Speakers: Creative Labs 5.1 surround sound speakers
Network: D-Link 100Mbps network card, and a modem
Floppy: 3.5" internal floppy disk drive
Optical: Medion DVD ReWritable (DVR-106DB)
Optical: Samsung CD-RW 24-10-40 (Model SW-224)

This was a top-of-the-line gaming rig when I bought it, with the best graphics card that I could purchase at the time. The CPU got upgraded at some stage - I think I originally bought the PC with a 2600+ processor and upgraded it later on. The DVD re-writer was added later too.

After booting it up for the first time since 2010, a few jobs immediately presented themselves. I found a dual-boot setup with Windows XP and Ubuntu 9.10 on it, and neither would boot. I don't want to keep either OS, so a clean OS install will fix this. Neither the DVD nor the CD drive works, so they need to be repaired or replaced. I need to get a monitor - the original LG 19" CRT Flatron monitor was a beast, and I think it went to the recycling centre for being too heavy. A cheap wireless network card would be a good investment too.

I'm interested to see how it all pans out - especially when a decent monitor and the 5.1 surround sound speakers are hooked up. I think it will be perfectly usable running GNU/Linux with a moderately light desktop environment - it's not totally ancient after all.

The end game is to have a fast and usable PC with some form of GNU/Linux or BSD running on it. I'll do a couple more posts with some updates on the project as things progress.

Friday, 5 February 2016

Deploying Google Chrome with CFEngine 3

This post describes how to install Google Chrome with CFEngine 3 and provides a ready-made bundle for download, written by yours truly.

Deploying Chrome is fairly straightforward; about the only thing that you need to watch out for is getting the Google apt signing key distributed and installed on each target machine.

To get the key installed onto each of my target machines, I took the approach of downloading the key manually and saving it to a text file on the CFEngine policy hub - CFEngine then replicated this file automatically to each of the target client machines. A promise then executed the 'apt-key add' command to install the key.

This promise bundle also adds the Google deb archive, and performs the Google Chrome install using the built-in apt package manager.

Step 1
Download the promise file from here and save it to your CFEngine policy server.

Step 2
Create a subdirectory called 'files' and save the Google key in a file called chromekey.txt.

The variable $(this.promise_dirname) is used to access the key file from within the bundle. This means the key file always lives in a subdirectory relative to the chrome promise bundle, which makes specifying absolute paths to the key file unnecessary.

That's pretty much it. It will take a little while for google-chrome to install, because CFEngine is usually configured to run apt-get update only once every 24 hours. Allow that much time to elapse before starting to troubleshoot.

Happy CFEngining

Wednesday, 20 January 2016

Troubleshooting CFEngine 3 client machines

Here are a couple of operational tips that I have gathered while running CFEngine 3. This page is mainly about finding out why a client did not get a particular promise applied, so these commands are geared towards being run on client machines for troubleshooting.

Is the service running?

For systemd systems:

  systemctl status cfengine3

For System V init systems:

  /etc/init.d/cfengine3 status

How do I know if all the promises have been kept on a client?

Run the following command on a client machine:

  tail -f /var/cfengine/promise_summary.log

Each entry in this log tells you two things: what percentage of the promises were kept on the machine, and (via the policy version string) whether new policy has been pulled down from the hub. The kept percentage is the main one - we want to see it at 100%.

Promise Directory Location

Promises are kept on the client in /var/cfengine/inputs/. You may want to inspect these files to see if the client has pulled down a promise or template file from the policy hub.

You should find that all promise and template files get synced to the client, even those that do not get executed on or apply to that client. Compare the files here with the files on the policy hub.

Manually run the cfengine rules on the client

This gives output straight to the screen for you to read. Any errors will be displayed on screen.

  cf-agent --no-lock --inform -f /var/cfengine/inputs/promises.cf

Find out the current environment

We use different environments, as described in the "Learning CFEngine 3" book. To see what environment applies to a client, we have written a bundle to write the environment string to a file. Then all you have to do is inspect this file on any client to see which environment is in effect. Here's the bundle ($(g.environment) below is a stand-in for wherever your own policy defines the environment name):

  bundle agent current_environment_info
  {
    vars:
        "curr_environment" string => "/etc/current_environment";

    files:
        "$(curr_environment)"
          create => "true",
          edit_defaults => empty,
          edit_line => write_environment_string;
  }

  bundle edit_line write_environment_string
  {
    insert_lines:
        "Environment: $(g.environment)";
  }

Thursday, 10 December 2015

Purging old backup files in Linux

Here's a quick script for cleaning out old backup files. We organise our backups into directories according to how long we want to keep them. So when writing your backup script, you do not need to think about purging old backup files - this is done automatically for you. This cleanup script goes through each of the backup directories and cleans them out when the timestamps on backup files exceed the given retention period.

This has since been replaced by a proper backup system, but it still remains for devices that cannot have the backup client installed - for example, config dumps from network devices running a small proprietary OS.

We saved this script in /usr/local/bin/
Here's the script in full:


#!/bin/sh

# exit if a variable is not initialised
set -u

## variables for editing - point these at your own backup directories
KEEPFOR_EVER="/srv/backups/keepforever"
KEEPFOR_ONEMONTH="/srv/backups/onemonth"
KEEPFOR_THREEMONTHS="/srv/backups/threemonths"
KEEPFOR_TWOYEARS="/srv/backups/twoyears"

#keepfor ever
printf "== Processing keepfor ever\n"
printf "No action needed in directory ${KEEPFOR_EVER}\n"

#keepfor onemonth
printf "== Processing onemonth dir\n"
/usr/bin/find "${KEEPFOR_ONEMONTH}" -type f -mtime +31 -exec rm -vf {} \;

#keepfor threemonths
printf "== Processing threemonths dir\n"
/usr/bin/find "${KEEPFOR_THREEMONTHS}" -type f -mtime +92 -exec rm -vf {} \;

#keepfor twoyears
printf "== Processing twoyears dir\n"
/usr/bin/find "${KEEPFOR_TWOYEARS}" -type f -mtime +730 -exec rm -vf {} \;

Some notes on the script

  • set -u is included at the start of the script to make it exit if it encounters an unset variable. I mention this because we are using the rm command, and if an unset variable formed part of a path passed to rm, we could potentially delete files starting at the root directory.
  • The second point I want to make is to think about how long you want to keep backups for. You will notice that we have one directory where we only keep files for a month. This is very short, and I would not recommend keeping critical files for only one month. Instead, keep them for as long as possible - typically measured in years, not months. Do some research on the subject; there are books and websites that cover this topic in detail.
  • The aim of this script was to stop the server hard disk from filling up, and to keep it that way automatically. Adjust it for your own needs as required.
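Before pointing the script at real backups, you can rehearse one of its find expressions in a throwaway directory to convince yourself the age cutoff behaves as expected (all paths here are temporary):

```shell
#!/bin/sh
set -u

# Rehearse the one-month retention rule in a scratch directory
DEMO_DIR=$(mktemp -d)
touch -d '40 days ago' "$DEMO_DIR/old-backup.tar.gz"   # past the 31-day cutoff
touch "$DEMO_DIR/fresh-backup.tar.gz"                  # made just now

# The same expression the purge script uses for the onemonth directory
/usr/bin/find "$DEMO_DIR" -type f -mtime +31 -exec rm -vf {} \;

ls "$DEMO_DIR"   # only fresh-backup.tar.gz should remain
rm -rf "$DEMO_DIR"
```

Note that -mtime +31 matches files whose modification time is strictly more than 31 whole days ago, which is exactly what we want here.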

I would recommend running this script once daily with cron. On Debian systems you do this by creating a script in /etc/cron.daily/

Here's the contents of ours, stored in /etc/cron.daily/daily-scripts
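It only needs to call the purge script; a minimal sketch looks like this (the purge script's filename below is an assumption - use whatever name you saved it under in /usr/local/bin/):

```shell
#!/bin/sh
# /etc/cron.daily/daily-scripts -- run the backup purge once a day.
# The filename below is illustrative; point it at your own purge script.
/usr/local/bin/purge-old-backups
```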



One final note: don't forget to make both the purge script and the cron script executable with chmod +x. That's it!

Friday, 2 October 2015

Install Arch on a Raspberry Pi with MATE Desktop

These instructions show you how to get GNU/Linux with the MATE desktop installed on a Raspberry Pi using Arch Linux ARM. Arch Linux ARM is a distribution of Linux for ARM computers that can be installed as an alternative to Raspbian, which usually comes pre-installed on Raspberry Pi SD cards. It is a lightweight, up-to-date, rolling distribution; read on if you want to try it out.

To get started, navigate to the install instructions for your Pi on the Arch Linux ARM website. These instructions tell you how to set up partitions, how to install the base operating system, and where to download the base Arch Linux image from. They assume that you have access to a Linux desktop or laptop computer, and all the commands are Linux commands.

For Raspberry Pi 1 model B and model B+ boards, follow the ARMv6 instructions. If you have a Raspberry Pi 2, follow the ARMv7 ones.

After completing the install instructions, put the SD card into your Pi, boot up, and log in as root; you will find the root password at the end of the install instructions.

As root, perform a system wide update with the command:

  # pacman -Syu

Once done, install the MATE desktop and the LightDM display manager:

  # pacman -S xorg xf86-video-fbdev mate mate-extra lightdm lightdm-gtk-greeter

Then configure LightDM to start on boot, which gives the Pi a graphical user interface at startup:

  # systemctl enable lightdm

Finally, change the default passwords for the alarm and root users. As root, issue the following commands from a command prompt:

  # passwd alarm
  # passwd root

An optional extra configuration is to enable autologin. You can get instructions for doing this from the Arch wiki here.

You now have a bleeding-edge rolling distribution on your Raspberry Pi. For further reading and information on using Arch, the Arch Linux wiki is an excellent source of information.

Wednesday, 8 April 2015

Network booting and Imaging with Clonezilla and PXELINUX

If you want to quickly look at the PXE menu file you can do so here

OK, there's a lot of stuff about PXE out there; I thought I'd do an overview of our current PXE setup. We use PXELINUX to boot from the network and provide network boot images. We mainly use this setup for deploying PC images with Clonezilla.

We deploy about 250 PCs in the space of a couple of days every year and so we had a few goals for this PXE boot setup.

  • It had to integrate with the current network setup in our organisation.
  • It had to be easy to use.
  • It had to be as zero touch as possible.
  • It had to be reliable.

To meet these requirements, we went with Clonezilla Live and made it bootable over the network. We could then pass preseeded answers to all the questions asked by the live edition; in fact, multiple menu items with preseeded answers were created, one for each task. This reduced errors when using the system and made imaging accessible and easy to use.

As mentioned, we use Clonezilla boot options heavily to pre-answer questions that would get the same answers on every boot. The Clonezilla developers are very helpful, and you can basically pre-answer every question that gets asked, including which image to pull down! This saves a lot of typing.

To find out what answers you need to pass at boot time, the general idea is to burn a Clonezilla ISO to CD and do the tasks you need once, manually. At the end, the Clonezilla CD will summarise the chosen options; take note of these and include them in your network boot menu. Here's a link to the Clonezilla documentation on the subject.
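To give a flavour of what that preseeding looks like in practice, here is a rough Clonezilla restore entry for the menu file - treat every value as an illustration (the image name my-image, the paths, the hostname pxe, and the ocs-sr options will all differ; our real entries are in the linked menu file):

```
LABEL clonezilla-restore
  MENU LABEL Restore standard PC image
  KERNEL images/clonezilla/vmlinuz
  APPEND initrd=images/clonezilla/initrd.img boot=live union=overlay config noswap nolocales fetch=http://pxe/images/clonezilla/filesystem.squashfs ocs_live_batch="yes" ocs_live_run="ocs-sr -g auto -e1 auto -e2 -r -j2 -p reboot restoredisk my-image sda"
```

The fetch= parameter is what pulls the live filesystem over HTTP rather than TFTP, and ocs_live_run carries the pre-answered ocs-sr command that the summary screen gave us.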

Clonezilla needs storage space on the network, we used a separate storage server with access over SSH for this. This means that all of our PC images are password protected and can be in separate user accounts for separate departments or people. I won't cover the setup of the storage server since it's just a plain old SSH server.

Here's the tutorial that I got a lot of my PXE setup information from, our setup differs where we use nginx instead of apache, but largely it's the same.

The overall system is made up of a Debian GNU/Linux server which serves the PXE boot images over TFTP and the network OS filesystems over HTTP. A second server acts as storage for the Clonezilla PC images to be deployed. This storage server can be a Windows (Samba) server, an SSH server, or an NFS server.

When PCs boot up, they get the address of the PXE server and the name of the file to boot via DHCP. This DHCP configuration snippet was given to the network admins to add to the organisation's DHCP configuration (the next-server address below is an example - use your own PXE server's IP). We asked the network guys to apply this configuration only to specific subnets, as PXE booting was not going to the whole network.
    ##### PXE-specific configuration directives...
    allow booting;
    allow bootp;
    next-server 192.0.2.10;
    filename "pxelinux.0";

We used tftpd-hpa on our Debian server to serve up the PXE files over TFTP. Here's our current /etc/default/tftpd-hpa (the username, directory, and address lines are the Debian defaults):

    # /etc/default/tftpd-hpa

    TFTP_USERNAME="tftp"
    TFTP_DIRECTORY="/srv/tftp"
    TFTP_ADDRESS="0.0.0.0:69"
    TFTP_OPTIONS="--secure --ipv4"

Nginx was configured to serve the same directory over HTTP. This allows the larger squashfs files to be downloaded over HTTP, which is much faster than TFTP. Here's a sample /etc/nginx/sites-available/pxe:

    server {
        listen      80;
        server_name pxe;
        root        /srv/tftp;
    }

Then enable this config by linking to the file from sites-enabled and restarting nginx (if the default config is in here, remove it):

    cd /etc/nginx/sites-enabled/
    ln -s ../sites-available/pxe
    /etc/init.d/nginx restart

Next, copy some pxelinux files into the TFTP directory. On the Debian server, install syslinux and then (look at step 5 here) copy the pxelinux files that get installed. To be honest, different tutorials recommend different sets of files to copy - some more, some fewer. It all depends on the features you use in your PXE menus. Here are the ones I use:

    apt-get install syslinux
    cp /usr/lib/syslinux/pxelinux.0 /srv/tftp
    cp /usr/lib/syslinux/chain.c32 /srv/tftp
    cp /usr/lib/syslinux/ifcpu64.c32 /srv/tftp
    cp /usr/lib/syslinux/mboot.c32 /srv/tftp
    cp /usr/lib/syslinux/memdisk /srv/tftp
    cp /usr/lib/syslinux/menu.c32 /srv/tftp
    cp /usr/lib/syslinux/vesamenu.c32 /srv/tftp

OK, so now we are nearly ready to serve the PXE boot images over TFTP. Next we need to create the PXE menu and add some network-enabled operating systems (e.g. Clonezilla Live network boot).

Available in our network boot menus are: the memory error checker memtest86+, Clonezilla, a Debian Live LXDE desktop environment, GParted, and the System Rescue CD. Finding out exactly where to download these images can be troublesome, so we'll go through them. Generally you are looking for a zip file named similarly to the corresponding ISO file; the zip contains the network boot version of that ISO.

memtest86+ - On the downloads page, download the pre-compiled bootable binary.

Clonezilla - From the project's front page, it's in Downloads -> stable releases -> select CPU architecture "i686-pae" and file type "zip". You may want a different CPU architecture; read the notes on this page.

Debian Live LXDE - From the front page: under User, Download, releases, stable, amd64, webboot. Look for the latest version of the desktop you want, then download the three files ending in vmlinuz, initrd.img, and squashfs. These are the kernel, the initial RAM filesystem, and the live filesystem respectively.

GParted - Download the GParted Live zip file from here.

System Rescue CD - All the files you need are on the ISO file. The files you want to copy from the CD are:


Once I had downloaded all of these files, I made an images directory under /srv/tftp/ and copied the various images into a directory hierarchy. I'll just run a tree command on the filesystem and you can work out what goes where. Here it is, and here are just the directories.

Finally, on to creating the PXE boot menu itself. Create a directory under /srv/tftp called pxelinux.cfg, and in it create a file called default. This file contains all of the menu items and options. Again, I'll just post our complete working menu file so that you can compare it to your own config files.

    mkdir /srv/tftp/pxelinux.cfg
    touch /srv/tftp/pxelinux.cfg/default

Here's a link to our PXE menu file. Some menu items have a password associated with them (which is blah); these hashes are generated with the sha1pass tool. You can also optionally hide the menu completely by uncommenting two lines near the top of the file - the ones starting with MENU SHIFTKEY and NOESCAPE.
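If you'd rather start smaller than our full menu file, a minimal pxelinux.cfg/default with a local-boot default and a memtest entry looks like this (the images/memtest path is an assumption matching the directory layout above; note that the memtest86+ binary is usually renamed to drop its .bin extension so that PXELINUX loads it as a kernel):

```
DEFAULT vesamenu.c32
PROMPT 0
TIMEOUT 100

MENU TITLE PXE Boot Menu

LABEL local
  MENU LABEL Boot from local disk
  MENU DEFAULT
  LOCALBOOT 0

LABEL memtest
  MENU LABEL Memtest86+ memory test
  KERNEL images/memtest/memtest
```

From there you can add entries one at a time, testing each boot before moving on to the next.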

There's a lot of information here, and a lot that can go wrong with your setup. If you feel that some aspect needs more explanation, leave a comment and I can do a post that specifically covers that area. Anyway, I hope this helps someone out there.