Distribute updates from server to pool computers (hacky way)

This is the crowning achievement of my days at the MINT computer pool. Sadly, by the time I write this post, it has all been deleted. So here it is, archived for posterity.

The setting is a typical computer pool: 24 computers and one server. All computers load their home directories from the server (see here). Otherwise, each computer is completely independent. The idea is that updates to the pool computers can also be distributed from the server, so that I don’t have to sit down at each computer and run a script by hand, which is a real pain. Again, this is a really common problem and many solutions exist – but I made my own, hacky one.

Setting up the server side of things is easy. Basically, we create a few folders and put one shell script in the home of a special user called admin. There is a folder /home/admin/poolsetup/script into which we put a script poolupdate.sh. You can get the script from my wkutils github repository. This script goes through all files in another folder, /home/admin/poolsetup/updates, and executes any scripts it finds in there. It redirects the outputs of the execution into a log file in the folder /home/admin/poolsetup/logs. We can use this file to check what happened. And the update script uses the log file to avoid executing scripts twice. If a log file for a given script exists, we don’t execute the script again.
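
The real poolupdate.sh is in the repository; just to give an idea, a minimal sketch of the logic described above could look like this (the per-computer log file name is my assumption):

#!/bin/bash
# Run every update script that has not been run on this machine yet.
UPDATES=/home/admin/poolsetup/updates
LOGS=/home/admin/poolsetup/logs

for script in "$UPDATES"/*.sh; do
    [ -e "$script" ] || continue                       # no update scripts present
    log="$LOGS/$(basename "$script").$(hostname).log"  # one log per script and computer
    [ -e "$log" ] && continue                          # log exists: already executed, skip
    bash "$script" > "$log" 2>&1                       # execute and capture all output
done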

Because the home directories are loaded from the server, each pool computer will have access to files in the home directory of admin. We don’t have to do any copying to distribute the update script to the pool computers and any changes to the script will take effect right away.

So here are the commands for the server setup – basically just create the necessary folder structure with the correct permissions and put the update script there:

mkdir -p /home/admin/poolsetup/script
mkdir -p /home/admin/poolsetup/updates
mkdir -p /home/admin/poolsetup/logs
chmod o+rx /home/admin/poolsetup/script
chmod o+rx /home/admin/poolsetup/updates
chmod o+rwx /home/admin/poolsetup/logs
wget https://raw.githubusercontent.com/Kaffeedrache/wkutils/master/admin/poolupdate.sh
mv poolupdate.sh /home/admin/poolsetup/script
chmod o+rx /home/admin/poolsetup/script/poolupdate.sh

On the client side, we only have to make the computer execute the update script on a regular basis. We use cron and therefore call crontab, which manages the cron jobs:

crontab -e

In the editor, we add these two lines:

00 17 * * 5 bash /home/admin/poolsetup/script/poolupdate.sh
@reboot bash -c "while [[ ! -d /home/admin/ ]] ; do sleep 5; done" ; bash /home/admin/poolsetup/script/poolupdate.sh

The first line executes the update script every Friday at 5 PM. The second line executes it at every system start. We need the ugly sleep loop to make sure the home directories have been mounted before we try to access them.

So how does making an update work now? Write a script that does whatever you want to do and put it into the folder /home/admin/poolsetup/updates. When a pool computer starts up, it will execute the script. After that, look into the log folder and read the corresponding log to see what happened. When all computers have executed the update, you usually only need to read one log file and then check that all the other log files have the same size. Done!
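
For example, a hypothetical update script updates/install-git.sh that installs a package on all pool computers could look like this (the package is just an example; the scripts run with the rights of the cron job, so installing software requires that it runs as root):

#!/bin/bash
# Hypothetical example update: install the package git on this pool computer.
apt-get update
apt-get install -y git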

Access a server with an SSH key

Install ssh on the server:

apt-get install openssh-server

Generate a key pair (files id_rsa and id_rsa.pub) with a passphrase:

ssh-keygen

Edit the ssh configuration file:

vi /etc/ssh/sshd_config

In the file, make the following settings:

   PasswordAuthentication no
   PermitRootLogin without-password
   RSAAuthentication yes
   PubkeyAuthentication yes
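
After changing the configuration, the SSH daemon has to be restarted to pick up the new settings. On Ubuntu this should do it (the service name may differ on other distributions):

service ssh restart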

Add the public key as an authorized key for root that can be used for login:

mkdir -p /root/.ssh
cat id_rsa.pub >> /root/.ssh/authorized_keys

That’s all for the server!

Now for the client. Copy the key pair into the folder ~/.ssh and make sure the private key is readable only by you (chmod 600 ~/.ssh/id_rsa), otherwise ssh will refuse to use it. Now you should be able to connect with:

ssh -i ~/.ssh/id_rsa root@server
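
To avoid typing the options every time, you could also add an entry to the client’s ~/.ssh/config (the host alias myserver is made up):

   Host myserver
      HostName server
      User root
      IdentityFile ~/.ssh/id_rsa

After that, ssh myserver is enough.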

Regular backup with rsync and cron

This is nothing new, just something for me to remember. We use rsync and cron to make a regular backup of all home directories. There will be a weekly backup that is readily accessible and a compressed monthly backup.

First, we need to install rsync (this is for Ubuntu, replace with the package manager of your choice):

apt-get install rsync

Then we need to create the directories where we want to save the backups. Here I am putting them into /media/backup for no particular reason; use any directory you like.

mkdir -p /media/backup/weekly
mkdir -p /media/backup/monthly

Next the command that actually copies the files:

rsync -a --exclude=".*/" --delete /home/ /media/backup/weekly

The command uses rsync, which is the tool for the job. We want to back up the complete folder /home with all the directories contained in there. We exclude hidden directories (names starting with a dot) to avoid copying .cache and other temporary data. You may want to refine this pattern for your case. The option --delete removes files from the target directory /media/backup/weekly that no longer exist in the source, so last week’s backup becomes an exact mirror of this week’s data when the backup runs. I’d suggest running this command directly to see what happens before proceeding with the automation.
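
For that first test, rsync’s dry-run mode is handy: with -n nothing is actually copied, and -v lists what would be transferred:

rsync -anv --exclude=".*/" --delete /home/ /media/backup/weekly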

Now that we know how to copy the data, we just need to execute this command regularly. This is done with cron via this command:

crontab -e

This opens an editor with all cron jobs that are currently set up. Add two lines for the backup:

00 18 * * 5 rsync -a --exclude=".*/" --delete /home/ /media/backup/weekly
00 6 1 * * tar -cjf /media/backup/monthly/monthly_$(date +\%Y\%m\%d).tar.bz2 /media/backup/weekly/

The numbers at the beginning of each line give the minute, hour, day of the month, month, and day of the week. The rest of each line is the command. The first line copies the data to /media/backup/weekly every Friday afternoon (at 18:00), so we always have a backup that is at most a week old. The second line is executed on the first day of every month at 6:00 and packs the data into a compressed archive in /media/backup/monthly. Note that the % characters have to be escaped as \% because % has a special meaning in crontabs.
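
To get data back out of a monthly archive later, unpack it somewhere (the file name is a made-up example following the pattern above; tar stores the paths without the leading slash):

mkdir -p /tmp/restore
tar -xjf /media/backup/monthly/monthly_20171201.tar.bz2 -C /tmp/restore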

Load home directories from a server

The setting is the following: I have a pool of 24 computers and about 20 students who need to be able to log in at any of the computers and access their data – basically the normal setup of a computer pool. Of course there are many solutions for this problem (LDAP and so on), but it is more fun to create your own!

The basic idea is that the home directories are loaded from the server and overwrite the home directories of the clients. The accounts are created directly on each computer, but a user has the same user ID on every computer, so that the mapping of permissions works.

Now for the details. First the server. As a first step, install the NFS server package:

apt-get install nfs-kernel-server

Configure what should be exported. This is done in the file /etc/exports:

vi /etc/exports

We want to export the folder /home/ and make it available for all computers in our pool (the subnet 1.22.333.* – of course that’s not the correct IP). So we add this line to the file:

/home/  1.22.333.0/255.255.255.0(rw,async)

We re-read the configuration to let the changes take effect:

exportfs -ra

Now we can check if the correct folder is exported:

exportfs -v

Finally, we create all student accounts on the server. This will also create a home directory for each one. We use fixed user IDs, so for example we will have hans with UID 1010, lisa with UID 1011, kim with UID 1012, and so on.
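
A sketch of how such accounts could be created with fixed UIDs (names and numbers are the examples from above; don’t forget to set passwords with passwd afterwards):

useradd -m -u 1010 -s /bin/bash hans
useradd -m -u 1011 -s /bin/bash lisa
useradd -m -u 1012 -s /bin/bash kim

The same commands, with the same UIDs, are repeated later on each client.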

Now for the clients. As a first step, we need to install the NFS client package:

apt-get install nfs-common

Now we could mount the exported folder from the server by hand, but because we want to mount it permanently, we use the global fstab file for this:

vi /etc/fstab

In this file, insert the following line (where 1.22.333.4 is the server IP):

1.22.333.4:/home/    /home/  nfs     rw,soft 0       0

Restart the computer for the changes to take effect (or run mount -a to mount everything from fstab right away). And finally, again, we need to create all student accounts on each computer and take care to assign the same UIDs as on the server.
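
To check whether the home directories really come from the server now, have a look at the mounted file systems:

df -h /home

If everything worked, the file system for /home is shown as coming from the server’s IP.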

Empty panel in XFCE

XFCE has a very annoying property for new users. When you start the desktop for the first time, it asks whether you want to use the “empty panel” or the “default panel”. Unfortunately, people who are new to Linux (and even some that are not so new) have no idea what the question is asking. What you usually want to click is “default panel”. Clicking on “empty panel” will usually result in unhappy users – the desktop will be completely empty. Nothing there, not even a logout button. Bad luck for the newbie.

So in a pool where I expect most users to know little to nothing about Linux, it may be a good idea to simply remove the question completely. This can be done by copying the default panel to a specific place (why? don’t ask me – but it works):

cp /etc/xdg/xfce4/panel/default.xml /etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-panel.xml

If the user has already clicked “empty panel”, the above doesn’t work. What you can do is get the question back by removing a few files (the question will then reappear at the next login):

rm -r ~/.config/xfce4/xfconf
rm -r ~/.config/xfce4/desktop
rm -r ~/.cache/sessions/

Setting computer time from the internet (hacky way)

Most of my pool computers show the wrong time, and most of them differ from each other. Just for fun, here are the times shown by those running at the moment of the poll:

8:36 (2x), 8:40, 9:36 (2x), 10:35, 10:36 (3x), 10:39 (6x), 11:36 (2x), 11:40

I assume it is the result of the time being set wrongly during installation and then a few semesters of trying to fix some of them (those running at the moment, the first three rows, until the admin got bored, a single one now and then, …), adjusting for daylight saving time or forgetting it, and so on.

So this is what I tried to get them back on track (courtesy of AskUbuntu.com):

sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"

The line first requests a random web page (here google.com) and prints the headers of the HTTP response, e.g.:

  HTTP/1.1 302 Found
  Cache-Control: private
  Content-Type: text/html; charset=UTF-8
  Referrer-Policy: no-referrer
  Location: http://www.google.de/?gfe_rd=cr&dcr=0&ei=mVYyWqvyKNHPXuKYpeAP
  Content-Length: 266
  Date: Thu, 14 Dec 2017 10:46:49 GMT

The line then retrieves the part of the response with the date using grep. It splits the date line at spaces with cut -d' ' and keeps fields 5 to 8. In this line, field 4 is the day of the week, field 3 is the text Date:, and fields 1 and 2 are empty because of the leading spaces. So fields 5 to 8 contain a date and time in a format that the tool date can understand. Before the time is passed on to date, the letter Z is appended. This Z stands for UTC, meaning the time zone set on the computer will be taken into account.

So after evaluating wget, grep and cut for the example page above, the line becomes:

sudo date -s "14 Dec 2017 10:46:49Z"

The option -s sets the date to the specified value. So if the request ran in a reasonable time, we should have a reasonably accurate time set for the computer.
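
If you want to see what the pipeline extracts before actually setting the clock, you can run it without the surrounding date -s:

wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8

For the example response above, this prints 14 Dec 2017 10:46:49.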

PS: Yes, I know that there is such a thing as NTP and I know that time synchronization is not a problem that you need to hack on your own. But this version is much more freaky and cool!! [Also NTP and the university firewall don’t seem to be friends]