Regular backup with rsync and cron

This is nothing new, just for me to remember. We use rsync and cron to make a backup of all home directories regularly. There will be a weekly backup that is readily accessible and a zipped monthly backup.

First, we need to install rsync (this is for Ubuntu, replace with the package manager of your choice):

apt-get install rsync

Then we need to create the directories where we want to save the backups. Here I am putting them into /media/backup for no particular reason; use any directory you like.

mkdir /media/backup/weekly
mkdir /media/backup/monthly

Next the command that actually copies the files:

rsync -a --exclude=".*/" --delete /home/ /media/backup/weekly

The command uses rsync, which is the tool for the job. We want to back up the complete /home folder with all the directories it contains. We exclude hidden directories (names starting with a dot) to avoid copying .cache and other temporary files. You may want to refine this pattern for your case. The option --delete removes files from the target directory /media/backup/weekly that no longer exist in the source, so last week’s data is replaced by this week’s data when the backup runs. I’d suggest running this command directly to see what happens before proceeding with the automation.
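If you want to see what the command would do before it actually does anything, rsync has a dry-run option; adding -v lists the files that would be copied or deleted:

rsync -anv --exclude=".*/" --delete /home/ /media/backup/weekly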

Now that we know how to copy the data, we just need to execute this command regularly. This is done with cron via this command:

crontab -e

This opens an editor with all cron jobs that are currently set up. Add two lines for the backup:

00 18 * * 5 rsync -a --exclude=".*/" --delete /home/ /media/backup/weekly
00 6 1 * * tar -cjf /media/backup/monthly/monthly_$(date +%Y%m%d).tar.bz2 /media/backup/weekly/

The five fields at the beginning of each line give the minute, hour, day of the month, month and day of the week. The rest of each line is the command. The first line copies the data to /media/backup/weekly every Friday afternoon (at 18:00), so we always have a backup that is at most a week old. The second line runs on the first day of every month at 6:00 and stores the data in /media/backup/monthly as a compressed archive.
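For reference, this is how the time fields of the two lines break down (day of week 5 means Friday; lines starting with # are comments and could go straight into the crontab as a reminder):

# field:   minute  hour  day of month  month  day of week
# weekly:  00      18    *             *      5    -> every Friday at 18:00
# monthly: 00      6     1             *      *    -> 1st of each month at 6:00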

Load home directories from a server

The setting is the following: I have a pool of 24 computers and about 20 students who need to be able to log in at any of the computers and access their data. Basically the normal setup of a computer pool. Of course there are many solutions for this problem (LDAP and so on), but it is more fun to create your own solution!

The basic idea is that the home directories are loaded from the server and overwrite the home directories of the clients. The accounts are created directly on each computer, but a user has the same user ID on every computer, so that the mapping of permissions works.

Now for the details. First the server. As a first step, install the NFS server package:

apt-get install nfs-kernel-server

Configure what should be exported. This is done in the file /etc/exports:

vi /etc/exports

We want to export the folder /home/ and make it available for all computers in our pool (the subnet 1.22.333.* – of course that’s not the correct IP). So we add this line to the file:

/home/  1.22.333.0/255.255.255.0(rw,async)

We re-read the configuration to let the changes take effect:

exportfs -ra

Now we can check if the correct folder is exported:

exportfs -v

Finally, we create all student accounts on the server. This will also create a home directory for each one. We use fixed user IDs, so for example we will have hans with UID 1010, lisa with UID 1011, kim with UID 1012, and so on.
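A minimal sketch of creating such accounts with fixed UIDs (names and UIDs as in the example above; -m also creates the home directory):

useradd -m -u 1010 hans
useradd -m -u 1011 lisa
useradd -m -u 1012 kim
passwd hans    # set an initial password for each new account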

Now for the clients. As a first step, install the NFS client package:

apt-get install nfs-common

Now we could mount the exported folder from the server by hand, but because we want it mounted permanently, we will use the global fstab file for this:

vi /etc/fstab

In this file, insert the following line (where 1.22.333.4 is the server IP):

1.22.333.4:/home/    /home/  nfs     rw,soft 0       0

Restart the computer for the changes to take effect. And finally, again, we need to create all student accounts on each computer and take care to assign the same UIDs.
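Instead of a full reboot, mounting the new fstab entry by hand should also do the trick, as long as nobody is logged in with an open home directory on the client:

mount -a   # mounts everything from /etc/fstab that is not mounted yet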

Empty panel in XFCE

XFCE has a very annoying property for new users. When you start the desktop for the first time, it asks whether you want to use the “empty panel” or the “default panel”. Unfortunately, people who are new to Linux (and even some that are not so new) have no idea what the question is asking. What you usually want to click is “default panel”. Clicking on “empty panel” will usually result in unhappy users – the desktop will be completely empty. Nothing there, not even a logout button. Bad luck for the newbie.

So in a pool where I expect most users to know little to nothing about Linux, it may be a good idea to simply remove the question completely. This can be done by copying the default panel configuration to a specific place (why there? don’t ask me – but it works):

cp /etc/xdg/xfce4/panel/default.xml /etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-panel.xml

If the user has already clicked “empty panel”, the above doesn’t work. What you can do is bring the question back by removing a few files:

rm -r ~/.config/xfce4/xfconf
rm -r ~/.config/xfce4/desktop
rm -r ~/.cache/sessions/

Bash settings

A few settings to be put into the .bashrc.

Ignore duplicates in history, but still record commands that start with a space:

HISTCONTROL=ignoredups # (default: ignoreboth)

Keep a lot of history:

export HISTSIZE=10000  # default: 1000
export HISTFILESIZE=10000  # default: 1000

When the shell exits, append to the history file instead of overwriting it:

shopt -s histappend 

Disable the annoying graphical ssh password prompt (SSH_ASKPASS) in KDE:

unset SSH_ASKPASS
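The settings only take effect in new shells; to apply them to a terminal that is already open, re-read the file:

source ~/.bashrc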

sshfs – mount files over ssh

Mount a file system on a different computer via ssh:

sshfs -o follow_symlinks user@server:/home/user/ /path/to/mount/point

server is the other computer, user is your username on the other computer and /home/user/ is the folder you want to mount from the other computer. /path/to/mount/point is the place on your drive where the files will appear. It needs to be a folder that exists and is empty.
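A short example, assuming ~/mnt/server as the mount point (any empty directory will do):

mkdir -p ~/mnt/server
sshfs -o follow_symlinks user@server:/home/user/ ~/mnt/server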

To get rid of the mounted folder again use

umount /path/to/mount/point
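If umount complains about permissions (sshfs mounts are FUSE mounts), unmounting as the same user via fusermount usually works:

fusermount -u /path/to/mount/point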

How to get WiFi running on Suse Leap 42.3 (Broadcom driver)

After the update from Suse Leap 42.2 to Suse Leap 42.3, my WiFi stopped working, which is kind of bad, because I need the internet to figure out what is wrong…

This was the situation right after the update, when it was not working:

> lspci -nnk | grep -A 3 "Network"
04:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
        Subsystem: Hewlett-Packard Company Device [103c:804a]
        Kernel driver in use: bcma-pci-bridge
        Kernel modules: bcma
> hwinfo --short
network:
  eth0                 Realtek RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller
                       Broadcom BCM43142 802.11b/g/n
network interface:
  eth0                 Ethernet network interface
  lo                   Loopback network interface
> iwconfig
lo        no wireless extensions.
eth0      no wireless extensions.
> lsmod | grep "wl"

No WiFi to be seen!

So now this is what I did:

  1. Remove the old driver:
    > rpm -e broadcom-wl broadcom-wl-kmp-default 
    
  2. Find out my exact kernel version (the last part is the part we need, i.e., “default”):
    > uname -r
    4.4.104-39-default
    
  3. Add the Packman repository to my repositories:
    > zypper addrepo http://packman.inode.at/suse/openSUSE_Leap_42.3/ packman
    
  4. Install the drivers, paying attention to my kernel type (…-“default”):
    > zypper install broadcom-wl-kmp-default broadcom-wl
    

    You can also download the rpm by hand and install it. In that case, you need to pay attention to the full kernel version: for my kernel 4.4.104-39, I should install the driver from an rpm like broadcom-wl-kmp-default-6.30.223.271_k4.4.49_19-3.6.x86_64.rpm, where the numbers after the k have to match the kernel exactly. Using Packman does that for you.

    Another issue I had with manual installation was missing keys. At least my configuration requires a valid PGP key and aborts if the key is not in the key list, and I didn’t have a key for the downloaded rpms. It is possible to tell rpm to install the packages without checking the key (option --nosignature), but that did not properly install the package (without error messages, of course). When installing with zypper, it takes care of the key itself and you don’t have to worry.

  5. I rebuilt the initrd and then restarted, but I am not sure this is necessary:
    > mkinitrd
    

Finally, the outputs of the above commands are (for reference, the next time it breaks):

> lspci -nnk | grep -A 3 "Network"
04:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
        Subsystem: Hewlett-Packard Company Device [103c:804a]
        Kernel driver in use: wl
        Kernel modules: bcma, wl
> hwinfo --short
network:
  eth0                 Realtek RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller
  wlan0                Broadcom BCM43142 802.11b/g/n

network interface:
  wlan0                WLAN network interface
  eth0                 Ethernet network interface
  lo                   Loopback network interface
> iwconfig
lo        no wireless extensions.
wlan0     IEEE 802.11abg  ESSID:"..."  
          Mode:Managed  Frequency:2.412 GHz  Access Point: ...   
          Bit Rate=65 Mb/s   Tx-Power=200 dBm   
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off
          Link Quality=70/70  Signal level=-39 dBm  
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:0   Missed beacon:0
eth0      no wireless extensions.
> lsmod | grep "wl"
wl                   6451200  0 
cfg80211              610304  1 wl

And it only took all afternoon … sometimes I hate Linux 🙁

Setting computer time from the internet [hacky way]

Most of my pool computers show the wrong time and most of them are different. Just for fun, here are the times shown by those running at the moment of the poll:

8:36 (2x), 8:40, 9:36 (2x), 10:35, 10:36 (3x), 10:39 (6x), 11:36 (2x), 11:40

I assume it is the result of setting the time wrong during installation and then a few semesters of trying to fix some of them (those running at the moment, the first three rows, until the admin got bored, a single one now and then, …), adjusting for daylight saving time or forgetting it, and so on.

So this is what I tried to get them back on track (courtesy of AskUbuntu.com):

sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"

The line first fetches a random web page (here google.com) and prints the headers of the HTTP response, e.g.:

  HTTP/1.1 302 Found
  Cache-Control: private
  Content-Type: text/html; charset=UTF-8
  Referrer-Policy: no-referrer
  Location: http://www.google.de/?gfe_rd=cr&dcr=0&ei=mVYyWqvyKNHPXuKYpeAP
  Content-Length: 266
  Date: Thu, 14 Dec 2017 10:46:49 GMT

The line then picks out the part of the response with the date using grep. It splits this line at spaces with cut -d' ' and uses fields 5 to 8. In this line, field 4 is the day of the week, field 3 is the text Date:, and fields 1-2 are empty because of the leading spaces. So fields 5 to 8 give a date and time in a format that the tool date can understand. Before passing the time on to date, the letter Z is appended. This Z marks the value as UTC, so the time zone set on the computer is taken into account when the clock is set.

So the line after evaluating wget, grep and cut for the example page we got will be:

sudo date -s "14 Dec 2017 10:46:49Z"

The option -s sets the date to the specified value. So if the request ran in a reasonable time, we should have a reasonably accurate time set for the computer.
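To keep the clocks roughly in sync instead of fixing them once, the line could be combined with cron from the first section, e.g. run once a day from root’s crontab (just a sketch; as root the sudo is not needed):

00 7 * * * date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"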

PS: Yes, I know that there is such a thing as NTP and I know that time synchronization is not a problem that you need to hack on your own. But this version is much more freaky and cool!! [Also NTP and the university firewall don’t seem to be friends]