About swk

I am a software developer, data scientist, computational linguist, teacher of computer science and, above all, a huge fan of LaTeX. I use LaTeX for everything, including things you never wanted to do with LaTeX. My latest love is LilyPond, aka LaTeX for music. I'll post at irregular intervals about cool stuff, stupid hacks and annoying settings I want to remember for the future.

Redirect http to https with nginx

Create two server blocks, one for port 80 (http) and one for port 443 (https). Have the http block do nothing but redirect to the other one:

server {
        listen 80;
        listen [::]:80;
        server_name yesterdayscoffee.de www.yesterdayscoffee.de;
        return 301 https://$server_name$request_uri;
}

server {
        listen 443 ssl;
        listen [::]:443 ssl;

        # ... all the rest of your configuration
}
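What this redirect does on the wire can be sketched with Python's standard library. The little server below is only a stand-in for nginx, so you can watch the 301 status and the Location header; the host name and path are just examples:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirect(BaseHTTPRequestHandler):
    def do_GET(self):
        # Same effect as: return 301 https://$server_name$request_uri;
        self.send_response(301)
        self.send_header("Location", "https://yesterdayscoffee.de" + self.path)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Redirect)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client does not follow redirects, so we can inspect the raw answer
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/blog")
response = conn.getresponse()
print(response.status, response.getheader("Location"))
# 301 https://yesterdayscoffee.de/blog
server.shutdown()
```

A browser (or nginx client) receiving this answer immediately retries the same path on the https address.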


HTTPS with LetsEncrypt and nginx

First install Certbot on your computer using the instructions at https://certbot.eff.org/

Then you can create a certificate for yesterdayscoffee.de (with and without www) like this:

certbot -d yesterdayscoffee.de -d www.yesterdayscoffee.de --manual --preferred-challenges http certonly

During the process, you will be asked to create a file with specific content at a specific location on your web server (this is what the http challenge requires; there are other ways of proving that you control the domain). Once you have done this and everything is fine, the certificate will be created.

The certificate consists of a bunch of files in a location certbot tells you. You will probably need to put two files on the server: the private key privkey.pem and the certificate file fullchain.pem. Certbot suggests the folder /etc/letsencrypt/live/ as the location.

Now that you have the certificate, the next step is to tell the web server to use it for your web page. We are using nginx with the configuration file for our web page yesterdayscoffee.de at the default location /etc/nginx/sites-available/. The only thing to do is to listen on port 443 for the https protocol and specify the location of the certificate files. These are the lines:

listen 443 ssl;
listen [::]:443 ssl;

ssl_certificate /etc/letsencrypt/live/www.yesterdayscoffee.de/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.yesterdayscoffee.de/privkey.pem;

Restart nginx for the changes to take effect with:

sudo /etc/init.d/nginx restart

You may need to tell your firewall to open the https port for nginx:

sudo ufw allow 'Nginx HTTPS'

That’s it! Now it should work. Coming up: how to redirect http to https and how to renew certificates (which, with Let's Encrypt, you have to do every 90 days).

Copying changes into your branch (`rebase`)

Situation: You create a branch newfeature from develop at point X to develop your super cool new feature. You do some commits in your branch. Meanwhile, there have been some changes in develop that you want to have in your new branch. To get them, rebase your branch. Basically, this replays every commit in your branch on top of the current state of the branch you rebase against. Do a rebase as follows:

$ git checkout develop
Switched to branch 'develop'
$ git pull
$ git checkout newfeature
Switched to branch 'newfeature'
$ git rebase develop
First, rewinding head to replay your work on top of it...
Applying: intermediate commit

Before doing this, you need to commit or stash all your changes. During the rebase, if there are conflicts, you need to resolve them one by one for every commit in your branch. You can simplify conflict resolution in one of two ways: use the flag -Xours to always take the version in the branch that you are rebasing against (develop in our example); or use the flag -Xtheirs to always take the version in your branch.
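To see the effect of -Xtheirs in isolation, here is a throwaway sketch that builds a tiny repository in a temporary directory, creates a conflicting edit on both branches, and rebases with -Xtheirs. File name, branch name, and commit messages are made up; it assumes git is installed:

```python
import os
import subprocess
import tempfile

def run(*cmd, cwd):
    """Run a git command and fail loudly if it goes wrong."""
    return subprocess.run(cmd, cwd=cwd, check=True,
                          capture_output=True, text=True)

repo = tempfile.mkdtemp()
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.email", "you@example.com", cwd=repo)
run("git", "config", "user.name", "You", cwd=repo)

def commit(content, msg):
    with open(os.path.join(repo, "file.txt"), "w") as f:
        f.write(content)
    run("git", "add", "file.txt", cwd=repo)
    run("git", "commit", "-q", "-m", msg, cwd=repo)

commit("base", "base commit")                      # this is point X
# the default branch name differs between git versions, so look it up
base = run("git", "symbolic-ref", "--short", "HEAD", cwd=repo).stdout.strip()
run("git", "checkout", "-q", "-b", "newfeature", cwd=repo)
commit("feature version", "change in newfeature")  # conflicting edit
run("git", "checkout", "-q", base, cwd=repo)
commit("other version", "change in base branch")   # conflicting edit
run("git", "checkout", "-q", "newfeature", cwd=repo)
# -Xtheirs resolves the conflict with the version from the rebased branch
run("git", "rebase", "-Xtheirs", base, cwd=repo)
print(open(os.path.join(repo, "file.txt")).read())  # prints "feature version"
```

With -Xours instead, the file would end up containing "other version", the state of the branch we rebased against.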

When there are conflicts, you need to resolve them. To resolve a conflict, edit the file that has the conflict and clean up all places indicated by the conflict markers (<<<<<<<, =======, >>>>>>>). Then add the modified files to the staging area and run git rebase --continue; you can commit yourself, but you do not have to. For conflicts that you know will be resolved later on anyway, you can also choose git rebase --skip to ignore the problem (e.g., if a file was deleted in your branch and edited in the other, and you know you copied the new file into your branch in a later commit). At the end, there may be things you need to commit to finalize the rebase. Don't forget to push the result to your remote branch! Since the rebase rewrites the history of your branch, this will usually require a force push (ideally git push --force-with-lease).

Fixing the locale on Kubuntu

After installing my brand new Kubuntu, I got the following error:

locale: Cannot set LC_ALL to default locale: No such file or directory
perl: warning: Setting locale failed.

Here is how to fix it.

The first step is to see the current settings with locale:

> locale

The output already might give you a hint about what is wrong. In my case, there is an entry en_DE, which looks fishy. The next step is to see which locales are installed on your machine. This is done with the -a parameter of locale:

> locale -a

Here we find the problem: the output does not contain the locale “en_DE”. In this case, that is because “en_DE” does not actually exist. I have no idea where it came from, but somehow the combination of being in Germany and installing an English operating system caused it. So what I want to do is set everything that has the wrong locale to the correct locale “de_DE” instead.

As the German locale is not yet installed on our system, we first have to generate it. This is done with locale-gen:

> sudo locale-gen de_DE.utf8
Generating locales (this might take a while)...
  de_DE.UTF-8... done   
Generation complete.

Now we can set it as the default with the following:

sudo dpkg-reconfigure locales

You should restart the computer for the changes to take effect (just to be safe).

The LISA web crawler – Seek and you shall find

No matter what you are looking for, there is a page about it on the internet. But to be able to find that page, you first need a big pile of web pages, so that you can then read (or automatically analyze) them. So how do we get these pages? One might think of listing all possible addresses alphabetically. That would not get us far, since there is an infinite number of addresses. A better idea is the strategy that a web crawler uses.

Starting from a seed page, a web crawler visits more and more pages by following the links it finds. As an example, take the start page www.beispiel.de with links to two other pages, www.5analytics.com and www.lisa-sales.de. First, the crawler writes both links onto a list of pages still to be visited. This list is called the frontier. Then it picks a new page from the frontier, e.g., www.lisa-sales.de, and visits it. On the new page, the same process starts over: the page itself is saved, all links on the page are identified and added to the frontier, and then the next page gets its turn.
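In code, the frontier idea can be sketched like this; the link graph below is a made-up stand-in for actually fetching pages and extracting their links:

```python
from collections import deque

def crawl(start, links, limit=100):
    """Breadth-first crawl over a link graph given as a dict
    mapping page -> list of linked pages (stands in for real fetching)."""
    frontier = deque([start])          # pages still to visit
    seen = {start}                     # never put a page on the frontier twice
    visited = []                       # pages "saved" so far
    while frontier and len(visited) < limit:
        page = frontier.popleft()      # FIFO order = breadth-first search
        visited.append(page)           # "save" the page
        for url in links.get(page, []):
            if url not in seen:        # add newly discovered links
                seen.add(url)
                frontier.append(url)
    return visited

# the example graph from the text
links = {
    "www.beispiel.de": ["www.5analytics.com", "www.lisa-sales.de"],
    "www.lisa-sales.de": ["www.beispiel.de"],
}
print(crawl("www.beispiel.de", links))
# ['www.beispiel.de', 'www.5analytics.com', 'www.lisa-sales.de']
```

The `seen` set is what keeps the crawler from looping forever on pages that link back to each other, as www.lisa-sales.de and www.beispiel.de do here.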

The basic idea behind a web crawler is simple, but there are several things to consider in the implementation. First, there is the sheer size of the internet. A single crawler will not get far, so you will probably want to parallelize the process. The next question is how the next page to visit is chosen. The selection can follow different criteria: for example, pages that change frequently could be prioritized, or German pages could be visited first, or links that are as close as possible to the start page (breadth-first search). Dynamically generated pages, which differ depending on user input, are another difficulty. If a crawler visited the start page of a search engine, for example, the page would have different content and different links for every possible query, so the crawler could get caught on this page endlessly. As a last point, there is politeness. By design, web crawlers send very many requests, which could quickly overload the web servers hosting the pages. A crawler should therefore behave “politely” and wait between requests.

LISA contains a web crawler that visits pages from the German internet using breadth-first search. Several normalizations of the URLs in the frontier make sure that LISA does not get caught in dynamic content. LISA can run several crawler threads in parallel and follows politeness rules in order not to overload web servers. The pages retrieved by the crawler are stored in LISA and handed over to the LISA Analyzer, which then extracts the relevant information.

This post first appeared at lisa-sales.de.

Levels of sentiment analysis

Sentiment analysis can be done at different levels of granularity. We could determine the sentiment of a complete review, which would give us something similar to the star ratings. We can then go down to a more detailed level, e.g., to sentences or to aspects.

Document-level analysis
Input: Text of a complete document
Output: Polarity (as a label positive/negative/neutral or rating on a scale)

This task can be done fairly reliably with automatic methods, as there is usually some redundancy: if the method misses one clue, there are other clues that are sufficient to know which polarity is expressed. Document-level analysis makes a few assumptions that may not be true. First, it assumes that a document talks about a single target and that all opinion expressions refer to this target. This may be true in some cases, but especially in longer reviews people like to compare the product they are discussing to other similar products; they may describe the plot of a movie or book, give opinions about the delivery, tell stories about how they got the product as a gift, and so on. Second, it assumes that one document expresses one opinion, but human authors may be undecided. Finally, it assumes that the complete review expresses the opinion of one author, but there may be parts where other people’s opinions are cited (for completeness or to refute them).
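As a toy illustration of how redundancy helps at document level, here is a minimal lexicon-based sketch; the word list is made up and far too small for real use:

```python
# Toy polarity lexicon; a real system would use a large curated resource.
POLARITY = {"great": 1, "good": 1, "love": 1,
            "bad": -1, "horrible": -1, "hate": -1}

def document_polarity(text):
    """Sum word-level clues over the whole document.
    Because clues are summed, missing one of them rarely flips the label."""
    score = sum(POLARITY.get(word.strip(".,!?").lower(), 0)
                for word in text.split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(document_polarity("Great phone, good screen, but bad battery."))
# positive
```

Note how the example already violates the single-target assumption: the negative clue refers to the battery, yet the whole document gets one label.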

Sentence-level analysis
Input: One sentence
Output: Polarity + target

At sentence level, we can add the task of finding out what the sentence talks about (the target) to the task of determining the sentiment. While this level of analysis allows us, in some cases, to overcome the difficulties we discussed at document level, it still makes the same assumptions for each sentence. And all of them may be false even in a single sentence: there may be more than one target ("A is better than B"), more than one opinion ("I liked the UI, but the ring tones were horrible") or opinions of more than one person ("I liked the size, but my wife hated it").

Aspect-level analysis
Input: A document or sentence
Output: a set of tuples (polarity, target, and possibly holder)

Instead of using the linguistic units of a sentence or a document, we can use individual opinion expressions as the main unit of what we want to extract. The sentence "I liked the UI, but my wife thought the ring tones were horrible" would result in two tuples: (positive, UI, author) and (negative, ring tones, my wife). The different tuples can then be aggregated to get one polarity per aspect, per holder or even an overall polarity.
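The aggregation step can be sketched as follows; the input tuples are the made-up example from above, and the simple vote count stands in for whatever weighting a real system would use:

```python
from collections import defaultdict

def polarity_per_target(tuples):
    """Aggregate (polarity, target, holder) tuples into one label per target
    by counting positive and negative mentions."""
    score = defaultdict(int)
    for polarity, target, _holder in tuples:
        score[target] += 1 if polarity == "positive" else -1
    return {target: ("positive" if s > 0 else "negative" if s < 0 else "neutral")
            for target, s in score.items()}

tuples = [("positive", "UI", "author"),
          ("negative", "ring tones", "my wife")]
print(polarity_per_target(tuples))
# {'UI': 'positive', 'ring tones': 'negative'}
```

Grouping by the third element instead of the second would give one polarity per holder, and summing over all tuples gives the overall polarity.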