\times 4/3 { a8( b c) }
And as of LilyPond 2.17:
\tuplet 3/4 { a8( b c) }
In choir scores, you often have the score for two voices (e.g., soprano and alto) in one line:
\new Staff = "Frauen"<< \new Voice = "Sopran" { \voiceOne \global \soprano } \new Voice = "Alt" { \voiceTwo \global \alto } >>
When both voices have a rest of the same length at the same time, LilyPond will still print two rests at different positions. If you (like me) think this looks weird, here is how you can change it:
soprano = \relative c' { a2 \oneVoice r4 \voiceOne a4 }
alto = \relative c' { a2 s4 a4 }
In one voice, switch to a single voice with \oneVoice for the rest and then switch back to the usual voice, here with \voiceOne. If you do the same in the other voice, you will get warnings about clashing notes, so instead of a rest, use an invisible rest (spacer) with s.
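Putting the snippets together, a minimal compilable sketch might look as follows. The \version line and the \global definition are placeholders I have assumed here, since they are not part of the snippets above:

\version "2.16.2"

% assumed placeholder for the shared key and time signature
global = { \key c \major \time 4/4 }

soprano = \relative c' { a2 \oneVoice r4 \voiceOne a4 }
alto = \relative c' { a2 s4 a4 }

\score {
  \new Staff = "Frauen" <<
    \new Voice = "Sopran" { \voiceOne \global \soprano }
    \new Voice = "Alt" { \voiceTwo \global \alto }
  >>
}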
An alternative is the following command, which causes all rests to appear in the middle of the staff. It should be used inside the \layout block:
\override Voice.Rest #'staff-position = #0
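A sketch of how that might look inside \layout, wrapping the override in a \context block (assuming the same #'property syntax as above):

\layout {
  \context {
    \Voice
    \override Rest #'staff-position = #0
  }
}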
In the last post we discussed accuracy, a straightforward method of measuring the performance of a classification system. Using accuracy is fine when the classes are of equal size, but this is often not the case in real-world tasks. In such cases, the very large number of true negatives outweighs the number of true positives in the evaluation, so accuracy will always be artificially high.
Luckily, there are performance measures that ignore the number of true negatives. Two frequently used measures are precision and recall. Precision P indicates how many of the items that we have identified as positives really are positives. In other words: how precise have we been in our identification? How many of those that we think are X really are X? Formally, this means that we divide the number of true positives by the number of all identified positives (true and false):
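\[ P = \frac{TP}{TP + FP} \]

where TP is the number of true positives and FP the number of false positives.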
Recall R indicates how many of the real positives we have found. So of all the positive items that are there, how many did we manage to identify? In other words: how exhaustive were we? Formally, this means that we divide the number of true positives by the number of all existing positives (true positives and false negatives):
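\[ R = \frac{TP}{TP + FN} \]

where FN is the number of false negatives.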
For our example from the last post, precision and recall are as follows:
It is easy to get a recall of 100%: we just say for everything that it is a positive. But as most likely not everything really is a positive (or else we have a really easy dataset to classify!), this approach will give us a really low precision. On the other hand, we can usually get a high precision if we classify as positive only one single item that we are really, really sure about. But if we do that, recall will be low, as there will be more than one item in the dataset to be classified (or else it is not a very meaningful set).
So recall and precision are in a sort of balance. The F1 score or F1 measure is a way of putting the two of them together to produce one single number. Formally, it is the harmonic mean of the two values and weights both with the same importance (there are other variants that put more weight on one of them):
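\[ F_1 = \frac{2 \cdot P \cdot R}{P + R} \]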
Using the values for precision and recall for our example, F1 is:
Intuitively, F1 lies between the two values of precision and recall, but closer to the lower of the two. In other words, it penalizes systems that concentrate on only one of the values and rewards systems where precision and recall are closer together.
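For example, with a hypothetical precision of 1.0 and a recall of 0.2 (numbers chosen purely for illustration), the arithmetic mean would be 0.6, but the harmonic mean is

\[ F_1 = \frac{2 \cdot 1.0 \cdot 0.2}{1.0 + 0.2} \approx 0.33 \]

which is much closer to the lower value.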
Link for a second explanation: Explanation from an Information Retrieval perspective