* ROCCurves traverse the ranked list of correspondences;
* hence, like PRGraphs, they are only relevant to this case.
* The X-axis is the number of incorrect correspondences,
* the Y-axis the number of correct correspondences.
* The curve is expected to grow fast at first and then much more slowly;
* this indicates the accuracy of the matcher.
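*
* A minimal sketch of the traversal, assuming the correspondences are already
* sorted by decreasing confidence and the reference alignment is given as a set
* of entity pairs (Correspondence and both parameters are illustrative stand-ins,
* not the types actually used here):
*
*   // requires java.util.List, java.util.ArrayList, java.util.Set
*   record Correspondence( String entity1, String entity2, double confidence ) {}
*
*   static List<int[]> rocCurve( List<Correspondence> ranked,
*                                Set<List<String>> referencePairs ) {
*       List<int[]> curve = new ArrayList<>();
*       int correct = 0, incorrect = 0;
*       curve.add( new int[]{ 0, 0 } );
*       for ( Correspondence c : ranked ) {
*           if ( referencePairs.contains( List.of( c.entity1(), c.entity2() ) ) )
*               correct++;      // correct correspondence: the curve steps up
*           else
*               incorrect++;    // incorrect correspondence: the curve steps right
*           curve.add( new int[]{ incorrect, correct } );
*       }
*       return curve;
*   }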
*
* The "Surface Under Curve" (AUC) is returned as a result.
* AUC is in fact the percentage of surface under the curve (given by N the size of the reference alignment and P the size of all pairs of ontologies, N*P is the full size), Area/N*P.
* It is ususally interpreted as:
* 0.9 - 1.0 excellent
* 0.8 - 0.9 good
* 0.7 - 0.8 fair
* 0.6 - 0.7 poor
* 0.0 - 0.6 bad
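*
* A hedged sketch of the AUC computation on the step curve built above, with n
* the size of the reference alignment and p the number of all candidate pairs
* (parameter names are illustrative):
*
*   // Each rightward step (one more incorrect correspondence) contributes a
*   // column whose height is the current number of correct correspondences;
*   // the summed area is divided by the full rectangle n*p.
*   static double auc( List<int[]> curve, int n, int p ) {
*       long area = 0;
*       for ( int i = 1; i < curve.size(); i++ ) {
*           int dx = curve.get( i )[0] - curve.get( i-1 )[0];  // 1 if incorrect, 0 if correct
*           area += (long)dx * curve.get( i )[1];
*       }
*       return area / ( (double)n * p );
*   }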
*
* The problem with these measures is that they assume that the provided alignment
* is complete, i.e., that it contains all pairs of entities; AUC values are then
* comparable because they depend on the same sizes. This is not the case when
* alignments are incomplete. Hence, the measure should be normalised.
*
* There are two ways of doing this:
* - simply Area/(N*P), but this would advantage
* - considering the current subpart, Area/N*|
* => both would advantage matchers with high precision
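*
* A hedged sketch of the two normalisations; the denominator of the second
* variant is truncated above, so the number of pairs actually covered by the
* evaluated alignment is used here purely as an illustrative placeholder:
*
*   // First variant: normalise by the full rectangle N*P.
*   static double aucOverFullSpace( double area, int n, int p ) {
*       return area / ( (double)n * p );
*   }
*
*   // Second variant (placeholder denominator): normalise only by the part of
*   // the space covered by the evaluated alignment.
*   static double aucOverCoveredPart( double area, int n, int coveredPairs ) {
*       return area / ( (double)n * coveredPairs );
*   }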