Commit 230e8372 authored by Serrière Guillaume's avatar Serrière Guillaume

Update documentation box.


Signed-off-by: default avatarSerrière Guillaume <guillaume.serriere@inria.fr>
parent 0b8f5fb4
......@@ -1897,9 +1897,6 @@ OpenViBE::boolean CApplication::createPlayer(void)
if(res == 0)
{
releasePlayer();
//l_pCurrentInterfacedScenario->m_oPlayerIdentifier = OV_UndefinedIdentifier;
//l_pCurrentInterfacedScenario->m_pPlayer=NULL;
//m_rKernelContext.getPlayerManager().releasePlayer(l_oPlayerIdentifier);
return false;
}
}
......
......@@ -47,10 +47,15 @@ a new classification process is triggered, resulting in the generation of the co
* |OVP_DocEnd_BoxAlgorithm_ClassifierProcessor_Output1|
* |OVP_DocBegin_BoxAlgorithm_ClassifierProcessor_Output2|
This output reflects the classification algorithm status in the form of a matrix of values. The content of this
matrix depends on the chosen classification algorithm. For example, the LDA classifier sends the hyperplane
distance as its status. Given that this value depends on the chosen algorithm, you should be very careful
with the use of this output stream. Unexpected behavior may (will) occur when changing the classifier.
This output reflects the classification algorithm status in the form of a matrix of values. This output contains one or several distances
to a hyperplane if the classifier provides them. If not, the matrix will have 0 dimensions. The format of this output depends directly on
the classification algorithm and on the strategy used by the processor box.
* |OVP_DocEnd_BoxAlgorithm_ClassifierProcessor_Output2|
*
* |OVP_DocBegin_BoxAlgorithm_ClassifierProcessor_Output3|
This output reflects the classification algorithm status in the form of a matrix of values. This output contains one or several probabilities
for the data to belong to each class if the classifier provides them. If not, the matrix will have 0 dimensions. The format of this output depends directly on
the classification algorithm and on the strategy used by the processor box.
 * |OVP_DocEnd_BoxAlgorithm_ClassifierProcessor_Output3|
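Since both status outputs may be empty (0-dimension) when the classifier does not provide distances or probabilities, a consumer should check the matrix before reading it. A minimal sketch, assuming a hypothetical flat matrix stand-in (`StatusMatrix` and `pickClass` are illustrative names, not the actual OpenViBE IMatrix API):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a status output matrix: an empty value list
// models the 0-dimension case where the classifier provides no status.
struct StatusMatrix {
    std::vector<double> values;
    bool isEmpty() const { return values.empty(); }
};

// Returns the index of the most probable class from a probability matrix,
// or -1 when the classifier provided no status information.
int pickClass(const StatusMatrix& probabilities) {
    if (probabilities.isEmpty()) return -1;
    std::size_t best = 0;
    for (std::size_t i = 1; i < probabilities.values.size(); ++i)
        if (probabilities.values[i] > probabilities.values[best]) best = i;
    return static_cast<int>(best);
}
```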
__________________________________________________________________
......
......@@ -70,50 +70,45 @@ the type of those parameters is simple enough to be handled in the GUI, then add
parameters cannot be done on this page because it is impossible to know at this time which classifier, and thus which hyper
parameters, will be available. This depends on the classification algorithms that are implemented in OpenViBE.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Settings|
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting1|
The first setting of this box is the strategy to use. You can choose any registered \c OVTK_TypeId_ClassificationStrategy
strategy you want.
*
 * |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting1|
This is the stimulation that triggers the training process.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting1|
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting2|
The second setting is the classifier to use. You can choose any registered \c OVTK_TypeId_ClassifierAlgorithm
algorithm you want.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting2|
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting3|
*
 * |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting2|
This setting points to the configuration file in which the result of the training is saved for later online use. This
configuration file is used by the \ref Doc_BoxAlgorithm_ClassifierProcessor box. Its syntax
depends on the selected algorithm.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting3|
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting2|
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting4|
This is the stimulation that triggers the training process.
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting3|
This setting is the strategy to use. You can choose any registered \c OVTK_TypeId_ClassificationStrategy
strategy you want.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting3|
*
 * |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting4|
This is the stimulation to send when the classifier algorithm detects a class-1 feature vector.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting4|
*
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting5|
This is the stimulation to send when the classifier algorithm detects a class-2 feature vector.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting5|
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting6|
This setting is the classifier to use. You can choose any registered \c OVTK_TypeId_ClassifierAlgorithm
algorithm you want.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting6|
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting10|
If you want to perform a k-fold test, you should enter something else than 0 or 1 here. A k-fold test generally gives
a better estimate of the classifiers accuracy than naive testing with the training data. The idea is to divide the set of
feature vectors in a number of partitions. The classification algorithm is trained on some of the partitions and its
accuracy is tested on the others. However, the classifier produced by the box is the classifier trained with the whole
data. The cross-validation is only an error estimation tool, it does not affect the resulting model.
See the miscellaneous section for details on how the k-fold test is done in this box.
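The k-fold procedure described above can be sketched as follows. This is an illustration of the partition/hold-out idea only, not the box's actual implementation; `trainAndTest` is a hypothetical stand-in for training on the k-1 retained partitions and returning the accuracy on the held-out one:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Estimate accuracy by k-fold cross-validation: split n samples into k
// partitions, hold each partition out in turn, and average the accuracy
// reported by trainAndTest(trainIdx, testIdx). The final model is still
// trained on ALL the data; this estimate does not affect it.
double kFoldAccuracy(std::size_t n, std::size_t k,
    const std::function<double(const std::vector<std::size_t>&,
                               const std::vector<std::size_t>&)>& trainAndTest) {
    double sum = 0.0;
    for (std::size_t fold = 0; fold < k; ++fold) {
        std::vector<std::size_t> train, test;
        for (std::size_t i = 0; i < n; ++i)
            (i % k == fold ? test : train).push_back(i);  // round-robin split
        sum += trainAndTest(train, test);
    }
    return sum / static_cast<double>(k);
}
```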
*
*
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting6|
For classification algorithms that support rejection, you can choose a stimulation that reflects that the feature vector
could not be classified.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting6|
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting7|
This is the stimulation to send when the classifier algorithm detects a class-1 feature vector.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting7|
*
* |OVP_DocBegin_BoxAlgorithm_ClassifierTrainer_Setting8|
This is the stimulation to send when the classifier algorithm detects a class-2 feature vector.
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting8|
* |OVP_DocEnd_BoxAlgorithm_ClassifierTrainer_Setting10|
......@@ -157,25 +152,23 @@ You cannot use every algorithm with every decision strategy, but the interface w
\par Support Vector Machine (SVM)
A well-known classifier supporting non-linear classification via kernels. The implementation is based on LIBSVM 2.91, which is included in the OpenViBE source tree. The parameters exposed in the GUI correspond to LIBSVM parameters. For more information on LIBSVM, see <a href="http://www.csie.ntu.edu.tw/~cjlin/libsvm/">here</a>.
\par
This algorithm provides only probabilities.
\par Linear Discriminant Analysis (LDA)
A simple and fast linear classifier. For description, see any major textbook on Machine Learning or Statistics (e.g. Duda, Hart & Stork, or Hastie, Tibshirani & Friedman).
\par Probabilistic LDA
This is the same algorithm as Linear Discriminant Analysis (LDA) but generates a confidence score between 0 and 1.
\par Shrinkage LDA (sLDA)
A variant of Linear Discriminant Analysis (LDA) with regularization. The regularization is performed by shrinking the empiric covariance matrix towards a prior covariance matrix according to a method proposed by Ledoit & Wolf: "A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices", 2004. The code follows the original Matlab implementation of the authors.
\par
The shrinkage LDA classifier has the following options.
\par
\li Shrinkage: A value s between [0,1] sets a linear weight between dataCov and priorCov. I.e. cov=(1-s)*dataCov+s*priorCov. Value <0 is used to auto-estimate the shrinking coefficient (default). If var(x) is a vector of empirical variances of all data dimensions, priorCov is a diagonal matrix with a single value mean(var(x)) pasted on its diagonal.
\li Force diagonal cov: This sets the nondiagonal entries of the covariance matrices to zero.
A simple and fast linear classifier. For description, see any major textbook on Machine Learning or Statistics (e.g. Duda, Hart & Stork, or Hastie, Tibshirani & Friedman). This algorithm can be used with a regularized covariance matrix.
The Linear Discriminant Analysis classifier has the following options.
\par
\li Use shrinkage: Use a classic or a regularized covariance matrix.
\li Shrinkage: A value s between [0,1] sets a linear weight between dataCov and priorCov, i.e. cov=(1-s)*dataCov+s*priorCov.
A value <0 is used to auto-estimate the shrinking coefficient (default). If var(x) is a vector of empirical variances of all data dimensions, priorCov is a
diagonal matrix with the single value mean(var(x)) pasted on its diagonal. Used only if Use shrinkage is checked.
\li Force diagonal cov (DDA): This sets the nondiagonal entries of the covariance matrices to zero. Used only if Use shrinkage is checked.
\li Shrinkage coefficient (-1 == auto): the value of s above; a negative value auto-estimates it.
\par
Note that setting shrinkage to 0 should get you the regular LDA behavior. If you additionally force the covariance to be diagonal, you should get a model resembling the Naive Bayes classifier.
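The shrinkage formula above, cov=(1-s)*dataCov+s*priorCov with priorCov built from mean(var(x)), can be sketched for a small covariance matrix. This is an illustration of the blending step only (the box's auto-estimation of s via Ledoit & Wolf is not shown); `shrinkCovariance` is a hypothetical name:

```cpp
#include <cstddef>
#include <vector>

// Shrink an empirical covariance matrix towards a diagonal prior:
// cov = (1-s)*dataCov + s*priorCov, where priorCov is a diagonal matrix
// carrying the mean of the data variances (the diagonal of dataCov).
std::vector<std::vector<double>> shrinkCovariance(
    const std::vector<std::vector<double>>& dataCov, double s) {
    const std::size_t d = dataCov.size();
    double meanVar = 0.0;
    for (std::size_t i = 0; i < d; ++i) meanVar += dataCov[i][i];
    meanVar /= static_cast<double>(d);
    std::vector<std::vector<double>> cov(d, std::vector<double>(d, 0.0));
    for (std::size_t i = 0; i < d; ++i)
        for (std::size_t j = 0; j < d; ++j)
            cov[i][j] = (1.0 - s) * dataCov[i][j]
                      + (i == j ? s * meanVar : 0.0);  // prior is diagonal
    return cov;
}
```

With s=0 the data covariance is returned unchanged (regular LDA behavior, as noted above); with s=1 the result is the pure diagonal prior.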
\par Probabilistic Shrinkage LDA
This is the same algorithm as Shrinkage LDA (sLDA) but generates a confidence score between 0 and 1.
\par
This algorithm provides both hyperplane distances and probabilities.
Cross Validation
......