Commit 7cbbc34f authored by Jussi Lindgren

Doc: Minor tweaks to the Regularized CSP documentation

parent 535caf60
@@ -115,13 +115,15 @@ __________________________________________________________________
 Note that the usage of the CSP filters before classification training can make the cross-validation results optimistic, unless strictly non-overlapping parts of the data were used to train the CSP and the classifier (disjoint sets for each).
-The trace normalization can be found in the literature [2]. Presumably the idea is to normalize the
-scale of each chunk in order to compensate for a possible signal power drift over time during the EEG recording,
-making each chunks' covariance contribute similarly to the aggregate regardless of the current chunks average power.
+The trace normalization can be found in the literature [2]. The idea is to normalize the scale of each chunk in order to
+compensate for a possible signal power drift over time during the EEG recording, making each chunk's covariance contribute
+similarly to the aggregate regardless of the current chunk's average power.
 To get the "CSP with Diagonal Loading" of the Lotte & Guan paper [1], set shrinkage to a positive value and Tikhonov to 0. To get the
 "CSP with Tikhonov regularization", do the opposite. You can also try a mixture of the two. Note that the Lotte & Guan paper does not appear to use trace normalization.
+To get a CSP resembling the one in the Muller-Gerkin paper, set Trace Normalization to True and the Covariance method to Chunk Average, with no regularization. Then, feed the algorithm each trial as a separate chunk (with the Stimulation based epoching box). This is also the classic OV way of computing the CSP.
 Once the spatial filters are computed and saved in the configuration file, you can load the configuration into the \ref Doc_BoxAlgorithm_SpatialFilter "Spatial Filter" box.
 For the moment the Regularized CSP Trainer supports only two classes.
......
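The "CSP with Diagonal Loading" setting described above shrinks the class covariance towards a scaled identity. A minimal sketch of one common form of diagonal loading, C_reg = (1 - gamma) * C + gamma * (trace(C)/n) * I (hypothetical helper names, not the OpenViBE API):

```cpp
// Sketch only: diagonal loading / shrinkage of a covariance matrix.
// The target is the identity scaled by the average eigenvalue trace(C)/n,
// which keeps the trace of the matrix unchanged.
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

double trace(const Matrix& C)
{
	double t = 0.0;
	for (std::size_t i = 0; i < C.size(); ++i) { t += C[i][i]; }
	return t;
}

// gamma in [0,1]: 0 = no regularization, 1 = pure scaled identity.
Matrix diagonalLoading(const Matrix& C, double gamma)
{
	const std::size_t n = C.size();
	const double nu = trace(C) / static_cast<double>(n); // average eigenvalue
	Matrix R = C;
	for (std::size_t i = 0; i < n; ++i)
	{
		for (std::size_t j = 0; j < n; ++j)
		{
			R[i][j] = (1.0 - gamma) * C[i][j] + (i == j ? gamma * nu : 0.0);
		}
	}
	return R;
}
```

With gamma = 0.5, the off-diagonal entries are halved while the diagonal is pulled towards the average eigenvalue; larger gamma makes the CSP eigenproblem better conditioned at the cost of flattening the spatial structure.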
@@ -127,8 +127,8 @@ OpenViBE::boolean CAlgorithmOnlineCovariance::process(void)
 if(ip_bTraceNormalization)
 {
-	// The origin of this trick has been lost. Presumably the idea is to normalize the scale of
-	// each chunk in order to compensate for possible signal power drift over time during the EEG recording,
+	// This normalization can be seen e.g. in Muller-Gerkin & al., 1999. Presumably the idea is to normalize the
+	// scale of each chunk in order to compensate for possible signal power drift over time during the EEG recording,
 	// making each chunk's covariance contribute similarly to the average regardless of
 	// the current average power. Such a normalization could also be implemented in its own
 	// box and not done here.
......