Commit 86d6814c authored by Jussi Lindgren

Doc: Updated the Regularized CSP box documentation

parent 49cf3824
__________________________________________________________________
In principle CSP can be useful in two-class experiments where there is discriminative information in the variance (or power) of the signal conditions.
The box implements some of the methods described by Lotte & Guan [1]. In particular, it
allows using the methods called "CSP with Diagonal Loading" and "CSP with Tikhonov Regularization" in the paper. In the approach of
the paper, the filters for condition 1 are found by eigenvector decomposition of inv(Sigma1+rho*I)*Sigma2, where Sigma1 and Sigma2 are
empirical covariance matrices for the two conditions, rho is the amount of Tikhonov regularization, and I is an identity matrix. For condition 2,
the filters are obtained analogously, with the roles of Sigma1 and Sigma2 swapped.
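
As a rough illustration of this decomposition, here is a Python/numpy sketch (not the box's actual implementation; the function name, the default filter count, and the eigenvector-selection convention are this example's own):

\code{.py}
import numpy as np

def csp_filters(sigma1, sigma2, rho=0.0, n_filters=3):
    # Filters for condition 1: eigenvectors of inv(sigma1 + rho*I) @ sigma2.
    # sigma1 + rho*I is symmetric positive definite, so the product has
    # real, nonnegative eigenvalues even though it is not symmetric itself.
    n = sigma1.shape[0]
    m = np.linalg.solve(sigma1 + rho * np.eye(n), sigma2)
    eigvals, eigvecs = np.linalg.eig(m)
    # Ascending order: small eigenvalues favor condition-1 variance
    # over condition-2 variance (which end to keep is a convention).
    order = np.argsort(eigvals.real)
    return eigvecs[:, order[:n_filters]].real

# Condition-2 filters: the same decomposition with the matrices swapped,
# e.g. w2 = csp_filters(sigma2, sigma1, rho=0.1).
\endcode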
To avoid caching the whole dataset in the box, the algorithm tries to estimate the required covariances incrementally. The box supports
a few different incremental ways to compute the covariances. The 'block average' approach takes an average of all the covariances of the incoming
signal chunks, an approach described in [2], whereas the incremental (per sample) method aims to implement the Youngs & Cramer
algorithm as described in [3]. Which one gives better results may depend on the situation -- however,
taking an average of covariance matrices is not the usual textbook method for computing the covariance over the whole data.
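
For illustration, here is a small numpy sketch of the two estimators (the function names and the chunk layout are this example's own, not the box's):

\code{.py}
import numpy as np

def chunk_stats(chunk):
    # (count, mean, M2) for one chunk of shape (samples, channels);
    # M2 is the summed outer product of deviations, so cov = M2 / (n - 1).
    mu = chunk.mean(axis=0)
    d = chunk - mu
    return len(chunk), mu, d.T @ d

def combine(a, b):
    # Pairwise merge of two accumulators, as in Chan, Golub & LeVeque [3].
    n_a, mu_a, m2_a = a
    n_b, mu_b, m2_b = b
    n = n_a + n_b
    delta = mu_b - mu_a
    mu = mu_a + delta * (n_b / n)
    m2 = m2_a + m2_b + np.outer(delta, delta) * (n_a * n_b / n)
    return n, mu, m2

rng = np.random.default_rng(0)
chunks = [rng.standard_normal((64, 8)) for _ in range(10)]  # 10 chunks, 8 channels

# 'Block average': mean of the per-chunk covariance estimates.
block_avg = np.mean([np.cov(c, rowvar=False) for c in chunks], axis=0)

# Incremental: one covariance over all samples via pairwise merging.
stats = chunk_stats(chunks[0])
for c in chunks[1:]:
    stats = combine(stats, chunk_stats(c))
incremental = stats[2] / (stats[0] - 1)
\endcode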
Finally, the box also presents an option to try to compensate for the effect of changes in the average signal power over time. This
can be done by enabling the 'Trace Normalization' setting. When trace normalization is enabled, each data chunk's contribution to
the covariance gets divided by its trace. See the miscellaneous notes for details.
* |OVP_DocEnd_BoxAlgorithm_RegularizedCSPTrainer_Description|
__________________________________________________________________
The suitable amount of regularization may depend on the variance of the data. You may need to try different values to find the one that suits your situation best.
Before the CSP training, it may be useful to temporally filter the input data to remove bands which are believed to have no relevant discriminative information.
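
For example, one might band-pass the signal before training; here is a scipy sketch (the 8-30 Hz band is just a common motor-imagery choice, not a recommendation of this box):

\code{.py}
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, lo=8.0, hi=30.0, order=4):
    # Zero-phase Butterworth band-pass; data has shape (samples, channels).
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=0)
\endcode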
Note that using the CSP filters before classifier training can make the cross-validation results optimistic, unless strictly non-overlapping parts of the data were used to train the CSP and the classifier (disjoint sets for each).
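
A minimal sketch of such a disjoint split (the trial count and names are made up for the example):

\code{.py}
import numpy as np

rng = np.random.default_rng(0)
idx = rng.permutation(100)             # 100 trials, hypothetical
csp_idx, clf_idx = idx[:50], idx[50:]  # disjoint sets: no trial trains both
\endcode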
Trace normalization can be found in the literature [2]. Presumably the idea is to normalize the
scale of each chunk in order to compensate for a possible signal power drift over time during the EEG recording,
making each chunk's covariance contribute similarly to the aggregate regardless of the current chunk's average power.
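
A short sketch of the idea, assuming per-chunk covariances are averaged as described above:

\code{.py}
import numpy as np

def trace_normalized_cov(chunk):
    # Per-chunk covariance divided by its trace: every chunk contributes
    # a unit-trace term to the aggregate, regardless of its average power.
    c = np.cov(chunk, rowvar=False)
    return c / np.trace(c)
\endcode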
To get the "CSP with Diagonal Loading" of Lotte & Guan paper, set shrinkage to a positive value and Tikhonov to 0. To get the
"CSP with Tikhonov regularization", do the opposite. You can also try a mixture of the two. Note that the Guan & Lotte paper does not use trace normalization.
To get the "CSP with Diagonal Loading" of Lotte & Guan paper [1], set shrinkage to a positive value and Tikhonov to 0. To get the
"CSP with Tikhonov regularization", do the opposite. You can also try a mixture of the two. Note that the Guan & Lotte paper does not appear to use trace normalization.
Once the spatial filters are computed and saved in the configuration file, you can load the configuration into the \ref Doc_BoxAlgorithm_SpatialFilter "Spatial Filter" box.
For the moment the Regularized CSP Trainer supports only two classes.
References
1) F. Lotte & C. Guan, "Regularizing Common Spatial Patterns to Improve BCI Designs: Unified Theory and New Algorithms", IEEE Transactions on Biomedical Engineering, 2011.
2) J. Müller-Gerking, G. Pfurtscheller & H. Flyvbjerg, "Designing optimal spatial filters for single-trial EEG classification in a movement task", Clinical Neurophysiology, 1999.
3) T. F. Chan, G. H. Golub & R. J. LeVeque, "Updating Formulae and a Pairwise Algorithm for Computing Sample Variances", Technical Report, Stanford University, 1979.
* |OVP_DocEnd_BoxAlgorithm_RegularizedCSPTrainer_Miscellaneous|
*/