ENH add early stopping for dictionary learning
I just noticed that there is no convergence criterion in trainDL
to stop the optimization.
Would it be interesting to add a convergence criterion such as
\|D^{(j+1)} - D^{(j)}\|_2 < \epsilon
to the library (with a default of \epsilon = 0,
to avoid changing the current behavior)?
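For illustration, here is a minimal sketch of what such a criterion could look like from outside the library, using warm restarts. It assumes `spams.trainDL` accepts an initial dictionary via `D=`, an iteration count via `iter=`, and the `return_model=`/`model=` pair described in the SPAMS Python docs for resuming training; the Frobenius norm stands in for the \|.\|_2 above, and `eps=0` reproduces today's never-stop behavior:

```python
import numpy as np
import spams

def train_dl_early_stop(X, K, lam, max_iter=1000, check_every=10, eps=1e-4):
    X = np.asfortranarray(X)  # SPAMS expects Fortran-ordered arrays
    D, model = spams.trainDL(X, K=K, lambda1=lam, iter=check_every,
                             return_model=True)
    for _ in range(max_iter // check_every - 1):
        D_new, model = spams.trainDL(X, K=K, lambda1=lam, iter=check_every,
                                     D=np.asfortranarray(D),
                                     model=model, return_model=True)
        # Frobenius norm as a stand-in for ||D^{(j+1)} - D^{(j)}||_2 < eps
        if np.linalg.norm(D_new - D) < eps:
            return D_new
        D = D_new
    return D
```

A criterion implemented inside the library would of course avoid the restart overhead entirely.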
If the issue is performance (checking the criterion at every iteration could slow down the run), one could for instance check it only at certain iterations spaced on a log scale, or once per epoch (sketched below).
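A hypothetical checkpoint schedule along those lines, so the overhead stays negligible for long runs:

```python
import numpy as np

def log_checkpoints(max_iter, n_checks=20):
    # Iterations at which to evaluate the criterion, log-spaced
    # between 1 and max_iter (deduplicated after rounding).
    pts = np.logspace(0, np.log10(max_iter), num=n_checks)
    return sorted(set(int(round(p)) for p in pts))

print(log_checkpoints(1000))  # e.g. [1, 2, 3, 4, 6, 8, 11, ..., 1000]
```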
This could be useful when one wants to reach convergence without waiting too long: set a very large number of iterations, but stop once the dictionary no longer moves.