Node does not save optimizer internal state (momentum, learning rate decay, ...) between two rounds
When using an optimizer that involves momentum, learning rate decay, or any other state tracking the optimizer's evolution, that state is reinitialized when entering the next round.
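A minimal sketch reproducing the behaviour, assuming a PyTorch training plan and simplifying the node's round loop to a plain `for` loop (the model and data here are placeholders, not the actual node code):

```python
import torch

model = torch.nn.Linear(4, 1)

for round_num in range(2):
    # The node recreates the optimizer at the start of each round, so the
    # momentum buffers accumulated during the previous round are discarded.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    print(f"round {round_num}: momentum buffers at start = {len(optimizer.state)}")  # always 0
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()  # builds momentum buffers, lost on the next round
```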
- The node should keep these parameters for the next round;
- Nodes should also send them to the researcher, so the researcher can save those values into breakpoints (a possible flow is sketched after this list).
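A possible direction, sketched with PyTorch's `state_dict()` / `load_state_dict()` API; the `run_round` function, the `previous_optim_state` argument, and the way the state is handed back for the researcher are illustrative assumptions, not the existing Fed-BioMed API:

```python
import torch

def run_round(model, previous_optim_state=None):
    """One node-side round that preserves optimizer state.

    `previous_optim_state` and the returned dict are illustrative; the
    actual message format between node and researcher may differ.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    if previous_optim_state is not None:
        # Restore momentum buffers, decay counters, etc. from the last round.
        optimizer.load_state_dict(previous_optim_state)

    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()

    # Return the state so the node can reuse it next round and also send it
    # to the researcher, which could store it in a breakpoint.
    return optimizer.state_dict()

model = torch.nn.Linear(4, 1)
state = None
for _ in range(3):
    state = run_round(model, state)
```

Returning a plain `state_dict()` also gives the researcher a serializable object that would fit naturally into the existing breakpoint mechanism.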