add support for model testing and evaluation (during training)
[included/replaced by SP14-item01]
Perform model testing and evaluation (shortened to "testing" for simplicity) during an experiment. Possible scenarios:
- distant testing: some sites provide training data while other sites provide testing data (or a testing dataset is hosted on the server); the global model is tested on that data
- local testing: each node splits its own data between a training and a testing subset; the global model is tested on the local testing subset
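The local testing scenario above can be sketched as follows. This is an illustrative mock-up, not Fed-BioMed code: the threshold "model", the 80/20 ratio, and all function names are assumptions.

```python
# Sketch of "local testing": split a node's data between a training and a
# testing subset, then evaluate the global model on the testing subset only.
import random

random.seed(0)
# Stand-in node dataset: (feature, label) pairs, label = 1 if feature > 0
data = [(x, int(x > 0)) for x in (random.uniform(-1, 1) for _ in range(100))]

random.shuffle(data)
split = int(0.8 * len(data))
train_set, test_set = data[:split], data[split:]   # 80/20 local split

def global_model(x):
    """Stand-in for the global model received from the server."""
    return int(x > 0.1)   # deliberately imperfect threshold

def evaluate(model, samples):
    """Accuracy of `model` on held-out samples."""
    correct = sum(model(x) == y for x, y in samples)
    return correct / len(samples)

print(f"local test accuracy: {evaluate(global_model, test_set):.2f}")
```

In the distant testing scenario the split would instead be across sites: `test_set` would live on dedicated testing nodes (or on the server) rather than being carved out of each training node's data.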
One or more of the following options for building the testing dataset:
- cross testing: a dataset is dynamically and automatically split into a training sub-dataset and a testing sub-dataset for the duration of an experiment
- 1 training dataset, 1 testing dataset: the data are statically separated beforehand, outside of the Fed-BioMed node
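The cross-testing option could look like the sketch below: a dynamic, reproducible split of one dataset into training and testing index lists at the start of an experiment. The `test_ratio` parameter and function name are assumptions for illustration, not an existing Fed-BioMed option.

```python
# Sketch of "cross testing": dynamically split a single dataset into a
# training sub-dataset and a testing sub-dataset for one experiment.
import random

def dynamic_split(samples, test_ratio=0.2, seed=42):
    """Shuffle sample indices and split them into train/test index lists."""
    rng = random.Random(seed)          # seeded so the split is reproducible
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    n_test = int(len(indices) * test_ratio)
    return indices[n_test:], indices[:n_test]

samples = list(range(50))              # stand-in for a node's dataset
train_idx, test_idx = dynamic_split(samples)
print(len(train_idx), len(test_idx))   # 40 10
```

With the second option (static separation), this split step would simply be absent: the node would register two pre-made datasets, one for training and one for testing.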
When to perform testing:
- after each round (and report the results through TensorBoard)
- at the end of the experiment
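The two reporting moments above can be sketched as a per-round loop with a pluggable reporting callback. The `report` callback is a stand-in for the actual backend (e.g. a TensorBoard writer); `run_experiment`, `test_fn`, and the fake integer metrics are illustrative assumptions.

```python
# Sketch of when testing happens: after each round (metric reported, e.g.
# to TensorBoard) and once more at the end of the experiment.
def run_experiment(n_rounds, test_fn, report):
    history = []
    for round_idx in range(n_rounds):
        # ... training / aggregation for this round would happen here ...
        metric = test_fn(round_idx)           # test the global model
        report(f"round {round_idx}", metric)  # per-round reporting
        history.append(metric)
    report("final", history[-1])              # end-of-experiment testing
    return history

logged = []
history = run_experiment(
    n_rounds=3,
    test_fn=lambda r: 50 + 10 * r,            # fake, improving accuracy (%)
    report=lambda tag, value: logged.append((tag, value)),
)
print(history)   # [50, 60, 70]
```

Either moment could be enabled independently: per-round testing gives a learning curve, while end-of-experiment testing gives a single final evaluation.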
Edited by VESIN Marc