Feature/Model Testing During Training

This merge request contains the following changes:

  • Metrics class for computing the default evaluation metrics (fedbiomed/common/metrics.py); a usage sketch follows this list
  • Testing routine for TorchTrainingPlan and SkLearnSGDModel (see the sketch after this list)
  • Processing of the testing arguments in the Round class, to make sure everything is in order before running testing
  • Extending the add_scalar message of HistoryMonitor with more information about the real-time training/testing status
  • json.py now encodes/decodes the MetricType defined in the experiment (see the serialization sketch below)
  • Changes in fedbiomed/researcher/monitor.py
    • MetricStore -> stores and categorizes metric values received in real time (sketched below)
    • Logging training/testing metric values to the console as readably as possible
    • Categorizing plots on TensorBoard (with headers) by metric name and by phase (training/testing_global_updates/testing_localupdates)
  • Adding unit tests for:
    • metrics.py -> MetricTypes, Metrics
    • monitor.py -> _MetricStore
    • fedbiosklearn.py -> testing_routine
    • torchnn.py -> testing_routine
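
For orientation, here is a minimal usage sketch of the new Metrics/MetricTypes pair. The enum members shown and the `evaluate` signature are assumptions for illustration, not necessarily the exact API in fedbiomed/common/metrics.py:

```python
# Hypothetical usage sketch of Metrics / MetricTypes (illustrative, not the exact API).
from enum import Enum

import numpy as np
from sklearn import metrics as skm


class MetricTypes(Enum):
    """Illustrative subset of the default evaluation metrics."""
    ACCURACY = "ACCURACY"
    F1_SCORE = "F1_SCORE"
    MEAN_SQUARE_ERROR = "MEAN_SQUARE_ERROR"


class Metrics:
    """Maps a MetricTypes member to the corresponding sklearn metric function."""

    def evaluate(self, y_true, y_pred, metric: MetricTypes, **kwargs) -> float:
        if metric is MetricTypes.ACCURACY:
            return float(skm.accuracy_score(y_true, y_pred, **kwargs))
        if metric is MetricTypes.F1_SCORE:
            return float(skm.f1_score(y_true, y_pred, **kwargs))
        if metric is MetricTypes.MEAN_SQUARE_ERROR:
            return float(skm.mean_squared_error(y_true, y_pred, **kwargs))
        raise ValueError(f"Unsupported metric: {metric}")


# Default evaluation on a node-side test split
y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])
print(Metrics().evaluate(y_true, y_pred, MetricTypes.ACCURACY))  # 0.75
```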
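The testing routine added to the training plans has roughly the shape below. Method and argument names (testing_routine, history_monitor, before_train) are assumptions about this MR rather than the exact Fed-BioMed signatures:

```python
# Illustrative shape of a training-plan testing routine (names are assumptions).
import torch
from torch.utils.data import DataLoader


def testing_routine(model: torch.nn.Module,
                    test_loader: DataLoader,
                    metric,            # a Metrics instance (see the sketch above)
                    metric_type,       # a MetricTypes member
                    history_monitor,   # hypothetical monitor handle
                    before_train: bool) -> None:
    """Evaluate the model on the node's test split and report the metric value."""
    model.eval()
    y_true, y_pred = [], []
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            y_pred.extend(output.argmax(dim=1).tolist())
            y_true.extend(target.tolist())

    value = metric.evaluate(y_true, y_pred, metric_type)

    # before_train distinguishes testing on global updates (aggregated params
    # just received) from testing on local updates (after local training).
    # The keyword arguments below are assumed, not the actual add_scalar fields.
    history_monitor.add_scalar(metric=value,
                               metric_name=metric_type.name,
                               test=True,
                               test_on_global_updates=before_train)
```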
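The _MetricStore in fedbiomed/researcher/monitor.py presumably groups incoming scalar messages per node, per phase and per metric so they can be logged and plotted cleanly. A minimal sketch, with assumed keys and message fields (phase names follow the plot headers mentioned above):

```python
# Minimal sketch of a _MetricStore-like structure (keys and fields are assumptions).
from typing import Dict


class _MetricStore(dict):
    """Nested dict: node -> phase -> metric name -> round -> list of (iteration, value)."""

    def add_iteration(self, node: str, train: bool, test_on_global_updates: bool,
                      metric: Dict[str, float], round_: int, iteration: int) -> None:
        if train:
            phase = "training"
        elif test_on_global_updates:
            phase = "testing_global_updates"
        else:
            phase = "testing_localupdates"

        for name, value in metric.items():
            (self.setdefault(node, {})
                 .setdefault(phase, {})
                 .setdefault(name, {})
                 .setdefault(round_, [])
                 .append((iteration, value)))


store = _MetricStore()
store.add_iteration("node-1", train=False, test_on_global_updates=True,
                    metric={"ACCURACY": 0.75}, round_=1, iteration=1)
print(store["node-1"]["testing_global_updates"]["ACCURACY"][1])  # [(1, 0.75)]
```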
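Since MetricType is an enum, the message serializer needs an explicit rule for it. Below is a generic sketch of round-tripping a MetricTypes member through JSON by name; the helper names and fields are illustrative, not the actual functions in json.py:

```python
# Generic sketch of encoding/decoding a MetricTypes member by name (illustrative helpers).
import json
from enum import Enum


class MetricTypes(Enum):  # same illustrative enum as in the Metrics sketch above
    ACCURACY = "ACCURACY"


def encode_testing_args(metric: MetricTypes, test_ratio: float) -> str:
    """Serialize testing arguments, storing the enum member by its name."""
    return json.dumps({"test_metric": metric.name, "test_ratio": test_ratio})


def decode_testing_args(payload: str):
    """Rebuild the MetricTypes member from its serialized name."""
    args = json.loads(payload)
    return MetricTypes[args["test_metric"]], args["test_ratio"]


msg = encode_testing_args(MetricTypes.ACCURACY, 0.25)
print(decode_testing_args(msg))  # (<MetricTypes.ACCURACY: 'ACCURACY'>, 0.25)
```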