Loading breakpoints from an Experiment cleans the tensorboard directory
As a researcher, I am following this pattern to conduct federated validation on a holdout dataset:

0. load both the training and holdout sets on each node
1. conduct the `TrainingExperiment` for N rounds, saving breakpoints
2. load the last breakpoint into a new experiment called `HoldoutExperiment`; set the `test_ratio` argument of `HoldoutExperiment` to `1.0`, and set a couple of other arguments to make sure it only performs global validation
3. run the `HoldoutExperiment`
4. if needed, go back to point 1 and repeat
Unfortunately, when I go back to point 1 I lose the tensorboard logs from my previous rounds of `TrainingExperiment`.
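As a stopgap I am considering snapshotting the tensorboard log directory before loading the breakpoint, so the cleanup cannot destroy earlier rounds. A minimal stdlib sketch of that idea follows; the `runs/` path and the `round_tag` naming are hypothetical, and would need to be adapted to wherever Fed-BioMed actually writes its tensorboard events:

```python
import shutil
from pathlib import Path


def backup_tensorboard_logs(logs_dir: str, backup_root: str, round_tag: str) -> Path:
    """Copy the tensorboard log directory aside before loading a breakpoint,
    so that the experiment cannot wipe logs from earlier training rounds.

    Returns the path of the backup copy.
    """
    src = Path(logs_dir)
    # Keep one backup per round; dirs_exist_ok lets a re-run overwrite it.
    dst = Path(backup_root) / f"tensorboard_{round_tag}"
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst
```

One would call this just before `load_breakpoint`, and then point tensorboard at the backup root to browse all rounds together. This is clearly a workaround rather than a fix, which is why I am asking whether the workflow itself is wrong.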
It is possible that this is not the recommended way to conduct federated validation on a holdout set, although to me it seemed like the most natural approach given the current tools implemented in Fed-BioMed.