Use the declearn optimizer on the researcher side (aggregation)

Declearn provides a researcher-side optimizer (not yet available in FedBioMed).

Goals: discuss the following points with the team:
- How to integrate the researcher optimizer in FedBioMed: what would the API be? In the PoC, we have the following:
```python
from fedbiomed.common.optimizer import Optimizer
from fedbiomed.common.optimizer.modules import MomentumModule

researcher_opt = Optimizer(
    lrate=1.0,
    modules=[MomentumModule()],
)

exp = Experiment(
    tags=tags,
    model_args=model_args,
    training_plan=training_plan,  # assuming training_plan has been defined previously
    training_args=training_args,
    round_limit=rounds,
    aggregator=FedAverage(researcher_opt),  # pass the optimizer
    node_selection_strategy=None,
)
```
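To make the discussion concrete, here is a rough sketch of what `FedAverage(researcher_opt)` could do internally: average the node updates, then let the researcher optimizer compute the global step. This is a hypothetical stand-in (scalar weights, simplified `Optimizer`, `MomentumModule`, and `FedAverage` classes), not the actual FedBioMed or declearn implementation.

```python
# Hypothetical sketch; these minimal classes stand in for the PoC's
# Optimizer / module / aggregator types and are NOT the real implementation.

class MomentumModule:
    """Plain SGD momentum applied to the aggregated pseudo-gradient."""
    def __init__(self, beta=0.9):
        self.beta = beta
        self.velocity = 0.0

    def run(self, gradient):
        self.velocity = self.beta * self.velocity + gradient
        return self.velocity

class Optimizer:
    """Chains plug-in modules, then scales by the learning rate."""
    def __init__(self, lrate, modules):
        self.lrate = lrate
        self.modules = modules

    def step(self, gradient):
        for module in self.modules:
            gradient = module.run(gradient)
        return self.lrate * gradient

class FedAverage:
    """FedAvg that optionally refines the mean update with an optimizer."""
    def __init__(self, optimizer=None):
        self.optimizer = optimizer

    def aggregate(self, global_weight, node_updates):
        # Plain FedAvg: mean of the node updates (deltas w.r.t. the global model).
        mean_update = sum(node_updates) / len(node_updates)
        if self.optimizer is None:
            return global_weight + mean_update
        # Treat the negated mean update as a pseudo-gradient and let the
        # researcher-side optimizer compute the actual global step.
        return global_weight - self.optimizer.step(-mean_update)
```

With `lrate=1.0` and fresh momentum state, the first round reduces to plain FedAvg, which is the behaviour users would expect from `FedAverage` without an optimizer.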
- Do we provide a way to add a PyTorch optimizer on the researcher side too? This can be done thanks to a new declearn feature.
- How to set Scaffold or other aggregators? In the PoC, it is done this way:
```python
from fedbiomed.common.optimizer import Optimizer
from fedbiomed.common.optimizer.modules import ScaffoldServerModule, MomentumModule

researcher_opt = Optimizer(
    lrate=1.0,
    modules=[ScaffoldServerModule(), MomentumModule()],
)

exp = Experiment(
    tags=tags,
    model_args=model_args,
    training_plan=training_plan,  # assuming training_plan has been defined previously
    training_args=training_args,
    round_limit=rounds,
    aggregator=FedAverage(researcher_opt),  # pass the optimizer
    node_selection_strategy=None,
)
```
This syntax may be confusing: the aggregator passed is `FedAverage`, yet the user actually wants to use Scaffold. We could provide a `DeclearnScaffold` aggregator that inherits from `FedAverage` with a pre-defined `Optimizer`.
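A minimal sketch of what such a convenience aggregator could look like. The `DeclearnScaffold` name comes from the note above; the `Optimizer`, `ScaffoldServerModule`, and `FedAverage` classes are simplified stand-ins for the PoC types (defined inline here so the sketch runs), not the real FedBioMed classes.

```python
# Hypothetical stand-ins for the PoC classes, so the sketch is self-contained.
class ScaffoldServerModule:
    """Stand-in for the server-side Scaffold correction module."""
    def run(self, gradient):
        return gradient  # a real module would maintain control variates

class Optimizer:
    def __init__(self, lrate, modules):
        self.lrate = lrate
        self.modules = modules

class FedAverage:
    def __init__(self, optimizer=None):
        self.optimizer = optimizer

class DeclearnScaffold(FedAverage):
    """FedAverage pre-configured with a server-side Scaffold optimizer,
    so users don't have to assemble the Optimizer themselves."""
    def __init__(self, lrate=1.0, extra_modules=None):
        modules = [ScaffoldServerModule()] + list(extra_modules or [])
        super().__init__(optimizer=Optimizer(lrate=lrate, modules=modules))
```

The user would then write `aggregator=DeclearnScaffold()` instead of manually building `FedAverage(researcher_opt)`, which removes the FedAverage/Scaffold naming confusion.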
IMPORTANT: make sure the breakpoints feature still works -> add `get_state` and `save_state` methods to the researcher optimizer.
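A sketch of what those breakpoint hooks could look like. Only the method names `get_state` and `save_state` come from the note; everything else (the JSON serialization, the `load_state` counterpart, the `MomentumModule` stand-in) is an assumption for illustration.

```python
# Hypothetical breakpoint support for the researcher-side optimizer.
import json

class MomentumModule:
    """Stand-in module whose running state must survive a breakpoint."""
    def __init__(self, beta=0.9):
        self.beta = beta
        self.velocity = 0.0

    def get_state(self):
        return {"beta": self.beta, "velocity": self.velocity}

    def load_state(self, state):
        self.beta = state["beta"]
        self.velocity = state["velocity"]

class Optimizer:
    def __init__(self, lrate, modules):
        self.lrate = lrate
        self.modules = modules

    def get_state(self):
        # Collect everything needed to resume the experiment from a breakpoint.
        return {
            "lrate": self.lrate,
            "modules": [m.get_state() for m in self.modules],
        }

    def save_state(self, path):
        # Serialize the state to disk as part of the breakpoint.
        with open(path, "w") as f:
            json.dump(self.get_state(), f)

    def load_state(self, state):
        # Restore learning rate and per-module state when resuming.
        self.lrate = state["lrate"]
        for module, mod_state in zip(self.modules, state["modules"]):
            module.load_state(mod_state)
```

Without these hooks, module state such as momentum velocity or Scaffold control variates would be silently reset when an experiment resumes from a breakpoint.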