Implement DP with refactored `TorchTrainingPlan`
This issue has been created as an extension of

- Clarify the researcher's role in defining a training plan with differential privacy:
  - Which methods should be defined by the researcher
  - Which methods need to be provided in the base `TorchTrainingPlan`
- The method `validate_and_fix_model`: this method is currently implemented in the notebook (in the training plan defined by the researcher). To clarify its implementation:
  - Should the researcher be responsible for creating this method?
  - Is this method model (NN) dependent, i.e. can it change based on different PyTorch implementations?

  ```python
  def validate_and_fix_model(self):
      # Validate and fix the model to be DP-compliant
      if not ModuleValidator.is_valid(self.model):
          print('######################################## Fixing Model ########################################')
          self.model = ModuleValidator.fix(self.model)
  ```
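If this check is moved into the base `TorchTrainingPlan`, one option is to call it automatically at the start of training so the researcher never has to write it. A minimal sketch of that pattern, assuming the class names below (and using a pure-Python stand-in for `opacus.validators.ModuleValidator`, which exposes `is_valid` and `fix` as used in the notebook):

```python
class _StubModuleValidator:
    """Stand-in for opacus.validators.ModuleValidator (assumption: the
    real class exposes is_valid() and fix() as used in the notebook)."""
    @staticmethod
    def is_valid(model):
        return getattr(model, "dp_compliant", False)

    @staticmethod
    def fix(model):
        model.dp_compliant = True  # the real fix() swaps non-compliant layers
        return model


class TorchTrainingPlanSketch:
    """Hypothetical base class: the DP check runs inside the framework's
    training routine, so researchers do not define it themselves."""
    def __init__(self, model):
        self.model = model

    def validate_and_fix_model(self):
        # Validate and fix the model to be DP-compliant
        if not _StubModuleValidator.is_valid(self.model):
            self.model = _StubModuleValidator.fix(self.model)

    def training_routine(self):
        self.validate_and_fix_model()  # called once, by the base class
        # ... actual training would follow here


class Model:
    dp_compliant = False  # placeholder for a non-DP-compliant nn.Module


plan = TorchTrainingPlanSketch(Model())
plan.training_routine()
print(plan.model.dp_compliant)  # → True
```

This keeps the method model-agnostic: the base class only delegates to the validator, and any model-specific behavior stays inside Opacus's fix logic.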
- The method `make_optimizer`: this method is a new feature of the training plan that allows the researcher to define a custom optimizer. However, the optimizer can also be defined directly in the `__init__` of the training plan as a `Callable` class or function.

  ```python
  def make_optimizer(self, lr):
      self.optimizer = torch.optim.Adam(self.model.parameters(), lr=lr)
  ```
  Proposal:

  ```python
  class MyTrainingPlan(TorchTrainingPlan):
      def __init__(self, ...):
          self.optimizer = torch.optim.Adam

      def training_routine(...):
          self.optimizer = self.optimizer(self.model.parameters(), **optimizer_arguments)
  ```
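The key point of the proposal is that `__init__` stores the optimizer *class* (a `Callable`), and the training routine instantiates it lazily once the model parameters exist; note that `model.parameters` must actually be called. A runnable sketch of just this pattern, with a hypothetical `StubAdam` standing in for `torch.optim.Adam` (same call shape: parameters first, then keyword arguments):

```python
class StubAdam:
    """Stand-in for torch.optim.Adam: takes an iterable of parameters
    plus keyword arguments such as lr."""
    def __init__(self, params, lr=0.001):
        self.params = list(params)
        self.lr = lr


class MyTrainingPlanSketch:
    def __init__(self):
        # Store the optimizer *class*, not an instance: the model's
        # parameters are not available yet at construction time.
        self.optimizer = StubAdam

    def model_parameters(self):
        # Placeholder for self.model.parameters() — note the call.
        return [0.1, 0.2]

    def training_routine(self, optimizer_arguments):
        # Instantiate the stored callable with the researcher's arguments
        self.optimizer = self.optimizer(self.model_parameters(), **optimizer_arguments)


plan = MyTrainingPlanSketch()
plan.training_routine({"lr": 0.01})
print(plan.optimizer.lr)  # → 0.01
```

This keeps `make_optimizer` unnecessary: the researcher only names the optimizer class and its arguments, and the framework decides when to construct it.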
- Refactor the `preprocess` implementation. The method `add_preprocess` was created for the end-user to define/declare a preprocess method for the training `DataLoader`. Therefore, internal preprocess actions should not be implemented through the `add_preprocess` method. This is currently how preprocess is implemented for DP, and it needs to be fixed.
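One way to keep `add_preprocess` reserved for end-users while still applying the framework's own DP transformations is to maintain two separate lists of preprocess callables. A minimal sketch of that separation (all names here are hypothetical, not the actual Fed-BioMed API):

```python
class PreprocessSketch:
    """Illustrative only: separates user-registered preprocesses from
    internal (framework-added, e.g. DP) ones."""
    def __init__(self):
        self._user_preprocesses = []      # registered via add_preprocess
        self._internal_preprocesses = []  # added by the framework itself

    def add_preprocess(self, fn):
        """Public API: only end-user preprocess functions go here."""
        self._user_preprocesses.append(fn)

    def _add_internal_preprocess(self, fn):
        """Framework-only hook for DP (and similar) transformations."""
        self._internal_preprocesses.append(fn)

    def _apply_preprocesses(self, loader):
        # Internal steps run first, then user-defined ones
        for fn in self._internal_preprocesses + self._user_preprocesses:
            loader = fn(loader)
        return loader


plan = PreprocessSketch()
plan._add_internal_preprocess(lambda data: [x * 2 for x in data])  # DP stand-in
plan.add_preprocess(lambda data: [x + 1 for x in data])            # user-defined
print(plan._apply_preprocesses([1, 2]))  # → [3, 5]
```

With this split, the DP wrapping of the `DataLoader` never touches the user-facing `add_preprocess` registry, which resolves the conflict described above.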