- Oct 10, 2023
  - ANDREY Paul authored (2 commits)
- Oct 05, 2023
  - ANDREY Paul authored
  - ANDREY Paul authored:
    - Drop support for Torch 1.10-1.12, which complicated the installation rules for functorch.
    - Make the "torch", "torch1" and "torch2" specifiers deal only with Torch itself, leaving Opacus to the "dp" specifier, as it should be.
    - Have functorch be installed together with torch, removing some burden from our specifiers and enabling support for the new Torch 2.1 (a sketch of the resulting extras layout follows).
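A minimal sketch of the resulting extras layout, written as a hypothetical setuptools-style `extras_require` mapping: the extra names ("torch", "torch1", "torch2", "dp") come from the commit message above, while the exact version pins are illustrative assumptions.

```python
# Hypothetical setup.py excerpt sketching the extras described above.
# Extra names come from the commit message; version pins are assumed.
from setuptools import setup

setup(
    name="declearn",
    extras_require={
        # Torch-only extras: functorch ships within torch >= 1.13,
        # so no separate functorch requirement is needed anymore.
        "torch": ["torch>=1.13,<2.2"],
        "torch1": ["torch>=1.13,<2.0"],
        "torch2": ["torch>=2.0,<2.2"],
        # DP extra: Opacus is now required here, and only here.
        "dp": ["opacus"],
    },
)
```

Under such a layout, `pip install "declearn[torch2,dp]"` would pull a recent Torch (with its bundled functorch) together with Opacus.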
- Sep 26, 2023
  - ANDREY Paul authored: Add 'verbose' argument to 'FederatedClient' and 'TrainingManager' (see merge request !59; a usage sketch follows below).
  - ANDREY Paul authored: Enable recording training loss values (see merge request !58).
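A hedged usage sketch covering both changes above: only the `verbose` keyword is documented by the merge-request titles; the import path, the other constructor arguments, and the way recorded loss values surface are assumptions for illustration.

```python
# Hypothetical usage sketch: 'verbose' comes from the merge request
# title; the import path and other arguments are assumptions.
from declearn.main import FederatedClient

def run_quiet_client(network_config, train_dataset):
    """Run a federated client with per-round logs silenced."""
    client = FederatedClient(
        netwk=network_config,      # assumed network configuration argument
        train_data=train_dataset,  # assumed local dataset argument
        verbose=False,             # new flag: silence verbose logging
    )
    # Training loss values are now recorded along the way (e.g. among
    # checkpointed metrics) rather than merely printed and discarded.
    client.run()  # assumed entry point
```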
- Sep 21, 2023
  - ANDREY Paul authored (2 commits)
  - ANDREY Paul authored:
    - Until now, the public methods for training and evaluation using the `declearn.main.utils.TrainingManager` class (and its DP-SGD counterpart) required message-wrapped inputs and emitted similar outputs.
    - With this commit, the previously-private `train_under_constraints` and `evaluate_under_constraints` routines are made public, and thus part of the declearn API, with some minor refactoring to make them more user-friendly.
    - The rationale for this change is to enable using `TrainingManager` outside of our `FederatedClient`/`FederatedServer` orchestration, notably when simulating FL training or testing client-side code (a usage sketch follows). It may also be helpful to end-users who would like to build on declearn but implement their own orchestration tools or algorithm loops.
    - In the future (declearn >= 3.0), we may go one step further and take all the messaging-related instructions out of the current class. The class may also be moved to a different namespace, e.g. a new 'declearn.train' module; but this is entirely out of scope for now.
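A sketch of what the now-public routines enable outside of the client/server orchestration: the class and method names come from the commit message above, but the constructor arguments and method signatures are assumptions.

```python
# Hypothetical local-simulation sketch: class and method names come from
# the commit message; constructor arguments and signatures are assumed.
from declearn.main.utils import TrainingManager

def simulate_local_round(model, optim, train_data):
    """Run one local train/evaluate round without any messaging layer."""
    manager = TrainingManager(
        model=model,            # assumed: a declearn Model wrapper
        optim=optim,            # assumed: a declearn Optimizer
        train_data=train_data,  # assumed: a declearn Dataset
    )
    # Train under effort constraints, without message-wrapped inputs.
    manager.train_under_constraints(n_epoch=1)  # assumed signature
    # Evaluate likewise, getting results back directly.
    return manager.evaluate_under_constraints()  # assumed return value
```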
  - ANDREY Paul authored (4 commits)
- Sep 20, 2023
  - ANDREY Paul authored
- Sep 06, 2023
  - ANDREY Paul authored (4 commits)
  - ANDREY Paul authored:
    - The use of 'functorch.compile' over a function that takes variable-size batch inputs proves impossible, as tracing on the first call creates a computation graph with fixed dimensions.
    - As a result, the tentative compilation of the per-sample clipped-gradients computation prevents the proper use of DP-SGD with the functorch backend.
    - An alternative attempt was to compile the sample-wise function and vmap it afterwards, but this is currently unsupported (and unlikely to ever be, as functorch development efforts have moved to 'torch.func').
    - This commit therefore drops the use of 'functorch.compile' (an eager per-sample-gradients sketch follows).
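To illustrate what remains after dropping the compilation step, here is a sketch of eager per-sample gradient computation with functorch's `grad` and `vmap`: run eagerly, the batch size may vary freely between calls, which is precisely what tracing-based compilation could not accommodate. The toy model and loss are illustrative.

```python
# Eager per-sample gradients with functorch (no 'functorch.compile'):
# tracing would bake the first batch's shape into the graph, whereas
# eager evaluation accepts a different batch size on every call.
import torch
from functorch import grad, make_functional, vmap

model = torch.nn.Linear(4, 1)  # toy model, for illustration only
fmodel, params = make_functional(model)

def sample_loss(params, inputs, target):
    """Compute the loss on a single sample, as a batch of one."""
    pred = fmodel(params, inputs.unsqueeze(0))
    return torch.nn.functional.mse_loss(pred, target.unsqueeze(0))

# vmap over the batch dimension of inputs/targets, not over params.
per_sample_grads = vmap(grad(sample_loss), in_dims=(None, 0, 0))

for batch_size in (8, 3):  # variable batch sizes are fine in eager mode
    x, y = torch.randn(batch_size, 4), torch.randn(batch_size, 1)
    grads = per_sample_grads(params, x, y)  # one gradient set per sample
```

A DP-SGD implementation would go on to clip each sample's gradients and add calibrated noise, but that part is unaffected by this commit.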
- Sep 01, 2023
  - ANDREY Paul authored (2 commits)
- Aug 31, 2023
  - ANDREY Paul authored (10 commits)
- Aug 30, 2023
  - ANDREY Paul authored: Improve test coverage and fix bugs uncovered by the tests (see merge request !57).
  - ANDREY Paul authored (8 commits)