- Mar 25, 2023
-
-
ANDREY Paul authored
This commit is a port of Merge Request !40 to the release branch of declearn v2.1. See commit ae6ec5cd on the develop branch, as well as merge request !40, for additional details.
-
- Mar 02, 2023
-
-
BIGAUD Nathan authored
Functional test of regression. See merge request !28
-
- Mar 01, 2023
-
-
ANDREY Paul authored
Revise git branching strategy. See merge request !35
-
ANDREY Paul authored
* Keep "develop" as principal (~main) branch where to push new features, hence acting as a nightly stable version. * Drop "main" branch from the strategy as its name was misleading. * Introduce "rX.Y" release branches, that need protection and CI/CD.
-
ANDREY Paul authored
Enhance support for 'tf.IndexedSlices' in 'TensorflowVector'. Closes #17. See merge request !33
-
ANDREY Paul authored
* Implement a public util to wrap tensorflow operations in order to preserve `tf.IndexedSlices` structures and run appropriate computations with them.
* Deploy the former wrapper to cover all usual operations in the backend of `TensorflowVector`.
* Refactor the use of the device-placement-handling wrapper in the backend of `TensorflowVector`, to reduce runtime overheads and factor it with the new indexed-slices-handling wrapper.
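For illustration, a minimal sketch of such a wrapping util, assuming a hypothetical `preserve_slices` helper (the actual declearn implementation differs):

```python
import tensorflow as tf


def preserve_slices(func):
    """Wrap a tensorflow op so that `tf.IndexedSlices` inputs keep their structure.

    Hypothetical sketch: apply `func` to the `.values` field of IndexedSlices
    inputs and rebuild an IndexedSlices output, rather than densifying them.
    """
    def wrapped(tensor, *args, **kwargs):
        if isinstance(tensor, tf.IndexedSlices):
            values = func(tensor.values, *args, **kwargs)
            return tf.IndexedSlices(values, tensor.indices, tensor.dense_shape)
        return func(tensor, *args, **kwargs)
    return wrapped


# Example: an element-wise square that preserves sparse gradient structure.
square_op = preserve_slices(tf.square)
```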
-
- Feb 28, 2023
-
-
ANDREY Paul authored
Change 'ScaffoldClient.collect_aux_var' behavior on unused module. See merge request !31
-
- Feb 23, 2023
-
-
ANDREY Paul authored
Implement framework-specific OptiModule subclasses. See merge request !15
-
ANDREY Paul authored
The implemented functions are a convenience to reduce boilerplate code in unit tests, and improve the existing JSON-serializability check. They notably support comparing numpy arrays (optionally tolerating small absolute discrepancies), and ignoring tuple-to-list type changes that typically result from JSON serialization.
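A minimal sketch of the kind of comparison helper described, with an assumed name and signature (not declearn's actual util):

```python
import json
from typing import Any

import numpy as np


def assert_json_equivalent(left: Any, right: Any, atol: float = 1e-8) -> None:
    """Recursively assert that two (possibly nested) values are equivalent.

    Hypothetical sketch: numpy arrays are compared up to an absolute
    tolerance, and tuples compare equal to lists, so that values surviving
    a JSON serialization round-trip still match.
    """
    if isinstance(left, np.ndarray) or isinstance(right, np.ndarray):
        np.testing.assert_allclose(np.asarray(left), np.asarray(right), atol=atol)
    elif isinstance(left, (list, tuple)) and isinstance(right, (list, tuple)):
        assert len(left) == len(right)
        for l_val, r_val in zip(left, right):
            assert_json_equivalent(l_val, r_val, atol=atol)
    elif isinstance(left, dict) and isinstance(right, dict):
        assert left.keys() == right.keys()
        for key in left:
            assert_json_equivalent(left[key], right[key], atol=atol)
    else:
        assert left == right


# A tuple serialized to JSON comes back as a list, yet still compares equal.
assert_json_equivalent({"shape": (3, 2)}, json.loads(json.dumps({"shape": [3, 2]})))
```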
-
ANDREY Paul authored
* Add `test_set_state_results` to verify that resetting a module's state enables running the same computation twice. This should enable detecting cases where information is missing from the returned state dict.
* Note that at the moment noise-addition modules are ignored. We could look into a way to access and restore RNG states (when CSPRNG is not used).
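A rough sketch of the check's logic, under assumed `get_state` / `set_state` / `run` method names (the actual test fixtures and assertions differ):

```python
from typing import Any


def check_state_reset_reproduces_results(module: Any, gradients: Any) -> None:
    """Check that restoring a module's state reproduces its outputs.

    Hypothetical sketch: the module is assumed to expose `get_state`,
    `set_state` and a `run(gradients)` method, per the commit message.
    """
    state = module.get_state()
    first = module.run(gradients)
    module.set_state(state)
    second = module.run(gradients)
    assert first == second, "Resetting the state failed to reproduce outputs."
```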
-
ANDREY Paul authored
For some reason, `tf.square(x)` and `tf.pow(x, 2)` can result in distinct values for small-enough values of x. The same is true of `tf.sqrt(x)` and `tf.pow(x, 0.5)`. This commit ensures the `tf.sqrt` and `tf.square` functions are used in these two edge cases for the `vector ** p` syntax. This choice is motivated by the fact that these functions are used in most official tensorflow / keras code, and seem to be closer to their torch counterparts (which are equivalent to `vec ** p`) than `tf.pow` - based on a few local tests.
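An illustrative sketch of the resulting dispatch for the `vector ** p` backend, with a hypothetical function name:

```python
import tensorflow as tf


def tensor_pow(tensor: tf.Tensor, power: float) -> tf.Tensor:
    """Raise a tensor to a given power, special-casing p=2 and p=0.5.

    Hypothetical sketch: use `tf.square` and `tf.sqrt` for the two edge
    cases, which behave closer to their torch counterparts, and fall back
    to `tf.pow` otherwise.
    """
    if power == 2:
        return tf.square(tensor)
    if power == 0.5:
        return tf.sqrt(tensor)
    return tf.pow(tensor, power)
```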
-
- Feb 22, 2023
-
-
ANDREY Paul authored
This removal was supposed to be part of a past commit (cd1d0cb1) but had in fact not been properly finished. With this commit, installing declearn no longer results in the automatic installation of gRPC and websockets, which are properly relegated to being optional, user-triggered dependencies.
-
ANDREY Paul authored
Enable skipping frozen weights when using `Model.get_weights` and `Model.set_weights`. Closes #15. See merge request !29
-
ANDREY Paul authored
This commit squashes the following modifications:
* Add `trainable` argument to `Model.get_weights`:
  - Add a `trainable: bool = False` argument to `Model.get_weights`, enabling access to only the trainable weights of a neural network (or any model that supports using frozen weights).
  - Use the former to fix support for models with frozen weights by the `Optimizer`'s weight-decay and loss-regularization plug-ins.
  - Add unit tests for `get_weights(trainable=True)` to the Torch and Tensorflow Model tests.
* Add `trainable` argument to `Model.set_weights`:
  - Add a `trainable: bool = False` argument to `Model.set_weights`, enabling model weights to be updated with the exclusion of frozen ones.
  - Improve the documentation and exception-raising of `Model.set_weights`. Notably, refactor some code into a private util used by the tensorflow and torch models' backend code.
  - Add some related tests to the tensorflow and torch Model test suites. Note that there is redundancy that could be tackled later, as part of an effort to improve the `Model` test suite template.
* Restrict round-wise weights sharing to trainable weights:
  - Update `FederatedClient`, `FederatedServer` and `TrainingManager` so that only the trainable model weights' values and updates are communicated during training and evaluation rounds.
  - This reduces communication costs when some weights are frozen.
  - This also avoids including non-trainable weights in server-side optimization computations (which should have resulted in zero at any rate, properly leaving these weights unaltered, but may not have done so in the rare case when weight decay or regularization was used on the server side).
* Add an 'MLP-tune' test case for 'TorchModel' and 'TensorflowModel':
  - This case is the same as the base MLP, but with a frozen first layer.
  - This case would fail if `get_weights(trainable=False)` was badly implemented (or used), since the gradients-and-weights comparison in the `Model.apply_updates` test would raise an error.
* Improve unit test for 'Optimizer.compute_updates_from_gradients':
  - Ensure the pipelined plug-ins are called in the proper order and with proper input values. In other words, test the formula.
  - Ensure the model weights are only accessed once, and with the proper restriction to trainable weights.
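For illustration only, a toy sketch of the trainable-weights filtering idea behind the new arguments; this is not declearn's actual `Model` API, and all names below are made up:

```python
from typing import Dict

import numpy as np


class FrozenAwareModel:
    """Toy model illustrating a `trainable` flag on weight access methods."""

    def __init__(self) -> None:
        # Two named weight tensors, one of which is frozen.
        self._weights: Dict[str, np.ndarray] = {
            "frozen.kernel": np.zeros((4, 4)),
            "output.kernel": np.zeros((4, 1)),
        }
        self._trainable = {"output.kernel"}

    def get_weights(self, trainable: bool = False) -> Dict[str, np.ndarray]:
        """Return model weights, optionally restricted to trainable ones."""
        if trainable:
            return {k: v for k, v in self._weights.items() if k in self._trainable}
        return dict(self._weights)

    def set_weights(self, weights: Dict[str, np.ndarray], trainable: bool = False) -> None:
        """Assign weight values, optionally restricted to trainable ones."""
        expected = self._trainable if trainable else set(self._weights)
        if set(weights) != expected:
            raise KeyError("Received an invalid set of weight names.")
        self._weights.update(weights)
```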
-
ANDREY Paul authored
Improve GPU support for TensorFlow and Torch. Closes #11. See merge request !24
-
ANDREY Paul authored
Core changes to the global test suite:
* Add some device-placement verifications to the generic Model test suite. This should be refactored into more unitary tests as part of a distinct effort to revise and improve this test suite.
* For TensorFlow and Torch, parametrize the whole test suite to run on either CPU or GPU, and run it once per available device type.
* Ensure Vector and OptiModule unit tests run on CPU.

Changes to some 'Model' tests to run on GPU:
* `test_compute_batch_gradients_np`: allow for small numerical discrepancies that may result from running the test on GPU.
* `test_apply_updates`: correct the test, which had not been updated since `Model.get_weights` stopped systematically using `NumpyVector` as its return type.

Changes to 'TorchModel' tests to run on GPU:
* Override `test_serialization` because torch serialization relies on pickle. Replace the pickles' comparison with a shallower test (but one less susceptible to fail for unknown reasons) that ensures a reloaded model shares the same structure of modules.
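An illustrative pytest sketch of per-device parametrization, not taken from declearn's actual test suite (fixture and test names are assumptions):

```python
import pytest
import tensorflow as tf

# Run device-dependent tests once per available device type; GPU cases are
# only generated when a GPU is actually visible to TensorFlow.
DEVICES = ["CPU"] + (["GPU"] if tf.config.list_physical_devices("GPU") else [])


@pytest.fixture(params=DEVICES)
def device(request) -> str:
    """Device type on which to run the parametrized tests."""
    return request.param


def test_device_placement(device: str) -> None:
    """Check that a tensor created under the device scope is placed on it."""
    with tf.device(f"/{device}:0"):
        tensor = tf.ones((2, 2))
    assert device in tensor.device.upper()
```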
-