# Release Notes

- [v2.3.0](v2.3.0.md)
- [v2.2.1](v2.2.1.md)
- [v2.2.0](v2.2.0.md)
- [v2.1.1](v2.1.1.md)
- [v2.1.0](v2.1.0.md)
- [v2.0.3](v2.0.3.md)
- [v2.0.2](v2.0.2.md)
- [v2.0.1](v2.0.1.md)
- [v2.0.0](v2.0.0.md)

# declearn v2.0.0

Released: 06/02/2023

This is the first stable and public release of declearn.

As of this release, the package has been tested under Python 3.8, 3.9, 3.10
and 3.11 (the latter currently lacking TensorFlow support).

Test coverage stands at 72%, as evaluated by running the same tests as
programmed in our tox config, using the pytest-cov plug-in. This figure
understates actual coverage, as some tests are run via multiprocessing,
whose code execution is not recorded by the coverage tools.

# declearn v2.0.1

Released: 13/02/2023

This sub-minor version update patches issues identified in version 2.0.0.

Changes:

* [BUG] Fix the labeling of output gradients in `TensorflowModel`. See issue #14.
* [BUG] Warn about limited support for frozen neural network weights. See issue #15.
* [LINT] Update to mypy 1.0 and adjust some type hints and comments in the code.

# declearn v2.0.2

Released: 02/03/2023

This sub-minor version back-ports changes introduced in declearn v2.1.0.

Changes:
* Make network communication dependencies truly optional.
  - Previously, declearn could already be used without `websockets` and
    `grpcio`; however, `pip install declearn` would always install them.
  - This patch properly turns them into extra dependencies, which are
    only installed when explicitly requested by the end-user.
* Complete a broken docstring (`declearn.metrics.BinaryRocAUC`).
* Update git branching strategy (no impact on the package's use).
Note: the PyPI upload is labeled as "v2.0.2.2" due to the initial upload missing a bump to the in-code version number.

# declearn v2.0.3

Released: 04/08/2023

This is a sub-minor release that patches a couple of operations whose
mathematical implementation was wrong. It is therefore **strongly**
recommended to update any existing installation to a patched version, which
may freely be `~=2.0.3`, `~=2.1.1`, `~=2.2.1` or (to-be-released) `~=2.3`.

In addition, a couple of backend fixes were shipped, notably restoring
compatibility with the recently-released scikit-learn version 1.3.

## Fixed math operations

- The `Vector.__rtruediv__` method was misdefined, so that computations of
  the form `non_vector_object / vector_object` would yield wrong values.
  This method was seemingly not used anywhere in declearn so far, and
  hopefully was not used by any end-user either. See the sketch after this
  list.
- The `L2Clipping` optimodule plug-in was misdefined, making it scale down
  gradients whose L2-norm was below the cutoff threshold while leaving those
  with a norm above it unchanged, i.e. the opposite of the intended clipping
  behavior.
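
For illustration, here is a minimal sketch of the now-correct division
behavior; the `NumpyVector` import path and `coefs` attribute are assumed to
match the declearn 2.x public API:

```python
import numpy as np

# Import path assumed from the declearn 2.x layout.
from declearn.model.sklearn import NumpyVector

vector = NumpyVector({"weights": np.array([1.0, 2.0, 4.0])})
# Right-hand true division by a non-vector object: with the patched
# `Vector.__rtruediv__`, this computes 8.0 / x element-wise.
result = 8.0 / vector
print(result.coefs["weights"])  # expected: [8. 4. 2.]
```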

## Other backend fixes

- Fix the `build_keras_loss` utility for TensorFlow. This is a backend fix
  that addresses some newly-found issues with the way losses may be specified
  as part of a `TensorflowModel`.
- With the release of scikit-learn 1.3, the interfaced `SGDClassifier` and
  `SGDRegressor` models gained support for dtypes other than float64. In
  declearn 2.3 and above, this will be handled by letting end-users specify
  which dtype they wish to use. For previous versions, the backported patch
  merely ensures that input data and weights are converted to the default
  float64 dtype.

# declearn v2.1.0

Released: 02/03/2023

### New features

* Add proper GPU support and device-placement policy utils (see the sketch
  after this list).
- Add device-placement policy utils: `declearn.utils.DevicePolicy`,
`declearn.utils.get_policy` and `declearn.utils.set_policy`.
- Implement device-placement support in `TorchModel`, `TorchVector`,
`TensorflowModel` and `TensorflowVector`, according to shared API
principles (some of which are abstracted into `Model`).
- Add tests for these features, and automatic running of unit tests
on both CPU and GPU when possible (otherwise, run on CPU only).
* Add framework-specific `TensorflowOptiModule` and `TorchOptiModule`.
- Enable wrapping framework-specific optimizer objects into a plug-in
that may be used within a declearn `Optimizer` and jointly with any
combination of framework-agnostic plug-ins.
  - Add functional tests verifying that our implementations of the `Adam`,
    `Adagrad` and `RMSprop` optimizers are equivalent to those of the
    TensorFlow and Torch frameworks.
* Add `declearn.metrics.RSquared` metric to compute a regression's R^2.
* Fix handling of frozen weights in `TensorflowModel` and `TorchModel`.
- Add `trainable: bool=False` parameter to `Model.get_weights` and
`Model.set_weights` to enable excluding frozen weights from I/O.
  - Use `Model.get_weights(trainable=True)` in `Optimizer` methods,
    making it possible to use loss-regularization `Regularizer` plug-ins
    and weight decay with models that have some frozen weights.
  - Use `Model.set_weights(trainable=True)` and its counterpart to
    remove some unnecessary communications and server-side aggregator
    and optimizer computations.
* Fix handling of `tf.IndexedSlices` structures in `TensorflowVector`.
  - Avoid the (mostly silent, depending on the TensorFlow version) conversion
    of `tf.IndexedSlices` row-sparse gradients to a dense tensor whenever
    possible.
  - Warn about that conversion when it happens (unless the context is
    known to require it, e.g. as part of noise-addition optimodules).
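
As a minimal sketch of the device-placement utils listed above (the parameter
names are assumptions inferred from the `DevicePolicy` fields, not verified
signatures):

```python
from declearn.utils import get_policy, set_policy

# Request that supporting frameworks place data and computations on the
# first available GPU; use `gpu=False` for CPU-only execution.
set_policy(gpu=True, idx=0)

policy = get_policy()  # assumed to return DevicePolicy(gpu=True, idx=0)
print(policy)
```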
### Other changes
* Change `Scaffold.collect_aux_var` behavior on unused optimodules.
  - Previously, the method would raise an error if `run` had not been called.
  - Now, a warning is emitted and a scalar value is returned, which the
    server-side plug-in processes so as to ignore that client.
* Add SAN capabilities to `declearn.test_utils.generate_ssl_certificates`
  (see the sketch after this list).
- Subject Alternative Names (SAN) enable having an SSL certificate cover
the various IPs and/or domain names of a server.
- The declearn interface requires OpenSSL >=3.0 to use the new parameters.
* Add a functional convergence test on a toy regression problem.
- Generate a toy regression problem, that requires regularization.
- Run a scikit-learn baseline and a declearn centralized-case one.
- Run a declearn federated learning pipeline, for all frameworks.
  - Verify that in all cases, the model converges to an R^2 >= 0.999.
* Add some assertion utils under `declearn.test_utils` to refactor or enhance
some existing and newly-introduced unit and functional tests.
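
As a hedged sketch of the SAN-enabled certificate generation (the parameter
names below are assumptions based on the feature description, not verified
signatures; OpenSSL >= 3.0 is required):

```python
from declearn.test_utils import generate_ssl_certificates

# Generate a self-signed CA and a server certificate whose SAN entries
# cover both an IP address and a domain name (names are illustrative).
generate_ssl_certificates(
    folder="ssl",
    c_name="my-server.example.com",
    alt_ips=["192.168.0.10"],
    alt_names=["my-server.local"],
)
```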
### Other fixes
* Fix some newly-identified backend-based issues in `Vector`:
- Enable `NumpyVector + <Tensor>Vector` (previously, only `<Tensor>Vector +
NumpyVector` would work), for all base operations (`+ - / *`).
- Fix scalar tensors' unpacking from serialized `TensorflowVector` and
`TorchVector`.
- Fix `NumpyVector.sum()` resulting in scalars rather than 0-d numpy arrays.
- Improve the documentation of `Vector` and its subclasses.
* Fix optimizer plug-ins' framework-equivalence test.
* Fix `pip install declearn` installing network communication third-party
dependencies in spite of their being documented (and supported) as optional.
- This fix was backported to release `declearn-2.0.2`.
* Fix the labeling of output gradients in `TensorflowModel`.
- This fix was backported to release `declearn-2.0.1`.

# declearn v2.1.1

Released: 04/08/2023

This is a sub-minor release that patches a couple of operations whose
mathematical implementation was wrong. It is therefore **strongly**
recommended to update any existing installation to a patched version, which
may freely be `~=2.0.3`, `~=2.1.1`, `~=2.2.1` or (to-be-released) `~=2.3`.

In addition, a couple of backend fixes were shipped, notably restoring
compatibility with the recently-released scikit-learn version 1.3.

## Fixed math operations

- The `Vector.__rtruediv__` method was misdefined, so that computations of
  the form `non_vector_object / vector_object` would yield wrong values.
  This method was seemingly not used anywhere in declearn so far, and
  hopefully was not used by any end-user either.
- The `L2Clipping` optimodule plug-in was misdefined, making it scale down
  gradients whose L2-norm was below the cutoff threshold while leaving those
  with a norm above it unchanged, i.e. the opposite of the intended clipping
  behavior.

## Other backend fixes

- Fix the `build_keras_loss` utility for TensorFlow. This is a backend fix
  that addresses some newly-found issues with the way losses may be specified
  as part of a `TensorflowModel`.
- With the release of scikit-learn 1.3, the interfaced `SGDClassifier` and
  `SGDRegressor` models gained support for dtypes other than float64. In
  declearn 2.3 and above, this will be handled by letting end-users specify
  which dtype they wish to use. For previous versions, the backported patch
  merely ensures that input data and weights are converted to the default
  float64 dtype.

# declearn v2.2.0

Released: 11/05/2023
## Release highlights
### Declearn Quickrun Mode & Dataset-splitting utils
The two most-visible additions of v2.2 are the `declearn-quickrun` and
`declearn-split` entry-point scripts, that are installed as CLI tools together
with the package when running `pip install declearn` (or installing from
source).
`declearn-quickrun` introduces an alternative way to use declearn so as to run
a simulated Federated Learning experiment on a single computer, using localhost
communications, and any model, dataset and optimization / training / evaluation
configuration.
`declearn-quickrun` relies on:
- a python code file to specify the model;
- a standard (but partly modular) data storage structure;
- a TOML config file to specify everything else.
It is thought of as:
- a simple entry-point to newcomers, demonstrating what declearn can do with
zero to minimal knowledge of the actual Python API;
- a nice way to run experiments for research purposes, with minimal setup
(and the possibility to maintain multiple experiment configurations in
parallel via named and/or versioned TOML config files) and standardized
outputs (including model weights, full process logs and evaluation metrics).
`declearn-split` is a CLI tool that wraps up some otherwise-public data utils
that enable splitting and preparing a supervised learning dataset for its use
in a Federated Learning experiment. It is thought of as a helper to prepare
data for its use with `declearn-quickrun`.
### Support for Jax / Haiku
Another visible addition of declearn v2.2 is the support for models implemented
in [Jax](https://github.com/google/jax), specifically _via_ the neural network
library [Haiku](https://github.com/deepmind/dm-haiku).
This takes the shape of the new (optional) `declearn.model.haiku` submodule,
which provides dedicated `JaxNumpyVector` and `HaikuModel` classes (subclassing
the base `Vector` and `Model` ones). Existing unit and integration tests have
been extended to cover this new framework (when available), which is therefore
usable on par with Scikit-Learn, TensorFlow and Torch, up to a few framework
specificities in the setup of the model, notably when it is desired to freeze
some layers (which has to happen _after_ instantiating and initializing the
model, contrary to what can be done in other neural network frameworks).
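
By way of illustration, wrapping a Haiku-defined network is expected to look
along the following lines; the exact `HaikuModel` constructor arguments shown
here (a forward function and a loss function) are assumptions rather than a
verified signature:

```python
import haiku as hk
import jax.numpy as jnp

from declearn.model.haiku import HaikuModel


def forward_fn(inputs: jnp.ndarray) -> jnp.ndarray:
    """Define a small MLP in idiomatic Haiku style."""
    return hk.nets.MLP([32, 16, 1])(inputs)


def loss_fn(y_pred: jnp.ndarray, y_true: jnp.ndarray) -> jnp.ndarray:
    """Compute sample-wise squared-error loss values."""
    return jnp.square(y_pred.squeeze() - y_true)


# Constructor arguments are assumed; see the API reference for details.
model = HaikuModel(forward_fn, loss=loss_fn)
```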
### Improved Documentation and Examples
Finally, this new version comes with an effort to improve the usability of
the package, notably via the readability of its documentation and examples.

The documentation has been heavily revised (a change that was already
partially back-ported to previous version releases upon making the
documentation [website](https://magnet.gitlabpages.inria.fr/declearn/docs/)
public).
The legacy Heart UCI example has been improved to enable real-life execution
(i.e. using multiple agents / computers communicating over the internet). More
importantly, the classic MNIST dataset has been used to implement simpler and
more-diverse introductory examples that demonstrate the various flavors of
declearn one can look for (including the new Quickrun mode).
The `declearn.dataset.examples` submodule has been introduced, so that example
data loaders can be added (and maintained / tested) as part of the package. For
now these utils only cover the MNIST and Heart UCI datasets, but more reference
datasets are expected to be added in the future, enabling end-users to set up
their own experiments and toy around with the package's functionality in no
time.
## List of changes
### New features
* Add `declearn.model.haiku` submodule. (!32)
- Implement `Vector` and `Model` subclasses to interface Haiku/Jax-backed
models.
- The associated dependencies (jax and haiku) may be installed using
`pip install declearn[haiku]` or `pip install declearn[all]`, and
remain optional.
- Note that both Haiku and Jax are early-development products: as such,
the supported versions are hard-coded for now, due to the lack of API
stability.
* Add `declearn-quickrun` entry point. (!41)
- Implement `declearn-quickrun` as a CLI to run simulated FL experiments.
- Write some dedicated TOML parsers to set up the entire process from a
single configuration file (building on existing `declearn.main.config`
tools), and build on the file format output by `declearn-split` (see
below).
  - Revise `TomlConfig` and make `run_as_processes` public (see below).
* Add `declearn-split` entry point. (!41)
- Add some dataset utility functions (see below).
- Implement `declearn-split` to interface data-splitting utils as a CLI.
* Add `declearn.dataset.examples` submodule. (!41)
- Add MNIST dataset downloading utils.
- Add Heart UCI dataset downloading utils.
* Add `declearn.dataset.utils` submodule. (!41)
- Add `split_multi_classif_dataset` for multinomial classification data.
- Refactor some `declearn.dataset.InMemoryDataset` code into functional
utils: `save_data_array` and `load_data_array`.
- Expose sparse matrices' to-/from-file parsing utils.
* Add the `run_as_processes` utility.
- Revise the util to capture exceptions and outputs. (!37)
- Make the util public as part of the declearn quickrun addition. (!41)
* Add `data_type` and `features_shape` to `DataSpecs`. (!36)
- These fields enable specifying input features' shape and dtype.
- The `input_shape` and `nb_features` fields have in turn been deprecated
(see section below).
* Add utils to access types mapping of optimization plug-ins. (!44)
- Add `declearn.aggregator.list_aggregators`.
- Add `declearn.optimizer.list_optim_modules`.
- Add `declearn.optimizer.list_optim_regularizers`.
- All three of these utils are trivial, but are expected to be easier
to find out about and use by end-users than their more generic backend
counterpart `declearn.utils.access_types_mapping(group="...")`.
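
These utils may be used as follows (a minimal sketch; the registration names
shown in comments are illustrative):

```python
from declearn.aggregator import list_aggregators
from declearn.optimizer import list_optim_modules, list_optim_regularizers

# Each util returns a mapping from registration names to classes.
aggregators = list_aggregators()          # e.g. an "averaging" entry
modules = list_optim_modules()            # e.g. an "adam" entry
regularizers = list_optim_regularizers()  # e.g. a "lasso" entry

print(sorted(modules))
```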
### Revisions
* Refactor `TorchModel` backend code to clip gradients. (!42)
- Optimize functorch code when possible (essentially, for Torch 1.13).
- Pave the way towards a future transition to Torch 2.0.
* Revise `TomlConfig` parameters and backend code.
  - Add options to target a subsection of a TOML file. (!41)
  - Improve the default parser. (!44)
* Revise type annotations of `Model` and `Vector`. (!44)
- Use `typing.Generic` and `typing.TypeVar` to improve the annotations
about wrapped-data / used-vectors coherence in these classes, and in
`Optimizer` and associated plug-in classes.
### Deprecations
* Deprecate `declearn.dataset.InMemoryDataset.(load|save)_data_array`. (!41)
  - Replaced with `declearn.dataset.utils.(load|save)_data_array` (see the
    sketch after this list).
  - The deprecated methods now call their replacements, emitting a warning.
  - They will be removed in v2.4 and/or v3.0.
* Deprecate `declearn.data_info.InputShapeField` and `NbFeaturesField`. (!36)
  - Replaced with `declearn.data_info.FeaturesShapeField`.
  - The deprecated fields may still be used, but emit a warning.
  - They will be removed in v2.4 and/or v3.0.
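
A minimal sketch of the replacement utils, assuming their signatures mirror
the deprecated methods (a basename to save under, and a returned path that
may be used for reloading):

```python
import numpy as np

from declearn.dataset.utils import load_data_array, save_data_array

data = np.random.normal(size=(100, 8))
# `save_data_array` is assumed to return the path to the created file,
# which may then be fed back to `load_data_array`.
path = save_data_array("outputs/features", data)
reloaded = load_data_array(path)
```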
### Documentation & Examples
* Restructure the documentation and render it as a website. (!40)
- Restructure the overly-long readme file into a set of guides.
- Set up the automatic rendering of the API reference from the code.
- Publish the docs as a versioned website:
[https://magnet.gitlabpages.inria.fr/declearn/docs](https://magnet.gitlabpages.inria.fr/declearn/docs)
- Backport these changes so that the website covers previous releases.
* Provide a Quickstart example using `declearn-quickrun`.
  - Replace the Quickstart guide with an expanded one, providing a fully-
    functioning example that uses the MNIST dataset (see below).
  - Use this guide to showcase the various use-cases of declearn (simulated
    FL or real-life deployment / TOML config or python scripts).
* Modularize the Heart UCI example for its real-life deployment. (!34)
* Implement the MNIST example, in three flavors. (!41)
- Make MNIST the default demonstration example for the `declearn-quickrun`
and `declearn-split` CLI tools.
- Write a MNIST example using the Quickrun mode with a customizable config.
- Write a MNIST example as a set of python files, enabling real-life use.
### Unit and integration tests
* Compute code coverage as part of CI/CD pipelines. (!38)
* Replace `declearn.communication` unit tests. (!39)
* Modularize `test_regression` integration tests. (!39)
* Add the optional '--cpu-only' flag for unit tests. (!39)
* Add unit tests for `declearn.dataset.examples`. (!41)
* Add unit tests for `declearn.dataset.utils`. (!41)
* Add unit tests for `declearn.utils.TomlConfig`. (!44)
* Add unit tests for `declearn.aggregator.Aggregator` classes. (!44)
* Extend unit tests for type-registration utils. (!44)

# declearn v2.2.1

Released: 04/08/2023

This is a sub-minor release that patches a couple of operations whose
mathematical implementation was wrong. It is therefore **strongly**
recommended to update any existing installation to a patched version, which
may freely be `~=2.0.3`, `~=2.1.1`, `~=2.2.1` or (to-be-released) `~=2.3`.

In addition, a couple of utilities were patched, and `SklearnSGDModel` had its
backend adjusted following the release of scikit-learn 1.3.

## Fixed math operations

- The `Vector.__rtruediv__` method was misdefined, so that computations of
  the form `non_vector_object / vector_object` would yield wrong values.
  This method was seemingly not used anywhere in declearn so far, and
  hopefully was not used by any end-user either.
- The `L2Clipping` optimodule plug-in was misdefined, making it scale down
  gradients whose L2-norm was below the cutoff threshold while leaving those
  with a norm above it unchanged, i.e. the opposite of the intended clipping
  behavior.

## Other backend fixes

- Fix the `build_keras_loss` utility for TensorFlow. This is a backend fix
  that addresses some newly-found issues with the way losses may be specified
  as part of a `TensorflowModel`.
- Fix the `declearn.dataset.examples.load_heart_uci` utility following changes
  on the source website.
- With the release of scikit-learn 1.3, the interfaced `SGDClassifier` and
  `SGDRegressor` models gained support for dtypes other than float64. In
  declearn 2.3 and above, this will be handled by letting end-users specify
  which dtype they wish to use. For previous versions, the backported patch
  merely ensures that input data and weights are converted to the default
  float64 dtype.

# declearn v2.3.0

Released: 30/08/2023
## Release highlights

### New Dataset subclasses to interface TensorFlow and Torch dataset APIs

The most visible additions of v2.3 are the new `TensorflowDataset` and
`TorchDataset` classes, which respectively enable wrapping
`tensorflow.data.Dataset` and `torch.utils.data.Dataset` objects into declearn
`Dataset` instances that can be used for training and evaluating models in a
federative way.
Both of these classes are implemented under manual-import submodules of
`declearn.dataset`: `declearn.dataset.tensorflow` and `declearn.dataset.torch`.
While applications that rely on memory-fitting tabular data can still use the
good old `InMemoryDataset`, these new interfaces are designed to enable users
to re-use existing code for interfacing any kind of data, including images or
text (which may require framework-provided pre-processing), that may be loaded
on-demand from a database or distributed files, or even generated procedurally.
We have put effort into keeping the declearn-side code minimal and leaving the
door open for as many framework-provided features as possible, but we may have
missed some things; if you run into issues or limits when using these new
classes, feel free to drop us a message, using either the historical
Inria-GitLab repository or the newly-created mirroring GitHub one!
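
As an example, a pre-existing Torch dataset may be wrapped along these lines
(a sketch, assuming `TorchDataset` takes the wrapped dataset as its first
argument):

```python
import torch

from declearn.dataset.torch import TorchDataset


class ToyDataset(torch.utils.data.Dataset):
    """A minimal map-style Torch dataset holding random data."""

    def __init__(self, n_samples: int = 128) -> None:
        self.inputs = torch.randn(n_samples, 8)
        self.labels = torch.randint(0, 2, (n_samples,)).float()

    def __len__(self) -> int:
        return len(self.inputs)

    def __getitem__(self, idx: int):
        return self.inputs[idx], self.labels[idx]


# Wrap the Torch dataset for use in declearn-based federated learning.
dataset = TorchDataset(ToyDataset())
```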

### Support for Torch 2.0

Another less-visible but possibly high-impact update is the addition of support
for Torch 2.0. It took us a bit of time to adjust the backend code for this new
release of Torch, as all of the DP-oriented functorch-based code was made
incompatible; but we are now able to provide end-users with compatibility for
both the newest 2.0 version _and_ the previously-supported 1.10-1.13 versions.
As a cherry on top, it should even be possible to have the server and clients
use different Torch major versions!
The main interest of this new support (apart from keeping pace with the
framework and its backend improvements) is to enable end-users to use the new
`torch.compile` feature to optimize their model's runtime. There is however a
major caveat: at the moment, options passed to `torch.compile` are lost, which
means that they cannot yet be properly propagated to clients, making this new
feature usable only with default arguments. However, the Torch team is working
on improving that (see for example
[this issue](https://github.com/pytorch/pytorch/issues/101107)), and we will
hopefully be able to forward model-compilation instructions as part of declearn
in the near future!
In the meanwhile, if you encounter any issues with Torch support, notably as to
2.0-introduced features, please let us know, as we are eager to build on user
feedback to improve the package's backend as well as its APIs.
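
Concretely, with Torch 2.0 installed, a compiled module may be wrapped as
usual; note that the way the loss is passed below is a sketch in line with
declearn's Torch interface, not a guaranteed signature:

```python
import torch

from declearn.model.torch import TorchModel

network = torch.nn.Sequential(
    torch.nn.Linear(28 * 28, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
# Compile with default arguments only: per the caveat above, options
# passed to `torch.compile` would currently be lost on the clients' side.
network = torch.compile(network)
model = TorchModel(network, loss=torch.nn.CrossEntropyLoss())
```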

### Numerous test-driven backend fixes

Finally, a lot of effort has been put into making declearn more robust: adding
more unit and integration tests, improving our CI/CD setup to cover our code
more extensively (notably by systematically testing it on both CPU and GPU) and
efficiently, and adding a custom script to launch groups of tests in a verbose
and compact way. As a result, we conducted a number of test-driven backend
patches.

Some bugs were pretty awful and well-hidden (we recently backported fixes to a
couple of hopefully-unused operations' formulas to all previous versions via
sub-minor version releases); some were visible but harmful (some metrics'
computations were just plain wrong under certain input-shape conditions, which
showed, as output values were uncanny, but made analyzing and using results a
burden); some were minor and/or edge-case but still worth fixing.

We hope that this effort enabled catching most, if not all, of the currently
lurking bugs. We will nonetheless keep improving unit test coverage in the
near future, and are adopting a stricter policy of testing new features as
they are being implemented.
## List of changes
### New features
* Add `declearn.dataset.torch.TorchDataset` (!47 and !53)
- Enable wrapping up `torch.utils.data.Dataset` instances.
- Enable setting up a custom collate function for batching.
- Expose the `collate_with_padding` util for padded-batching.
* Add `declearn.dataset.tensorflow.TensorflowDataset` (!53)
- Enable wrapping up `tensorflow.data.Dataset` instances.
- Enable batching inputs into padded or ragged tensors.
* Add support for Torch 2.0. (!49)
- Add backend compatibility with Torch 2.0 while preserving existing support
for versions 1.10 to 1.13.
  - Enable the use of `torch.compile` on the clients' side based on its use on
    the server side, with some caveats that are due to Torch and are on its
    roadmap.
- Add "torch1" and "torch2" extra-dependency specifiers to ease installation
of compatible versions of packages from the Torch ecosystem.
- Have both versions be tested as part of our CI/CD.
* Add `dtype` argument to `SklearnSGDModel`. (!50)
- Add a `dtype` parameter to `SklearnSGDModel` and use it to prevent dtype
issues related to the introduction of non-float64 support for SGD models
as part of Scikit-learn 1.3.0.
  - A patch was back-ported to previous declearn releases to force conversion
    to float64 as part of backend computations.
* Add `declearn.optimizer.modules.L2GlobalClipping`. (!56)
  - This new OptiModule enables clipping gradients based on the L2-norm of all
    of their concatenated values, rather than their weight-wise L2-norm (as is
    done by the pre-existing `L2Clipping` module). See the sketch after this
    list.
* Add a `replacement: bool = False` argument to `Dataset.generate_batches`.
  (!47 and !53)
  - Enable drawing fixed-size batches of samples with replacement.
  - Note that this is not yet available as part of the FL process, as sending
    backward-incompatible keyword arguments would break compatibility between
    any v2.3+ server and v2.0-2.2 clients.
  - This new option will therefore be deployed no sooner than in declearn 3.0
    (where additional mechanisms are bound to be designed to anticipate this
    kind of change, so that older-version clients can dismiss unsupported
    arguments and/or be prohibited from joining a process that requires their
    use).
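
As referenced above, here is a hedged sketch of plugging the new module into
a declearn `Optimizer` (the `max_norm` parameter name is an assumption):

```python
from declearn.optimizer import Optimizer
from declearn.optimizer.modules import L2GlobalClipping

# Clip gradients based on their global L2-norm prior to further updates;
# `max_norm` is an assumed parameter name, not a verified signature.
optim = Optimizer(
    lrate=0.01,
    modules=[L2GlobalClipping(max_norm=5.0)],
)
```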
### Revisions
* Fix `build_keras_loss`. (!50)
  - Fix the `build_keras_loss` util, which caused an incompatibility with the
    latest TensorFlow versions in some cases.
* Fix `Vector.__rtruediv__` and `L2Clipping` (!54)
- Fix `Vector.__rtruediv__` formula, for `scalar / vector` operations.
- Fix `L2Clipping` L2-norm-based clipping.
- Both these fixes were backported to previous releases, as sub-minor version
releases.
* Fix expanded-dim inputs handling in MAE, MSE and R2 metrics. (!57)
  - Fix those metrics' computation when true and predicted labels have the
    same shape up to one expanded dimension (typically for single-target
    regression or single-label binary classification tasks using a neural
    network without further flattening beyond the output layer).
* Miscellaneous minor fixes. (!57)
  - Fix `DataTypeField.is_valid`.
  - Fix `InMemoryDataset` single-column target loading from csv.
  - Fix `InMemoryDataset.data_type`.
  - Fix `EarlyStopping.update` with repeated equal inputs.
### Deprecations
* Remove `declearn.dataset.Dataset.load_from_json` and `save_to_json` from
  the API-defining `Dataset` abstract class. (!47)
  - These methods were not used anywhere in declearn, and are unlikely to be
    used in code that does not specifically use `InMemoryDataset` (the methods
    of which are kept).
* Deprecate `Vector` subclasses' keyword arguments to the `sum` method. (!55)
  - `Vector.sum` does not accept kwargs, but subclasses do, for no good
    reason, as these are never used in declearn.
### Documentation & Examples
* Clean up the non-quickrun MNIST example. (!51)
* Update documentation on the CI/CD and the way(s) to run tests.
### Unit and integration tests
* Revise toy-regression integration tests for efficiency and coverage. (!57)
* Add proper unit tests for `Vector` and its subclasses. (!55)
* Add tests for the `declearn.data_info` submodule. (!57)
* Add tests for `declearn.dataset.InMemoryDataset`. (!47 and !57)
* Add tests for `declearn.main.utils.aggregate_clients_data_info`. (!57)
* Add tests for `declearn.main.utils.EarlyStopping`. (!57)
* Add tests for large (chunked) messages' exchange over network. (!57)
* Add tests for expanded-dim inputs in MAE, MSE and R2 metrics. (!57)
* Add tests for some of the `declearn.quickrun` backend utils. (!57)