Improve GPU support for TensorFlow and Torch
TensorFlow and Torch can use GPUs, when available, to accelerate computations. At the moment, no GPU-dedicated tools are implemented as part of declearn, which means it is up to the clients to perform any operations required to make GPUs detectable and usable by the computation framework backing the federated model.
It could therefore be useful (and user-friendly) to implement limited framework-specific and/or generic tools that handle GPU use, or at the very least to document GPU support and usage in relation to how the third-party frameworks operate.
Notional task list (each task's utility and complexity should be evaluated before undertaking it):

- Add code to move TorchModel to a given device (CPU or GPU).
- Add code to move TensorflowModel to a given device (if needed).
- Add generic configuration tools to detect, hide, or select the GPU devices to use.
- Abstract device-management-related changes into the Model API.
- Fully test the implementation and improve the docs prior to merging.
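As a starting point for the TorchModel task, here is a minimal sketch of what a device-placement helper could look like. The function name and the fallback policy are assumptions, not existing declearn code; it merely wraps Torch's standard `Module.to` mechanism with a guard for unavailable CUDA devices:

```python
import torch


def move_module_to_device(
    module: torch.nn.Module,
    device: str = "cpu",
) -> torch.nn.Module:
    """Move a torch Module (parameters and buffers) to a target device.

    Hypothetical helper: falls back to CPU when the requested CUDA
    device is not available, rather than raising an error.
    """
    if device.startswith("cuda") and not torch.cuda.is_available():
        device = "cpu"
    return module.to(torch.device(device))
```

Note that inputs and targets would also need to be moved to the same device at computation time, which is why wrapping this logic inside TorchModel (rather than leaving it to clients) would be convenient.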
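For the TensorflowModel task, explicit moves may indeed be unneeded, since TensorFlow places operations on a GPU automatically when one is visible; what may be worth exposing is device selection. A hedged sketch (the helper name is an assumption) using TensorFlow's public device-listing API:

```python
from typing import Optional

import tensorflow as tf


def select_tf_device(gpu_index: Optional[int] = None) -> str:
    """Return a tf device string, preferring the requested GPU if present.

    Hypothetical helper: returns '/CPU:0' when no GPU is available
    or none was requested.
    """
    gpus = tf.config.list_physical_devices("GPU")
    if gpu_index is not None and len(gpus) > gpu_index:
        return f"/GPU:{gpu_index}"
    return "/CPU:0"
```

The returned string could then be used as a `with tf.device(...)` context around the model's computations, which is how TensorFlow scopes manual placement.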
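As for generic (framework-agnostic) configuration tools, one common approach both Torch and TensorFlow honour is the `CUDA_VISIBLE_DEVICES` environment variable, which must be set before either framework initializes CUDA. A minimal sketch, with hypothetical function names:

```python
import os


def hide_gpus() -> None:
    """Hide all CUDA GPUs from frameworks loaded after this call.

    Hypothetical helper: must run before Torch or TensorFlow
    initializes CUDA to take effect.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = ""


def expose_gpu(index: int) -> None:
    """Expose only the CUDA GPU with the given index."""
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
```

Such helpers would only cover CUDA devices; abstracting them behind the Model API (per the task list above) would be needed for other backends.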