This MR adds proper support for GPU acceleration of tensor computations.
It extends the current APIs so that the kind of device (CPU or GPU) backing computations can be explicitly selected when using a compatible framework, such as TensorFlow or Torch (or Jax/Haiku in the future).
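As a minimal sketch of what explicit device selection could look like, assuming a hypothetical `DevicePolicy` helper (this is an illustration, not the actual API introduced by this MR):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DevicePolicy:
    """Hypothetical policy describing the device backing computations."""

    gpu: bool = False          # whether to place tensors on a GPU
    idx: Optional[int] = None  # optional GPU index (None = framework default)

    def torch_device(self) -> str:
        """Return a torch-style device string (e.g. 'cpu', 'cuda:0')."""
        if not self.gpu:
            return "cpu"
        return f"cuda:{self.idx}" if self.idx is not None else "cuda"

    def tensorflow_device(self) -> str:
        """Return a TensorFlow-style device string (e.g. '/CPU:0', '/GPU:0')."""
        if not self.gpu:
            return "/CPU:0"
        return f"/GPU:{self.idx or 0}"
```

A single policy object like this could be translated into each framework's device-string convention, e.g. `DevicePolicy(gpu=True, idx=1).torch_device()` yields `"cuda:1"`.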
At the moment, this MR will:

- Extend the `Model` API to select the kind of device backing computations.
- Update `TorchModel` and `TensorflowModel` to properly manage the kind of device being used.
- Extend the `Vector` API and/or revise the backend of `TorchVector` and `TensorflowVector` to ensure computations happening within optimizer plug-ins, aggregators, etc. preserve proper device placement.

Tasklist:

- [ ] Update `TorchModel` and `TorchVector`.
- [ ] Ensure a `TorchModel` placed on GPU remains there, and all computations are placed on the GPU.
- [ ] Update `TensorflowModel` and `TensorflowVector`.
- [ ] Warn users of `SklearnSGDModel` about the lack of GPU support.

Closes #11 - See that issue for notes on how the frameworks work and implementation choices.