[BUG] `TensorflowModel` weights and gradients have distinct labels.
As discovered by @nbigaud while implementing functional tests, `TensorflowModel.compute_batch_gradients` returns integer-labeled gradients, whereas `TensorflowModel.get_weights` returns variable-name-labeled weights.

This discrepancy does not harm core optimization features, because `set_weights` is coherent with `get_weights` while `apply_updates` is coherent with `compute_batch_gradients`, which is why it has passed under the radar of our unit and functional tests until now. However, it breaks the compatibility of `Regularizer` plug-ins with `TensorflowModel`, as mis-aligned gradients-based and weights-based vectors cannot be properly combined.
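
For illustration only (this is a minimal standalone sketch, not the actual declearn code), the following shows how the two labeling schemes diverge and why a `Regularizer`-style combination of the resulting dicts fails:

```python
import tensorflow as tf

# A toy model with two named trainable variables (kernel and bias).
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.build((None, 3))

# Weights labeled by variable name (get_weights-like behaviour).
weights = {var.name: var.numpy() for var in model.trainable_weights}

# Gradients labeled by integer index (current compute_batch_gradients-like behaviour).
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(model(tf.ones((4, 3))))
grads = tape.gradient(loss, model.trainable_weights)
gradients = {idx: grad.numpy() for idx, grad in enumerate(grads)}

# A regularizer needs to align both sets of coefficients, but no labels match.
print(set(weights) & set(gradients))  # -> set()
```
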
This issue will be quick to fix, by updating `TensorflowModel.compute_batch_gradients` to use the variables' names to label the returned tensors.
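
A possible shape for that fix, sketched here with plain TensorFlow primitives rather than the actual `TensorflowModel` internals (the function name and signature are illustrative), is to key each gradient by its variable's name:

```python
import tensorflow as tf

def compute_batch_gradients_sketch(model, inputs, targets, loss_fn):
    """Illustrative only: label per-variable gradients by variable name."""
    with tf.GradientTape() as tape:
        loss = loss_fn(targets, model(inputs, training=True))
    grads = tape.gradient(loss, model.trainable_weights)
    # Key each gradient by its variable's name, mirroring get_weights-style
    # labels so that weights and gradients can be aligned downstream.
    return {var.name: grad for var, grad in zip(model.trainable_weights, grads)}
```
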
`TensorflowModel.apply_updates` may also be updated to run some verifications on the inputs' names.
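
Such a verification could, for instance, look like the following (again an illustrative sketch rather than the actual `apply_updates` code):

```python
import tensorflow as tf

def apply_updates_sketch(model, updates):
    """Illustrative only: verify update labels before applying them."""
    expected = {var.name for var in model.trainable_weights}
    if set(updates) != expected:
        raise KeyError(
            "Update labels do not match the model's variable names: "
            f"got {set(updates)}, expected {expected}."
        )
    for var in model.trainable_weights:
        var.assign_add(updates[var.name])
```
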
Finally, the `Model` unit test suite should be extended to include a verification that gradients and weights share the same specifications, so as to prevent this bug from re-occurring silently in the future.
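
As for that test, a minimal sketch of the intended check, written here against plain TensorFlow rather than the actual `Model` API, could be:

```python
import tensorflow as tf

def test_gradients_and_weights_share_labels():
    """Gradient labels should match weight labels (illustrative sketch)."""
    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    model.build((None, 3))
    weights = {var.name: var.numpy() for var in model.trainable_weights}
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(model(tf.ones((4, 3))))
    grads = tape.gradient(loss, model.trainable_weights)
    gradients = {
        var.name: grad for var, grad in zip(model.trainable_weights, grads)
    }
    assert set(gradients) == set(weights)
```
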