torch.Tensor
Does it make sense to write `Op @ x` where `Op` is a lazy linear operator and `x` a `torch.Tensor`?
Let me explain why I'm asking. I wrote an FWHT with PyTorch, SciPy and lazylinop backends.
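For context, here is a minimal sketch of an unnormalized FWHT on a vector of length 2**k in pure PyTorch (illustrative only, not the actual backend code):

```python
import torch

def fwht(x: torch.Tensor) -> torch.Tensor:
    # Unnormalized fast Walsh-Hadamard transform of a vector of length 2**k
    # (illustrative sketch, not the real backend implementation).
    n = x.numel()
    assert n & (n - 1) == 0, "length must be a power of two"
    y = x.clone()
    h = 1
    while h < n:
        # Butterfly step: combine each consecutive pair of length-h blocks.
        y = y.reshape(n // (2 * h), 2, h)
        y = torch.stack((y[:, 0, :] + y[:, 1, :],
                         y[:, 0, :] - y[:, 1, :]), dim=1)
        h *= 2
    return y.reshape(n)

print(fwht(torch.ones(4)))  # tensor([4., 0., 0., 0.])
```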
In the case of `backend='pytorch'` I have to modify functions like `_sanitize_matmul(self, op, swap=False)` to make the code work. Otherwise it raises `ValueError('dimensions must agree')`. Indeed, `op.size` returns something like `<built-in method size of Tensor object at 0x7f881044a4f0>`, because `torch.Tensor.size` is a method, whereas `numpy.ndarray.size` is a plain integer attribute.
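The mismatch is easy to reproduce without lazylinop:

```python
import numpy as np
import torch

a = np.ones((4, 4))
t = torch.ones(4, 4)

print(a.size)     # 16 -- NumPy's size is an int attribute
print(t.size)     # <built-in method size of Tensor object at 0x...> -- a bound method
print(t.size())   # torch.Size([4, 4]) -- calling it returns the shape instead
print(t.numel())  # 16 -- torch's counterpart of ndarray.size
print(t.shape)    # torch.Size([4, 4]) -- .shape behaves the same for both
```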
Do you think the best idea would be to cast `x` from `torch.Tensor` to `np.ndarray` before computing `Op @ x`?
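That first option could look like the sketch below, with a SciPy `LinearOperator` standing in for the lazy operator (any operator that accepts `np.ndarray` inputs would play the same role):

```python
import numpy as np
import torch
from scipy.sparse.linalg import aslinearoperator

# Stand-in for a lazy linear operator that only understands np.ndarray.
Op = aslinearoperator(np.eye(8))

x = torch.randn(8, dtype=torch.float64)

# Cast to NumPy before the product, then back to torch. On CPU,
# .numpy() and torch.from_numpy() share memory, so the casts are cheap.
y = torch.from_numpy(Op @ x.numpy())
```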
Or to modify the code in such a way that `x` could be a `torch.Tensor` directly?
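For the second option, one possible fix (a hypothetical sketch only, not the actual `_sanitize_matmul`) would be to read dimensions through `.shape`, which both `np.ndarray` and `torch.Tensor` expose as a tuple-like attribute:

```python
import numpy as np
import torch

def _check_matmul_dims(op_shape, x):
    # Hypothetical backend-agnostic dimension check: `.shape` behaves
    # identically for np.ndarray and torch.Tensor, unlike `.size`.
    if op_shape[1] != x.shape[0]:
        raise ValueError('dimensions must agree')

_check_matmul_dims((8, 8), np.ones(8))     # OK
_check_matmul_dims((8, 8), torch.ones(8))  # OK
_check_matmul_dims((8, 8), torch.ones(4))  # raises ValueError
```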