I have a problem I cannot manage to solve. I have read the PyTorch documentation, and my code looks correct to me, but an error still occurs:
indices = ...
X = ...
Y = ...
print(torch.min(indices)) # tensor(0, device='cuda:0')
print(torch.max(indices)) # tensor(30, device='cuda:0')
print(indices.dtype) # torch.int64
print(indices.shape) # torch.Size([498])
print(X.shape) # torch.Size([498, 2048])
print(X.dtype) # torch.float32
print(Y.shape) # torch.Size([31, 2048])
print(Y.dtype) # torch.float32
Y = Y.index_add(0, indices, X)
The call to index_add results in:
RuntimeError: number of dims don't match in permute
What am I doing wrong? Thank you in advance.
EDIT: From the documentation:
The dim-th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
In this case dim=0, tensor=X, and self=Y, so X.shape[0] should equal len(indices) and X.shape[1] should equal Y.shape[1]. Both conditions hold here, so might it be a PyTorch bug?
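For reference, a minimal standalone sketch with the same shapes (random illustrative values, not my actual tensors) satisfies the documented constraint and runs without error:

import torch

indices = torch.randint(0, 31, (498,), dtype=torch.int64)  # len(index) == 498
X = torch.randn(498, 2048)  # tensor: dim 0 matches len(indices)
Y = torch.randn(31, 2048)   # self: dim 1 matches X.shape[1]

Y = Y.index_add(0, indices, X)
print(Y.shape)  # torch.Size([31, 2048])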
EDIT2: The error occurs only for specific tensor values, and only when using:
torch.use_deterministic_algorithms(True)
torch.manual_seed(1234)
So it seems to be a PyTorch bug.
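As a possible workaround (a sketch on my part, not a confirmed fix), the same accumulation can be expressed as a matrix product with a one-hot matrix, which avoids the index_add code path entirely:

import torch.nn.functional as F

# one_hot(indices) has shape [498, 31]; its transpose times X sums the
# rows of X that share the same index, which is exactly what
# index_add(0, indices, X) computes. Note that deterministic CUDA matmul
# may require the CUBLAS_WORKSPACE_CONFIG environment variable to be set.
Y = Y + F.one_hot(indices, num_classes=Y.shape[0]).to(X.dtype).T @ X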