There is no direct way in PyTorch to accomplish this (i.e., via a single built-in function), but you can get the same result with a short workaround.
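For reference, here is a pair of inputs consistent with the outputs below (the original tensors are not shown in the question, so these exact values are an assumption; CPU is used here so the snippet runs anywhere, whereas the outputs below were produced on `cuda:0`):

```python
import torch

# Assumed example inputs, chosen so that their flattened concatenation
# matches the Out[52] tensor shown below.
ListA = torch.tensor([[1., 2.], [1., 3.], [4., 8.]])
ListB = torch.tensor([[5., 7.], [1., 2.], [4., 8.]])
```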
Flattening both tensors:
combined = torch.cat((ListA.view(-1), ListB.view(-1)))
combined
Out[52]: tensor([1., 2., 1., 3., 4., 8., 5., 7., 1., 2., 4., 8.], device='cuda:0')
Finding unique elements:
unique, counts = combined.unique(return_counts=True)
intersection = unique[counts > 1].reshape(-1, ListA.shape[1])
intersection
Out[55]:
tensor([[1., 2.],
[4., 8.]], device='cuda:0')
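One caveat worth knowing (treat this as a sketch with assumed example values): `counts > 1` also fires when an element is duplicated *within* a single tensor, so a value can show up in the "intersection" without appearing in both inputs:

```python
import torch

# A contains the value 3. twice; B does not contain 3. at all,
# yet the counts > 1 trick still reports it as shared.
A = torch.tensor([[1., 3.], [3., 2.]])
B = torch.tensor([[5., 6.], [7., 8.]])

combined = torch.cat((A.view(-1), B.view(-1)))
unique, counts = combined.unique(return_counts=True)
leaked = unique[counts > 1]  # includes 3. even though B has no 3.
```

So the trick is only safe if each tensor is known to contain no repeated elements (or if such duplicates are acceptable as matches).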
Benchmarks:
def find_intersection_two_tensors(A: torch.Tensor, B: torch.Tensor):
    combined = torch.cat((A.view(-1), B.view(-1)))
    unique, counts = combined.unique(return_counts=True)
    return unique[counts > 1].reshape(-1, A.shape[1])
Timing it:
%timeit find_intersection_two_tensors(ListA, ListB)
207 μs ± 2.4 μs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
If you are OK with moving to the CPU, NumPy can be a better choice performance-wise:
def find_intersection_two_ndarray(AGPU: torch.Tensor, BGPU: torch.Tensor):
    A = AGPU.view(-1).cpu().numpy()
    B = BGPU.view(-1).cpu().numpy()
    C = np.intersect1d(A, B)  # sorted, unique common elements (1-D)
    return torch.from_numpy(C).cuda('cuda:0')
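Note that `np.intersect1d` returns the sorted, unique common elements as a flat array, so unlike the first function this one does not reconstruct 2-column rows. A small CPU-only illustration of that semantics (example values assumed, matching the flattened tensors above):

```python
import numpy as np

A = np.array([1., 2., 1., 3., 4., 8.])
B = np.array([5., 7., 1., 2., 4., 8.])
# intersect1d deduplicates and sorts: the result is 1-D, not row-shaped.
common = np.intersect1d(A, B)  # array([1., 2., 4., 8.])
```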
Timing it:
%timeit find_intersection_two_ndarray(ListA, ListB)
85.4 μs ± 1.57 μs per loop (mean ± std. dev. of 7 runs, 1000 loops each)