Here is an implementation of the broadcasting algorithm from the spec:
```python
import operator

def broadcasted_shape(sh1, sh2):
    if not isinstance(sh1, (tuple, list)) or not isinstance(sh2, (tuple, list)):
        raise TypeError("shapes must be tuples or lists")
    shape1 = tuple(operator.index(i) for i in sh1)
    shape2 = tuple(operator.index(i) for i in sh2)
    n1 = len(shape1)
    n2 = len(shape2)
    n = max(n1, n2)
    shape = [0] * n
    # Walk the result shape from the trailing dimension to the leading one,
    # treating missing dimensions of the shorter shape as 1.
    i = n - 1
    while i >= 0:
        _n1 = n1 - n + i
        d1 = shape1[_n1] if _n1 >= 0 else 1
        _n2 = n2 - n + i
        d2 = shape2[_n2] if _n2 >= 0 else 1
        if d1 == 1:
            shape[i] = d2
        elif d2 == 1 or d2 == d1:
            shape[i] = d1
        else:
            raise ValueError(
                f"shapes {shape1} and {shape2} are not broadcast-compatible"
            )
        i = i - 1
    return tuple(shape)
```
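For comparison, the same right-aligned pairing of dimensions can be expressed more compactly with `itertools.zip_longest` over the reversed shapes. This is a hypothetical alternative sketch, not part of the spec; the function name `broadcast_shapes_alt` is my own:

```python
from itertools import zip_longest

def broadcast_shapes_alt(sh1, sh2):
    # Zipping the reversed shapes with fillvalue=1 pads the shorter
    # shape with leading 1s, which is exactly the spec's alignment rule.
    result = []
    for d1, d2 in zip_longest(reversed(sh1), reversed(sh2), fillvalue=1):
        if d1 == 1:
            result.append(d2)
        elif d2 == 1 or d1 == d2:
            result.append(d1)
        else:
            raise ValueError(
                f"shapes {sh1} and {sh2} are not broadcast-compatible"
            )
    return tuple(reversed(result))
```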
With this implementation, broadcasting a 0-d array to an empty 1-d array (shape `(0,)`) is allowed, which is consistent with NumPy. I am not sure why this is a logical thing to do, other than to stay compatible with NumPy.
```python
In [1]: from broadcast import broadcasted_shape

In [2]: broadcasted_shape((1,), (0,))
Out[2]: (0,)

In [3]: broadcasted_shape(tuple(), (0,))
Out[3]: (0,)

In [4]: import numpy as np

In [5]: np.broadcast_to(np.array(0), (0,)).shape
Out[5]: (0,)

In [6]: np.broadcast_to(np.array([0]), (0,)).shape
Out[6]: (0,)
```
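For what it's worth, NumPy's own `np.broadcast_shapes` (available since NumPy 1.20) appears to follow the same rule for these cases:

```python
import numpy as np

# Both a 1-element and a 0-d shape broadcast to the empty 1-d shape (0,).
assert np.broadcast_shapes((1,), (0,)) == (0,)
assert np.broadcast_shapes((), (0,)) == (0,)
```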
The purpose of this issue is to discuss this behavior. If this is as designed, please feel free to close.