I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond — i.e. shuffle them in unison with respect to their leading indices.
This code works, and illustrates my goals:
import numpy

def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    shuffled_a = numpy.empty(a.shape, dtype=a.dtype)
    shuffled_b = numpy.empty(b.shape, dtype=b.dtype)
    permutation = numpy.random.permutation(len(a))
    for old_index, new_index in enumerate(permutation):
        shuffled_a[new_index] = a[old_index]
        shuffled_b[new_index] = b[old_index]
    return shuffled_a, shuffled_b
For example:
>>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
>>> b = numpy.asarray([1, 2, 3])
>>> shuffle_in_unison(a, b)
(array([[2, 2],
        [1, 1],
        [3, 3]]), array([2, 1, 3]))
However, this feels clunky, inefficient, and slow, and it requires making a copy of each array. I’d rather shuffle them in place, since they’ll be quite large.
Is there a better way to go about this? Faster execution and lower memory usage are my primary goals, but elegant code would be nice, too.
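For context, a common alternative sketch is to draw a single permutation of the leading indices and apply it to both arrays with fancy indexing. Note that this still returns copies rather than shuffling in place, so it does not satisfy the memory goal by itself:

```python
import numpy as np

def unison_shuffled_copies(a, b):
    # One permutation of the row indices, applied to both arrays.
    assert len(a) == len(b)
    p = np.random.permutation(len(a))
    return a[p], b[p]  # fancy indexing returns copies, not views

a = np.asarray([[1, 1], [2, 2], [3, 3]])
b = np.asarray([1, 2, 3])
a2, b2 = unison_shuffled_copies(a, b)
# Each row of a2 still matches the corresponding entry of b2.
```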
One other thought I had was this:
def shuffle_in_unison_scary(a, b):
    rng_state = numpy.random.get_state()
    numpy.random.shuffle(a)
    numpy.random.set_state(rng_state)
    numpy.random.shuffle(b)
This works…but it’s a little scary, as I see little guarantee it’ll continue to work. It doesn’t look like the sort of thing that’s guaranteed to survive across numpy versions, for example.
Your “scary” solution does not appear scary to me. Calling shuffle() for two sequences of the same length results in the same number of calls to the random number generator, and these are the only “random” elements in the shuffle algorithm. By resetting the state, you ensure that the calls to the random number generator will give the same results in the second call to shuffle(), so the whole algorithm will generate the same permutation.
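That argument can be checked directly: shuffling two equal-length arrays from the same saved state produces the same permutation. A small sanity check, using b = 10*a so correspondence is easy to verify:

```python
import numpy as np

a = np.arange(10)
b = np.arange(10) * 10  # b[i] corresponds to a[i]

state = np.random.get_state()
np.random.shuffle(a)
np.random.set_state(state)  # rewind the generator before the second shuffle
np.random.shuffle(b)

# Both shuffles consumed identical random numbers, so the rows still align.
assert np.array_equal(b, a * 10)
```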
If you don’t like this, a different solution would be to store your data in one array instead of two right from the beginning, and create two views into this single array simulating the two arrays you have now. You can use the single array for shuffling and the views for all other purposes.
Example: Let’s assume the arrays a and b look like this:
a = numpy.array([[[ 0.,  1.,  2.],
                  [ 3.,  4.,  5.]],
                 [[ 6.,  7.,  8.],
                  [ 9., 10., 11.]],
                 [[12., 13., 14.],
                  [15., 16., 17.]]])
b = numpy.array([[0., 1.],
                 [2., 3.],
                 [4., 5.]])
We can now construct a single array containing all the data:
c = numpy.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
# array([[  0.,   1.,   2.,   3.,   4.,   5.,   0.,   1.],
#        [  6.,   7.,   8.,   9.,  10.,  11.,   2.,   3.],
#        [ 12.,  13.,  14.,  15.,  16.,  17.,   4.,   5.]])
Now we create views simulating the original a and b:
a2 = c[:, :a.size//len(a)].reshape(a.shape)
b2 = c[:, a.size//len(a):].reshape(b.shape)
The data of a2 and b2 is shared with c. To shuffle both arrays simultaneously, use numpy.random.shuffle(c).
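Putting the pieces together, a runnable sketch of the whole recipe (same arrays as above) looks like this:

```python
import numpy as np

a = np.arange(18, dtype=float).reshape(3, 2, 3)
b = np.arange(6, dtype=float).reshape(3, 2)

# Pack both arrays row-wise into one 2-D array, then carve out views.
c = np.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
a2 = c[:, :a.size // len(a)].reshape(a.shape)
b2 = c[:, a.size // len(a):].reshape(b.shape)

np.random.shuffle(c)  # shuffles rows of c, and therefore a2 and b2 together

# Rows still correspond: b2's row [2i, 2i+1] travels with a's original row i.
```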
In production code, you would of course try to avoid creating the original a and b at all and right away create c, a2 and b2.
This solution could be adapted to the case that a and b have different dtypes.
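For the different-dtype case, one possible adaptation (a sketch, not spelled out in the answer itself) is a structured array, whose named fields act as typed views into the shared buffer:

```python
import numpy as np

# Hypothetical layout: each record pairs one float row of 'a' with one int row of 'b'.
c = np.empty(3, dtype=[('a', np.float64, (2, 3)), ('b', np.int64, (2,))])
a2 = c['a']  # view, shape (3, 2, 3), dtype float64
b2 = c['b']  # view, shape (3, 2), dtype int64

a2[:] = np.arange(18).reshape(3, 2, 3)
b2[:] = np.arange(6).reshape(3, 2)

np.random.shuffle(c)  # permutes whole records, keeping each a-row with its b-row
```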
Re: the scary solution: I just worry that arrays of different shapes could (conceivably) yield different numbers of calls to the rng, which would cause divergence. However, I think you are right that the current behavior is perhaps unlikely to change, and a very simple doctest does make confirming correct behavior very easy…
Jan 5, 2011 at 17:49
I like your suggested approach, and could definitely arrange to have a and b start life as a unified c array. However, a and b will need to be contiguous shortly after shuffling (for efficient transfer to a GPU), so I think that, in my particular case, I’d end up making copies of a and b anyway. 🙁
Jan 5, 2011 at 17:51
@Josh: Note that numpy.random.shuffle() operates on arbitrary mutable sequences, such as Python lists or NumPy arrays. The array shape does not matter, only the length of the sequence. This is very unlikely to change in my opinion.
Jan 5, 2011 at 19:11
I didn’t know that. That makes me much more comfortable with it. Thank you.
Jan 5, 2011 at 19:17
@SvenMarnach: I posted an answer below. Can you comment on whether you think it makes sense / is a good way to do it?
– ajfbiw.s Feb 10, 2016 at 17:43
import numpy as np
from sklearn.utils import shuffle

X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
X, y = shuffle(X, y, random_state=0)
To learn more, see https://scikit-learn.org/stable/modules/generated/sklearn.utils.shuffle.html
This solution creates copies (“The original arrays are not impacted”), whereas the author’s “scary” solution doesn’t.
Mar 14, 2020 at 9:52
Six years later, I’m amused and surprised by how popular this question proved to be. And in a bit of delightful coincidence, for Go 1.10 I contributed math/rand.Shuffle to the standard library. The design of the API makes it trivial to shuffle two arrays in unison, and doing so is even included as an example in the docs.
Dec 2, 2017 at 1:53
This is a different programming language, however.
Mar 15, 2021 at 8:45
