
Matlab: need some help for a seemingly simple vectorization of an operation

SebDL Published in 2018-01-12 00:28:56Z

I would like to optimize this piece of Matlab code, but so far I have failed. I have tried various combinations of repmat, sum, and cumsum, but none of my attempts give the correct result. I would appreciate some expert guidance on this tough problem.

S=1000; T=10;
X = rand(T,S);                 % S vectors of T random numbers, one per column
Result = zeros(S,1);
for c=1:T-1
    for cc=c+1:T
        Result = Result + abs(X(cc,:) - X(c,:) - (cc-c)/T).';
    end
end

Basically I create 1000 vectors of 10 random numbers each, and for each vector I take every pair of values (say the mth and the nth) and compute the difference between them, minus (n-m)/T. I sum the absolute values over all possible pairs and return the result for every vector.

I hope this explanation is clear,

Thanks a lot in advance.

Cris Luengo Reply to 2018-01-12 05:36:54Z

It is at least easy to vectorize your inner loop:

Result = zeros(S,1);
for c=1:T-1
    d = X(c+1:T,:) - X(c,:) - ((c+1:T).'-c)/T;
    Result = Result + sum(abs(d),1).';
end

Here, I'm using the new automatic singleton expansion. If you have an older version of MATLAB you'll need to use bsxfun for two of the subtraction operations. For example, X(c+1:T,:)-X(c,:) is the same as bsxfun(@minus,X(c+1:T,:),X(c,:)).
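For instance, the two forms give identical results; a small sketch (unrelated to the data above) to illustrate:

A = rand(5,3);
v = rand(1,3);
d1 = A - v;                     % implicit expansion, R2016b and newer
d2 = bsxfun(@minus, A, v);      % equivalent on older versions
isequal(d1, d2)                 % returns logical 1 (true)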

What is happening in this bit of code is that instead of looping over cc=c+1:T, we take all of those indices at once: I simply replaced cc with c+1:T. d is then a matrix with multiple rows (9 in the first iteration, and one fewer in each subsequent iteration).

Surprisingly, this is slower than the double loop, and similar in speed to Jodag's answer.

Next, we can try to improve the indexing. Note that the code above extracts data row-wise from the matrix, but MATLAB stores data column-wise, so it is more efficient to extract a column than a row. Let's transpose X:

X2 = X.';                      % S-by-T: each vector is now a row, so we index columns
Result = zeros(S,1);
for c=1:T-1
    d = X2(:,c+1:T) - X2(:,c) - ((c+1:T)-c)/T;
    Result = Result + sum(abs(d),2);
end

This is more than twice as fast as the code that indexes row-wise.

But of course the same trick can be applied to the code in the question, speeding it up by about 50%:

X2 = X.';                      % transpose once so the inner loop indexes columns
Result = zeros(S,1);
for c=1:T-1
    for cc=c+1:T
        Result = Result + abs(X2(:,cc) - X2(:,c) - (cc-c)/T);
    end
end

My takeaway message from this exercise is that MATLAB's JIT compiler has improved things a lot. Back in the day, any sort of loop would grind code to a halt. Today a loop is not necessarily the worst approach, especially if all you do inside it is call built-in functions.

Wolfie Reply to 2018-01-12 08:33:45Z

The nchoosek(v,k) function generates all combinations of the elements in v taken k at a time. We can use it to generate all possible pairs of indices and then use those to vectorize the loops. It appears that in this case the vectorization doesn't actually improve performance (at least on my machine with 2017a). Maybe someone will come up with a more efficient approach.

idx = nchoosek(1:T,2);          % all pairs (m,n) with m < n, one pair per row
d = bsxfun(@minus,(X(idx(:,2),:) - X(idx(:,1),:)), (idx(:,2)-idx(:,1))/T);
Result = sum(abs(d),1)';
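As a sanity check, the pair-based version can be compared against the double loop; a sketch, assuming X is the T-by-S matrix of random vectors with one vector per column:

T = 10; S = 1000;
X = rand(T,S);                              % one random vector per column
% reference: double loop over all pairs
Res1 = zeros(S,1);
for c = 1:T-1
    for cc = c+1:T
        Res1 = Res1 + abs(X(cc,:) - X(c,:) - (cc-c)/T).';
    end
end
% vectorized: one row of d per pair
idx = nchoosek(1:T,2);
d = bsxfun(@minus, X(idx(:,2),:) - X(idx(:,1),:), (idx(:,2)-idx(:,1))/T);
Res2 = sum(abs(d),1).';
max(abs(Res1 - Res2))                       % differs only by floating-point roundoff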
SebDL Reply to 2018-01-12 10:57:07Z

Update: here are the running times for the different proposals (10^5 trials):

So it looks like transposing the matrix is the most effective intervention, and my original double-loop implementation, amazingly, beats the vectorized versions. However, in my hands (2017a) the improvement is only 16.6% over the original using the mean (18.2% using the median).

Maybe there is still room for improvement?

