Send row of 2D vector with MPI

Greetings,

I am trying to send one "row" of a vector of vectors with an MPI reduction operation, but I get a runtime error.

A minimal example:


#include <vector>
#include <mpi.h>

int main(int argc, char *argv[]) {


MPI_Init(&argc, &argv);				// initialize MPI
MPI_Comm comm = MPI_COMM_WORLD;

std:: vector < std:: vector <double> > some_2d_vec {{1,2,3}, {10,20,30}};

std:: vector < std:: vector <double> > reduced_vec {2};    // vector of two empty vectors

int no_of_elems = some_2d_vec[0].size();   // number of elements to send via mpi 

reduced_vec[0].resize( no_of_elems ); // make enough space in the receiving vector
 
MPI_Reduce( &some_2d_vec[0], &reduced_vec[0], no_of_elems, MPI_DOUBLE, MPI_SUM, 0, comm);  // send first row


MPI_Finalize();    // error

}


The error says
double free or corruption (fasttop)
*** Process received signal ***
Signal: Aborted (6)
Signal code:  (-6)


Any idea what goes wrong? I know one should not try to send a complete 2D vector in one go, since MPI expects contiguous memory, but a 1D vector usually works...
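
For comparison, a plain 1D version like this runs fine for me (minimal sketch, same style as the code above):

#include <vector>
#include <mpi.h>

int main(int argc, char *argv[]) {

    MPI_Init(&argc, &argv);
    MPI_Comm comm = MPI_COMM_WORLD;

    std::vector<double> some_1d_vec {1, 2, 3};               // plain 1D vector: elements are contiguous
    std::vector<double> reduced_vec( some_1d_vec.size() );   // receiving vector of the same length

    int no_of_elems = some_1d_vec.size();

    // for a plain vector<double>, &some_1d_vec[0] really is a double* to the first element
    MPI_Reduce( &some_1d_vec[0], &reduced_vec[0], no_of_elems, MPI_DOUBLE, MPI_SUM, 0, comm );

    MPI_Finalize();
}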

PiF

Edit: it happens even with only 1 process.
(1) This is not really a "send", except insofar as it is a reduction operation that dumps the elementwise sums on the root process.

(2) You are confusing std::vector with C-style arrays - MPI works with the latter. This is one place where people who tell you to use std::vector over C-style arrays are doing you no favours at all.


&some_2d_vec[0] is the address of a vector, NOT a pointer to the start of its data buffer. Change
MPI_Reduce( &some_2d_vec[0], &reduced_vec[0], no_of_elems, MPI_DOUBLE, MPI_SUM, 0, comm);
to either (note the [0][0], not just [0] )
MPI_Reduce( &some_2d_vec[0][0], &reduced_vec[0][0], no_of_elems, MPI_DOUBLE, MPI_SUM, 0, comm);
or (better):
MPI_Reduce( some_2d_vec[0].data(), reduced_vec[0].data(), no_of_elems, MPI_DOUBLE, MPI_SUM, 0, comm);
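
To make the difference concrete (illustration only, not part of your code):

std::vector<std::vector<double>> v {{1, 2, 3}};

&v[0]          // type std::vector<double>*  : the inner vector OBJECT (its internal pointers), not the doubles
&v[0][0]       // type double*               : first element of the inner vector's data buffer
v[0].data()    // type double*               : same address, stated explicitly

Writing no_of_elems doubles through &reduced_vec[0] therefore overwrites the inner vector's internal bookkeeping (on a typical 64-bit implementation the vector object itself is only a few pointers wide), which is why you later get "double free or corruption" when the vectors are destroyed.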



In full, with 4 processors:
C:\c++>"C:\Program Files\Microsoft MPI\bin"\mpiexec -n 4 test 
4
8
12
#include <iostream>
#include <vector>
#include <mpi.h>

int main(int argc, char *argv[])
{
   MPI_Init( &argc, &argv );
   MPI_Comm comm = MPI_COMM_WORLD;

   std::vector < std::vector <double> > some_2d_vec {{1,2,3}, {10,20,30}};

   std::vector < std::vector <double> > reduced_vec {2};    // vector of two empty vectors

   int no_of_elems = some_2d_vec[0].size();   // number of elements to send via mpi

   reduced_vec[0].resize( no_of_elems ); // make enough space in the receiving vector
 
// MPI_Reduce( &some_2d_vec[0][0], &reduced_vec[0][0], no_of_elems, MPI_DOUBLE, MPI_SUM, 0, comm);
   MPI_Reduce( some_2d_vec[0].data(), reduced_vec[0].data(), no_of_elems, MPI_DOUBLE, MPI_SUM, 0, comm);

// Let's at least see the output ...
   int rank;
   MPI_Comm_rank( MPI_COMM_WORLD, &rank );
   if ( rank == 0 ) 
   {
      for ( double e : reduced_vec[0] ) std::cout << e <<'\n';
   }

   MPI_Finalize();
}
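
If you ever need to reduce all the rows in one call, the usual workaround is to keep the matrix in a single flat vector, since a vector of vectors is not one contiguous block. Something along these lines (a sketch of the idea, not a drop-in replacement for your code):

#include <iostream>
#include <vector>
#include <mpi.h>

int main(int argc, char *argv[])
{
   MPI_Init( &argc, &argv );

   int nrows = 2, ncols = 3;

   // one contiguous buffer, element (r,c) stored at index r * ncols + c
   std::vector<double> flat {1, 2, 3, 10, 20, 30};
   std::vector<double> reduced( nrows * ncols );

   MPI_Reduce( flat.data(), reduced.data(), nrows * ncols, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );

   int rank;
   MPI_Comm_rank( MPI_COMM_WORLD, &rank );
   if ( rank == 0 )
   {
      for ( int r = 0; r < nrows; r++ )
      {
         for ( int c = 0; c < ncols; c++ ) std::cout << reduced[r * ncols + c] << ' ';
         std::cout << '\n';
      }
   }

   MPI_Finalize();
}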


Thank you @lastchance.

I forgot that I had to index into the inner vector as well...