MPI on virtual nodes

To run MPI on virtual nodes, one has to specify the network interface explicitly; otherwise Open MPI's TCP transport may try to communicate over an interface on which the virtual nodes are not reachable. The argument needed (to mpirun) is:

<code>--mca btl_tcp_if_include tun0</code>
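For example, a complete invocation might look like the following; <code>myhosts</code> and <code>./my_mpi_app</code> are placeholders for an actual host file and executable:

<pre>
# Run 8 ranks, restricting Open MPI's TCP transport to the tun0 virtual interface.
# myhosts and ./my_mpi_app are placeholders for your own host file and program.
mpirun -np 8 --hostfile myhosts --mca btl_tcp_if_include tun0 ./my_mpi_app
</pre>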

Alternatively, one may specify the network to be used for communication as a CIDR subnet rather than an interface name; a /23 mask spans the two adjacent subnets 192.168.100.0/24 and 192.168.101.0/24, so a single setting covers both the real and virtual node networks:

<code>--mca btl_tcp_if_include 192.168.100.0/23</code>
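Open MPI also reads any MCA parameter from an environment variable made of the prefix <code>OMPI_MCA_</code> plus the parameter name, so the setting can be exported once in a shell or job script instead of being repeated on every command line. A sketch, with the same placeholder host file and executable as above:

<pre>
# Equivalent to passing --mca btl_tcp_if_include 192.168.100.0/23 to mpirun.
export OMPI_MCA_btl_tcp_if_include=192.168.100.0/23
mpirun -np 8 --hostfile myhosts ./my_mpi_app
</pre>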

For more background, see: http://stackoverflow.com/questions/10466119/mpi-send-stops-working-after-mpi-barrier/10473106#10473106