MPI on virtual nodes
=OBSOLETE=
 
To run MPI on virtual nodes, one has to tell Open MPI which network interface to use for TCP communication. The argument needed (passed to <code>mpirun</code>) is:
  
 
<code>--mca btl_tcp_if_include tun0</code>
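
For context, a complete <code>mpirun</code> invocation might look like the following; the process count, host file, and program name here are placeholders, not values from this wiki:

<code>mpirun -np 8 --hostfile hosts --mca btl_tcp_if_include tun0 ./my_mpi_app</code>

<code>tun0</code> is typically the tunnel device through which the virtual nodes are reached; if the tunnel interface has a different name on your nodes, use that name instead.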
 
Alternatively, one may specify the network to be used for communication in CIDR notation, so that it covers both the real and virtual node networks:

<code>--mca btl_tcp_if_include 192.168.100.0/23</code>
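
A <code>/23</code> mask covers the addresses 192.168.100.0 through 192.168.101.255, i.e. both the 192.168.100.0/24 and 192.168.101.0/24 networks. Assuming the real and virtual nodes live on those two subnets, an equivalent, more explicit form is a comma-separated list:

<code>--mca btl_tcp_if_include 192.168.100.0/24,192.168.101.0/24</code>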
  
 
For more background, see:

http://stackoverflow.com/questions/10466119/mpi-send-stops-working-after-mpi-barrier/10473106#10473106
