A basic ingredient of coupling is communication: the participants you want to couple need to be able to exchange data. On this page, we explain how communication between participants can be configured.

The m2n tag

For every pair of participants that should exchange data, you have to define an m2n communication, for example like this:

<m2n:sockets from="MySolver1" to="MySolver2" exchange-directory="../"/>

This establishes an m2n (i.e. parallel, from the M processes of one participant to the N processes of the other) communication channel based on TCP/IP sockets between MySolver1 and MySolver2. The network used defaults to the loopback interface of your OS, which allows running multiple participants on a single machine.

In some situations, you need to specify a network interface manually. The most common case is participants distributed over multiple hosts, i.e. running on clusters. This may also be necessary if your participants run in isolated Docker containers or if your system does not provide a loopback interface.

To manually specify a network interface, use the network="..." attribute. Common interfaces on clusters are the local Ethernet "eth0" or the InfiniBand system "ib0".

<m2n:sockets from="MySolver1" to="MySolver2" network="ib0" />

On Unix systems, you can list network interfaces using the following command:

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:e0:03:62 brd ff:ff:ff:ff:ff:ff

The alternative to TCP/IP sockets is MPI ports (an MPI 2.0 feature):

<m2n:mpi .../>

As the ports functionality is a rarely used MPI feature, it has robustness issues in several MPI implementations (OpenMPI, for example). In principle, MPI gives you faster communication, roughly by a factor of 10, but for most applications you will not notice any difference, as both options are very fast. We recommend using sockets.
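Spelled out, the mpi variant mirrors the sockets tag. A minimal sketch, assuming the same participant names as above and the same from/to/exchange-directory attributes (an assumption; check the configuration reference of your preCICE version for the exact attribute set):

```xml
<!-- hedged sketch: mpi variant with the same attributes as the sockets tag above -->
<m2n:mpi from="MySolver1" to="MySolver2" exchange-directory="../" />
```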

Which participant is from and which one is to makes almost no difference and cannot lead to a deadlock. Only for massively parallel runs can it make a performance difference at initialization. For such cases, ask us for advice.

The exchange-directory should point to the same location for both participants. preCICE uses this location to exchange hidden files containing the initial connection information. It defaults to ".", meaning both participants need to be started in the same folder. We give some best practices on how to arrange your folder structure and start the coupled solvers here.
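To illustrate how the exchange-directory is resolved, assume two hypothetical run directories, fluid/ and solid/, placed next to each other, with one solver started in each:

```xml
<!-- both solvers are launched in sibling folders, e.g. ./fluid and ./solid;
     exchange-directory="../" then resolves to the common parent for both,
     so each participant can find the hidden connection files of the other -->
<m2n:sockets from="MySolver1" to="MySolver2" exchange-directory="../" />
```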

Advanced: the intra-comm tag

If you build preCICE without MPI (and only in this case), you might also need to change the communication that preCICE uses between the ranks of a single parallel participant. You can specify TCP/IP sockets with:

<participant name="MySolver1">
  ...
  <intra-comm:sockets/>
  ...
</participant>