Steps to Assemble MPI.NET Applications
Our next release will be available soon; it includes a much faster version of the Spawn method.
We are testing a new version of the MPI_Comm_spawn
primitive which is significantly faster and allows spawning new processes.
The MPI_Comm_spawn_multiple primitive is now implemented
and available on the downloads page.
We are now entering the second part of our project. See the
Project Status page for more details.
MPI is the de facto standard for HPC. The 2.0 standard specifies bindings for Fortran, C, and C++. However, these clearly lack high-level abstractions such as OO encapsulation or simple portability across heterogeneous platforms, i.e. distributed sets of different
CPUs, possibly interconnected by different networks.
Nearly all modern languages have developed extensions to enable their use with MPI: this is the case for Java (JavaMPI, MPIJava), Python (pympi), Perl, and Ruby. As far as C# is concerned, the Open Systems Laboratory at Indiana University has
proposed both a low-level binding (strongly inspired by the C++ binding) and a high-level one, called MPI.NET (see Willcock, J., Lumsdaine, A., Robison, A.: Using MPI with C# and the Common Language Infrastructure. Concurrency and Computation: Practice and
Experience 17(7–8) (2005) 895–917).
The C# binding is relatively straightforward. Each object of the C# binding contains the underlying C representation of the corresponding MPI object. Similarly, the high-level objects in MPI.NET are usually containers for underlying MPI objects. According to the referenced
article, the performance of the current C# binding is reasonable. However, the implementation and tests have only been partial: they did not cover collective communication, one of the key features of MPI, nor did they cover non-blocking communication
or other advanced features of MPI such as the use of non-native, user-defined datatypes.
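The container relationship described above can be sketched as a thin high-level wrapper owning a low-level handle. The following Python sketch is purely illustrative: the names LowLevelComm and Communicator are hypothetical and only show the layering, not MPI.NET's actual classes.

```python
# Illustrative sketch (hypothetical names): a high-level object that
# wraps a low-level handle, mirroring how MPI.NET's high-level objects
# contain the underlying MPI objects of the C binding.

class LowLevelComm:
    """Stands in for the raw C-level MPI communicator handle."""
    def __init__(self, size):
        self.size = size

class Communicator:
    """High-level wrapper: owns the low-level handle, exposes an OO API."""
    def __init__(self, handle):
        self._handle = handle  # the contained low-level object

    @property
    def size(self):
        return self._handle.size  # delegate to the wrapped handle

world = Communicator(LowLevelComm(size=4))
print(world.size)
```

The design keeps the object-oriented surface independent of the C representation, so the wrapper can add safety and convenience without changing the underlying MPI semantics.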
The goal of this project is to build upon MPI.NET in order to complement it with the missing features, mainly regarding collective communication. These could either benefit from C#'s native support for such communication, or be programmed
on top of the provided MPI_Send/MPI_Recv encapsulations. C# and .NET features such as fault tolerance or support for dynamicity will also be studied, in order to make the MPI# implementation robust on large, dynamic, and heterogeneous platforms.
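The idea of layering collectives on top of point-to-point primitives can be sketched in a few lines. This is not MPI.NET code: ranks are simulated here with threads and per-rank message queues, and the broadcast uses a naive root-sends-to-all scheme (real implementations would use a tree to reduce the root's load).

```python
# A minimal sketch of building a collective (broadcast) from only
# send/recv point-to-point operations. Ranks are simulated with
# threads; each rank's inbox is a queue.
import queue
import threading

SIZE = 4
inbox = [queue.Queue() for _ in range(SIZE)]
results = [None] * SIZE

def send(dest, msg):
    inbox[dest].put(msg)          # point-to-point send

def recv(rank):
    return inbox[rank].get()      # blocking point-to-point receive

def bcast(rank, root, value=None):
    """Broadcast built only from send/recv: root sends to every rank."""
    if rank == root:
        for dest in range(SIZE):
            if dest != root:
                send(dest, value)
        return value
    return recv(rank)

def worker(rank):
    results[rank] = bcast(rank, root=0,
                          value="hello" if rank == 0 else None)

threads = [threading.Thread(target=worker, args=(r,)) for r in range(SIZE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # every rank now holds the root's value
```

The same layering applies to other collectives (scatter, gather, reduce): as long as correct send/recv encapsulations exist, the collective algorithms can be written entirely in the high-level language.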