MPICH
Overview
MPICH is a free implementation of MPI (the Message Passing Interface) for distributed memory applications. The latest version, MPICH2, implements the MPI-2 standard.
NOTE: The MPICH2 package is under construction and will be announced soon on this page.
The MPICH2 Pascal package contains the bindings (Pascal translations of the C header files), which allow Pascal programs to use the same variables and functions as C programs do. So you can easily convert C examples for MPI to Pascal.
Tutorial
Installation
First you need to install the MPICH2 library on all computers of your cluster. A cluster can be any set of computers or virtual machines. They do not need to be homogeneous: your cluster can, for example, contain both Windows and Linux machines with different numbers of CPUs and different amounts of memory. On the development machines, where you compile your application, you must also install the development libraries and FPC.
Ubuntu / Debian
Under Ubuntu/Debian you can install the required packages with:
sudo apt-get install libmpich-mpd1.0-dev mpich-mpd-bin
There is a how-to for installing MPICH2 on Ubuntu/Debian in the Ubuntu Wiki: https://wiki.ubuntu.com/MpichCluster
Install from source
Download mpich2-1.0.6.tar.gz (or any newer version) from http://www-unix.mcs.anl.gov/mpi/mpich2/ and unpack it.
Read the README carefully; it describes what you need to compile MPICH. Under Ubuntu Feisty:
sudo apt-get install build-essential
You need a directory that is shared by all nodes. In the following steps it is assumed that your home directory is shared.
./configure --prefix=/home/you/mpich-install
make
sudo make install
Extend your path:
export PATH=/home/you/mpich-install/bin:$PATH
Check everything works:
which mpd
which mpiexec
which mpirun
Configuration
MPD expects its configuration file in your home directory, named .mpd.conf (/etc/mpd.conf if you run as root). It should contain one line:
secretword=<secretword>
where <secretword> is a password that should not be your user password. Make the file readable and writable only by you:
chmod 600 .mpd.conf
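The two steps above can be sketched as a small shell snippet (the secret word shown is a made-up placeholder; choose your own):

```shell
# Create the MPD configuration file with restricted permissions.
conf="$HOME/.mpd.conf"
echo "secretword=mr45-x9z" > "$conf"   # placeholder secret word
chmod 600 "$conf"
# Show the resulting permission bits (GNU stat): should print 600.
stat -c '%a' "$conf"
```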
If your home directory is not shared, the file must be copied to all cluster nodes. Check that you can log in via ssh to all cluster nodes without a password:
ssh othermachine date
This should not ask for a password and should print only the date - nothing else.
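If passwordless login is not set up yet, public-key authentication is the usual way. A minimal sketch, assuming OpenSSH ("othermachine" is a placeholder hostname):

```shell
# Generate a key pair with an empty passphrase if none exists yet.
key="$HOME/.ssh/id_rsa"
mkdir -p "$HOME/.ssh"
test -f "$key" || ssh-keygen -q -t rsa -N "" -f "$key"
# Install the public key on every cluster node (placeholder hostname):
# ssh-copy-id othermachine
ls "$key.pub"
```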
Create a file named mpd.hosts with one line per node (hostnames). For example:
host1
host2
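As a sketch, you can generate mpd.hosts and count its lines; the count is the number of nodes you will later pass to mpdboot via -n (the host names are the placeholders from the example above, and -f tells mpdboot which hosts file to use):

```shell
# Write mpd.hosts (host1/host2 are placeholder node names).
printf 'host1\nhost2\n' > mpd.hosts
# Count the nodes listed in the file: prints 2 here.
nodes=$(wc -l < mpd.hosts)
echo "$nodes"
# If this machine is not listed in mpd.hosts, start one extra mpd:
# mpdboot -n $((nodes + 1)) -f mpd.hosts
```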
Test MPD
MPD is the MPICH daemon, which starts, controls and stops the processes on the cluster nodes. Bring up the ring:
mpd &
mpdtrace
mpdallexit
mpdtrace should output the hostname of the machine you are working on. mpdallexit stops the daemon.
Now start the mpd on some machines:
mpdboot -n <number to start>
If the current machine is not part of the cluster (i.e. not listed in mpd.hosts), then you need one additional mpd, so add 1 to the number.
Test:
mpdtrace
mpdringtest
mpdringtest 100
mpiexec -l -n 30 hostname
Test an MPI program:
mpiexec -n 5 /home/you/mpich-install/examples/cpi