PETSc, the Portable, Extensible Toolkit for Scientific Computation, is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It uses the MPI standard for all message-passing communication. Hundreds of applications use PETSc in areas such as nano-simulation, medicine, fusion, geosciences, surface flow, CFD, optimization, and much more.
Since PETSc is one of the most important scientific libraries that runs on top of MPI, I decided to try it out on our Windows HPC cluster. It turned out to be quite easy to get running using Visual C++ 2008, the Microsoft Windows MPI stack, and Cygwin as the build environment.
I took some simple notes to help anyone who needs to get it running on Windows:
1. Install the following: PETSc, the Windows HPC Server 2008 SDK, Microsoft Visual C++ 2008, and Cygwin with Python.
2. Run the Visual Studio command prompt for a 32-bit build, or the Visual Studio Win64 command prompt for a 64-bit build.
This ensures that you have cl (the Microsoft C++ compiler) in your path. Then run c:\cygwin\cygwin.bat to get into the Cygwin environment.
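Before moving on, it's worth confirming from inside Cygwin that the compiler really is visible. A small hedged helper (the function name check_tool is mine, not part of any toolchain):

```shell
# check_tool: report whether a given command (e.g. the MSVC compiler 'cl')
# is reachable in the current PATH. Helper name is illustrative.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1 found"
    else
        echo "$1 missing"
    fi
}

check_tool cl   # should report "cl found" when Cygwin was launched from the VS prompt
```

If cl is missing, you launched Cygwin directly instead of from the Visual Studio command prompt.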
3. Unpack PETSc: gzip -cd petsc*.tar.gz | tar xf -
4. cd into the PETSc directory, then configure PETSc. But first run: export PETSC_DIR=`pwd` # this sets up the bash environment for you.
5. Run the command below (all one line); this configures a 32-bit build. For a 64-bit build, change the library path from i386 to x64.
config/configure.py --with-cc="win32fe cl" --with-fc=0 --download-c-blas-lapack=1
--with-mpi-include="/cygdrive/c/Program Files/Microsoft HPC Pack 2008 SDK/Include"
--with-mpi-lib="/cygdrive/c/Program Files/Microsoft HPC Pack 2008 SDK/Lib/i386/msmpi.lib"
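Per the note in step 5, a 64-bit build should differ only in the MPI library path. This is an untested sketch reconstructed from the 32-bit line, not a configure invocation I have run:

```shell
# Hypothetical 64-bit configure line (untested sketch; the only intended
# change from the 32-bit version is i386 -> x64 in the msmpi.lib path):
config/configure.py --with-cc="win32fe cl" --with-fc=0 --download-c-blas-lapack=1 \
  --with-mpi-include="/cygdrive/c/Program Files/Microsoft HPC Pack 2008 SDK/Include" \
  --with-mpi-lib="/cygdrive/c/Program Files/Microsoft HPC Pack 2008 SDK/Lib/x64/msmpi.lib"
```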
6. (32-bit build only.)
Edit python/BuildSystem/config/packages/MPI.py and comment out the line:
self.functions = ['MPI_Init', 'MPI_Comm_create'
You'll also need to patch the source, because the 32-bit MSMPI stack uses the STDCALL calling convention, so PETSc needs a small patch from the PETSc folks to support MSMPI. The 64-bit stack always uses the fastcall convention, so this step is not needed there.
The patch basically puts an MPIAPI macro in front of all the MPI calls. It is available upon request, or by emailing PETSc support and referring to [PETSC #17869].
7. Once this is all configured, type make all.
This will take a few hours; it's a big library to compile.
8. Let's build example 23, in the directory ~/petsc-2.3.3-p13/src/ksp/ksp/examples/tutorials:
$ make ex23
/cygdrive/c/Users/wenmingy/petsc-2.3.3-p13/bin/win32fe/win32fe cl -o ex23.o -c -wd4996 -MT -Z7 -I/cygdrive/c/Users/wenmingy/petsc-2.3.3-p13/src/dm/mesh/sieve … -I"/cygdrive/c/Program Files/Microsoft HPC Pack 2008 SDK/Include" -D__SDIR__="sr… (output truncated)
PETSc shows you all the link options…
9. Now, let’s run it:
$ mpiexec -n 4 ex23    # run on 4 processes locally
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-030
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-007, absolute=1e-050, divergence=10000
linear system matrix = precond matrix:
type=mpiaij, rows=100, cols=100
total: nonzeros=298, allocated nonzeros=700
not using I-node (on process 0) routines
Norm of error 0.000101234, Iterations 502
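The solver details shown above can also be steered from the command line, since PETSc reads its options database at startup. These are standard PETSc runtime options (the residual history differs per run, so I'm not showing output):

```shell
# Standard PETSc runtime options, usable with any KSP tutorial example:
mpiexec -n 4 ex23 -ksp_monitor     # print the residual norm at each iteration
mpiexec -n 4 ex23 -ksp_type cg     # use conjugate gradients instead of GMRES
mpiexec -n 4 ex23 -pc_type jacobi  # choose a Jacobi preconditioner
mpiexec -n 4 ex23 -ksp_rtol 1e-12  # tighten the relative convergence tolerance
```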
10. That's it; you just solved a linear system in parallel. You can also submit this job to the compute cluster's job scheduler, but since I'm on my laptop, that will be another blog post.
As you can see, with some minor tweaks, PETSc plays nicely with the MSMPI stack.
I would also like to thank Serguei O. for his help!