mpirun not accelerating script

Bug #1065146 reported by Bernardo Kyotoku on 2012-10-10
This bug affects 1 person

Bug Description

I was hoping you could help me with running meep_mpi from python-meep.

I tested a meep_mpi Python script, but the multiprocessing doesn't seem to be working properly.
As a test I ran the same script using a single core and 7 cores. No difference in time was observed: both runs
took 116 seconds. The operating system's CPU monitor showed that 7 cores were in use during the 7-core run and 1 core during the single-core run.

The command I used to run was:
mpirun -np 7 ./

Yes, I imported * from meep_mpi.
I put the Python shebang at the beginning of the script.
I chmodded the script to make it executable.
I ran it using OpenMPI. I tried using MPICH, but I encountered some problems when I did that (described below).

The setup I am using is:
Intel Core i7, 8GB RAM
Ubuntu 12.04 LTS
OpenMPI 1.4.3 (I tried OpenMPI 1.5, but libhdf5-openmpi 1.8.4 was incompatible with it)

Could you give me any suggestions?

BTW, nice wrapper!

Martin Fiers (mfiers-u) wrote :

Dear Bernardo,

It seems like meep is just running as 7 independent single-threaded processes. Is the text output on the terminal also replicated 7 times?

Also, does the output at the beginning of your script say something like this:
Python-meep starting...
Python-meep starting...
Python-meep starting...
Python-meep starting...
Using MPI version 2.1, 4 processes

To be sure, can you try to run your script as a normal Python file? (So undo the chmod'ing, and use python instead of ./.) There might be something going wrong with the shebang.
The call would then be:
mpirun -np 7 python

I'm not sure whether it will make a difference, but you can give it a shot.
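One way to check whether mpirun is really wiring the processes into a single MPI job, without involving meep at all, is to inspect the environment variables OpenMPI exports into each process. This is a speculative diagnostic sketch: the `OMPI_COMM_WORLD_RANK`/`OMPI_COMM_WORLD_SIZE` variables are OpenMPI-specific (MPICH uses different names), and the script filename is just a placeholder. If every process reports "rank 0 of 1", the processes are not part of one MPI world, and meep will simply run the same serial simulation 7 times.

```python
# Run as: mpirun -np 7 python check_mpi.py  (filename is an example)
# OpenMPI sets OMPI_COMM_WORLD_RANK and OMPI_COMM_WORLD_SIZE in the
# environment of every launched process; outside an MPI job they are
# absent, so we fall back to rank 0 of 1.
import os

def mpi_identity():
    rank = int(os.environ.get("OMPI_COMM_WORLD_RANK", 0))
    size = int(os.environ.get("OMPI_COMM_WORLD_SIZE", 1))
    return rank, size

if __name__ == "__main__":
    rank, size = mpi_identity()
    print("process %d of %d" % (rank, size))
```

Seeing "process 0 of 7" through "process 6 of 7" means the MPI launch itself is fine and the problem lies in how meep was built or imported.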

My PC architecture is exactly the same as yours (Ubuntu 12.04, Intel Core i7), and it gives a speedup for the bent_waveguide example. I have only tried this with OpenMPI. (I just did a fresh install of python-meep, but I had to change the meep_mpi library to meep_openmpi and make a symbolic link from /usr/include/meep-mpi/ to /usr/include/meep/.)

With kind regards,
Martin Fiers

Filip Dominec (fdominec) wrote :

It is also possible that the simulation used was so small that it could not take advantage of multiprocessing. You might try increasing the number of chunks when defining the volume.

My architecture is similar (Ubuntu 12.04, Intel Core i3), and I observe a slight improvement when using 2 processes with MPI. However, three or more processes are slower; the bottleneck is probably the speed of the RAM.
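The scaling behaviour both posters describe can be put in numbers: with a serial time T1 and a time TN on N processes, the speedup is S = T1/TN and the parallel efficiency is E = S/N. A minimal sketch, using the 116-second figures from the original report (the function names here are just for illustration):

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel run time (ideal: equals nprocs)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, nprocs):
    """Fraction of ideal speedup actually achieved (ideal: 1.0)."""
    return speedup(t_serial, t_parallel) / nprocs

# Figures from the original report: 116 s on 1 core and 116 s on 7 cores.
print(speedup(116.0, 116.0))        # 1.0 -> no speedup at all
print(efficiency(116.0, 116.0, 7))  # ~0.14 -> the extra cores are wasted
```

An efficiency near 1/N, as here, is the signature of N processes each redoing the full serial job rather than splitting it; an efficiency that degrades gradually as N grows is what a memory-bandwidth bottleneck looks like.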
