MPI / Interpretation of *.dia file / Options to reduce computational time

Julius Schlumberger, modified 4 Years ago.

Youngling | Posts: 15 | Join Date: 12/20/20
Dear all,

I have a grid with around 2.2 million cells, and I model a period of 5 days using 2D D-Flow. When running it from the GUI I reached 50% progress after around 18 hrs (with an estimate of 24 hrs to go). For comparison I also ran the model using MPI, with 12 partitions (as my machine has 12 logical processors it could use). The MPI run finished after 50 hrs in total.
I was a bit surprised that it still took so long with MPI. The handbook only vaguely mentions that the benefit of MPI depends on several factors (the problem being solved, characterised by the number of active grid points, the length of the simulation in time, and the time step being used). The scalability analysis of MPI in the technical handbook also suggests that, for my case, it should not make much difference whether I use 6, 8 or 12 partitions.
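For what it's worth, the diminishing returns between 6, 8 and 12 partitions that the handbook describes can be illustrated with Amdahl's law. This is only a back-of-the-envelope sketch; the 10% serial fraction below is a made-up number for illustration, not a measured property of D-Flow FM:

```python
def amdahl_speedup(n_parts, serial_frac):
    """Ideal speedup on n_parts partitions when a fraction
    serial_frac of the work cannot be parallelised (Amdahl's law)."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n_parts)

# Assuming (hypothetically) that 10% of the work is serial:
for p in (6, 8, 12):
    print(f"{p} partitions: {amdahl_speedup(p, 0.10):.1f}x ideal speedup")
```

With that assumed 10% serial fraction, 6, 8 and 12 partitions give roughly 4.0x, 4.7x and 5.7x ideal speedups, so doubling the partition count from 6 to 12 gains relatively little, which is consistent with the handbook's scalability point.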
So my questions are: Are there specific layers that should (not) be used when trying to reduce computational time? Is there a lower limit on the model's dt below which MPI is not useful? Are there any recommendations on how a model should be set up to get the best benefit from MPI?
I also looked into the *.dia file (see the screenshot of the relevant section). In the file I found a section about extra timers. Could somebody elaborate on what the values mean? I have comparatively high values for the timers 'setdt', 'setumod', and 'step_reduce' and was wondering if that might be the reason why my MPI run is not as efficient as I was hoping it to be.

Looking forward to your comments and feedback!
Best, Julius
Julius Schlumberger, modified 4 Years ago.

RE: MPI / Interpretation of *.dia file / Options to reduce computational time

Youngling | Posts: 15 | Join Date: 12/20/20
...just writing down my assumptions here, so that other people facing similar questions may find them helpful.
In the end, my MPI calculation was about 8% faster than the original run.

Assumption 1: I probably used too many partitions for my current set-up. When running with MPI, the total number of grid cells where computations happen increases (because ghost cells are added along the partition boundaries). So apparently the benefit of computing in parallel was partly offset by the additional time required for the computations in the ghost cells.
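Assumption 1 can be roughly quantified. For a 2D grid split into square-ish partitions with a one-cell-wide ghost (halo) layer, the extra cells per partition scale with the partition perimeter. This is only an idealised sketch; the real partition shapes produced by the partitioner will differ:

```python
import math

def ghost_overhead(n_cells, n_parts, halo=1):
    """Fraction of extra (ghost) cells per partition, assuming each
    partition is roughly square with a halo-cell-wide ghost ring."""
    cells_per_part = n_cells / n_parts
    side = math.sqrt(cells_per_part)   # partition edge length, in cells
    ghost_cells = 4 * side * halo      # one ring of cells around the partition
    return ghost_cells / cells_per_part

for p in (6, 8, 12):
    print(f"{p} partitions: ~{ghost_overhead(2.2e6, p):.1%} extra cells")
```

Under these admittedly idealised assumptions, the ghost-cell fraction for a 2.2-million-cell grid stays below 1% even at 12 partitions, so communication and synchronisation overhead between the partitions may well contribute to the modest speedup too.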
Assumption 2: The benefit of MPI is limited when running on a single computer. I heard that D3D-FM is set up to run with 6 or 8 cores in standard mode. So presumably the benefit of MPI is particularly noticeable when running on a server or compute cluster with many more cores?

My conclusion from this is that using fewer partitions seems more beneficial for a set-up like mine.