RE: Running in parallel FLOW (FLOW2D3D Version 6.02. - 01_standard example) - Forum - Delft3D
Running in parallel FLOW (FLOW2D3D Version 6.02. - 01_standard example)
Youngling Posts: 3 Join Date: 3/18/15
Hi everybody,
I just compiled Delft3D (revision 8799) and everything looks good. I could run the first example (01_standard) in parallel using MPICH2-3.2.1, but I ran into an issue when I tried to use more than 5 cores: the example runs fine with 1, 2, 3, 4 or 5 cores, but fails as soon as I use 6 or more.
I got the following error:
Part IV - Reading complete MD-file...
Flow exited abnormally
Check diagnosis file
(the two lines above are repeated once per process)
terminate called after throwing an instance of 'char const*'
(repeated once per process)
Is there any way to run this example with more cores? Which file do I have to look at to increase the number of cores?
Thanks in advance!
RE: Running in parallel FLOW (FLOW2D3D Version 6.02. - 01_standard example)
Padawan Posts: 97 Join Date: 1/3/11
Hi Juan,
Example "01_standard" is too small to run with more than 5 partitions. When "max(mmax,nmax)/npart <= 4", the computation will not run.
Regards,
Adri
RE: Running in parallel FLOW (FLOW2D3D Version 6.02. - 01_standard example)
Youngling Posts: 3 Join Date: 3/18/15
Hi Adri, thanks for your reply. Where can I check the "max(mmax,nmax)/npart <= 4" condition? In which file?
Regards
RE: Running in parallel FLOW (FLOW2D3D Version 6.02. - 01_standard example)
Padawan Posts: 97 Join Date: 1/3/11
Hi Juan,
Flow does that for you and echoes an error message in the "tri-diag"-file.
If you want to check it manually: mmax and nmax are the dimensions of your grid (mdf-file, line "MNKmax", for example "MNKmax= 15 22 5"), and npart is the number of partitions you have chosen (for example 5 or 6). In this example, "max(15,22)/5 = 4.4" is bigger than 4, so it will run fine, while "max(15,22)/6 = 3.666" is smaller than 4 and will stop with an error message.
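The rule above is easy to check yourself. Here is a minimal Python sketch of the same check (the function names are illustrative, not the actual Delft3D-FLOW routines):

```python
def partition_ok(mmax: int, nmax: int, npart: int) -> bool:
    """True if the grid is large enough for npart partitions:
    the run is rejected when max(mmax, nmax) / npart <= 4."""
    return max(mmax, nmax) / npart > 4

def max_partitions(mmax: int, nmax: int) -> int:
    """Largest npart that still satisfies max(mmax, nmax) / npart > 4."""
    return (max(mmax, nmax) - 1) // 4

# The 01_standard example: MNKmax = 15 22 5
print(partition_ok(15, 22, 5))   # True:  22/5 = 4.4  -> runs
print(partition_ok(15, 22, 6))   # False: 22/6 = 3.67 -> stops with an error
print(max_partitions(15, 22))    # 5
```

So for this grid, 5 is the largest partition count the check accepts.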
Regards,
Adri
RE: Running in parallel FLOW (FLOW2D3D Version 6.02. - 01_standard example)
Youngling Posts: 3 Join Date: 3/18/15
Hi Adri! Thanks again for your explanation!
In my case the MNKmax variable is 583 335 14. Applying your rule, I get 583/14 = 41.642, which means I should be able to use 14 cores for this simulation.
Unfortunately, the only way I could run this simulation was with just 1 core; whenever I used more cores it failed, and I got the following errors in the tri-diag files:
*** WARNING Ratio in layer thickness of the two bottom layers is larger than 1.5. May lead to inaccurate representation of the bottom boundary layer.
*** MESSAGE Upwind advection scheme only near momentum discharges
*** WARNING Discharge (m,n,k)=(42,311,0) is disabled: inlet and/or outfall not in this partition
*** WARNING Discharge (m,n,k)=(70,252,0) is disabled: inlet and/or outfall not in this partition
*** WARNING Discharge (m,n,k)=(79,243,0) is disabled: inlet and/or outfall not in this partition
*** WARNING Discharge (m,n,k)=(231,163,0) is disabled: inlet and/or outfall not in this partition
*** WARNING Discharge (m,n,k)=(254,248,0) is disabled: inlet and/or outfall not in this partition
*** WARNING Discharge (m,n,k)=(317,64,0) is disabled: inlet and/or outfall not in this partition
*** WARNING Discharge (m,n,k)=(481,180,0) is disabled: inlet and/or outfall not in this partition
*** WARNING Using Smoothing time and initial condition file
*** MESSAGE Ocean heat model: Stanton number specified to be 0.10000E-02
*** MESSAGE Ocean heat model: Dalton number specified to be 0.10000E-02
*** MESSAGE Using UNESCO density formulation by default
*** MESSAGE Evaporation not taken into account in continuity equation
*** MESSAGE Number of pivot points to convert wind speed to wind drag coef.: 2
*** MESSAGE Updating tidal node factors every 0.6000E+02 [min]
*** MESSAGE Momentum solver cyclic method is specified
*** MESSAGE DRYFLP and DPSOPT both specified in MD-file. Using DPSOPT: MEAN
*** MESSAGE Transport solver cyclic-method method is specified
remap n= 19 mnit1 = 1 334 mint2 = 14 334 - node number 005
remap n= 20 mnit1 = 14 334 mint2 = 29 334 - node number 005
*** MESSAGE His, map, drogue, and fourier files written in single precision (except for time and horizontal coordinates).
*** MESSAGE Uniform wind and pressure specified
*** MESSAGE Heat model: Secchi depth specified to be 30.0000000 m
*** ERROR Drogues/walking monitor points are not available in parallel computations
*** ERROR Flow exited abnormally
I have attached the mdf file and one of the tri-diag files to better illustrate the case.
Regards,
Juan.
RE: Running in parallel FLOW (FLOW2D3D Version 6.02. - 01_standard example)
Padawan Posts: 97 Join Date: 1/3/11
Hi Juan,
Unfortunately, the combination of drogues (line "Filpar= #117.par#" in your mdf-file) with parallel computation is not yet implemented. If you remove the drogues, I expect the parallel computation to run.
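For a model with many input lines, the drogues entry can be stripped out programmatically. The sketch below is a minimal, hypothetical helper: it assumes the mdf-file is plain text with one "Keyword= value" entry per line (if your Filpar value spans multiple lines, remove those continuation lines as well), and the filename in the usage comment is illustrative:

```python
def strip_drogues(mdf_text: str) -> str:
    """Drop any line whose keyword is Filpar (the drogues input file),
    so a parallel run no longer tries to track drogues."""
    kept = []
    for line in mdf_text.splitlines():
        if line.strip().lower().startswith("filpar"):
            continue  # skip the drogues definition
        kept.append(line)
    return "\n".join(kept) + "\n"

# Usage (hypothetical file name):
# with open("model.mdf") as f:
#     text = f.read()
# with open("model.mdf", "w") as f:
#     f.write(strip_drogues(text))
```

Keep a backup of the original mdf-file so the drogues can be re-enabled for serial runs.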
Regards,
Adri