Commit 780ce730 authored by Simon Schwitanski

Add older logs

parent f469e59c
Showing 767 additions and 0 deletions
300.00947189331055 seconds
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_ireduce-mpi_iscatter-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/COLL/CallOrdering-unmatched-mpi_ireduce-mpi_iscatter-001.c -o /tmp/tmp0zbax10u/CallOrdering-unmatched-mpi_ireduce-mpi_iscatter-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 2
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 6
Global filter total : 4
Global call filtered % : 0.00
Global filtered % : 66.67
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 2
Free : 2
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 8.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 4.00
> Global Memory
Global : 6
Global filter total : 6
Global call filtered % : 33.33
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmp0zbax10u)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_ireduce-mpi_iscatter-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[MUST-REPORT] Error: from: call MPI_Iscatter@1: A collective mismatch occurred (The application executes two different collective calls on the same communicator)! The collective operation that does not match this operation was executed at reference 1. (Information on communicator: MPI_COMM_WORLD)
[MUST-REPORT] Note that collective matching was disabled as a result; collectives won't be analysed for their correctness or blocking state anymore. You should solve this issue and rerun your application with MUST.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmp0zbax10u/CallOrdering-unmatched-mpi_ireduce-mpi_iscatter-001:
[MUST-REPORT] References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Ireduce@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmp0zbax10u/CallOrdering-unmatched-mpi_ireduce-mpi_iscatter-001:
[MUST-REPORT]
[MUST-REPORT] Information: from: call MPI_Iscatter@1: No buffer allocated at given address.
Command killed by signal 15, elapsed time: 300.21438694000244
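For context, MBI's CallOrdering-unmatched tests pair ranks that issue different MPI operations. Below is a minimal C sketch of the pattern MUST flags above; it illustrates the error class, not the actual generated source, and the buffer sizes, count, and root rank are assumptions:

/* Illustrative sketch (not the actual MBI source): rank 0 starts
 * MPI_Ireduce while rank 1 starts MPI_Iscatter on MPI_COMM_WORLD,
 * which is the collective mismatch MUST reports. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *sendbuf = malloc(10 * sizeof(int)); /* sizes assumed */
    int *recvbuf = malloc(10 * sizeof(int));
    MPI_Request req;

    if (rank == 0)
        MPI_Ireduce(sendbuf, recvbuf, 10, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD, &req);
    else /* a different collective on the same communicator: mismatch */
        MPI_Iscatter(sendbuf, 10, MPI_INT, recvbuf, 10, MPI_INT, 0, MPI_COMM_WORLD, &req);

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

As the report notes, MUST disables collective matching after the first mismatch, so later collectives in such a run are no longer checked.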
8.703658103942871
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_irsend-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/P2P/CallOrdering-unmatched-mpi_irsend-001.c -o /tmp/tmpl5695j63/CallOrdering-unmatched-mpi_irsend-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 1
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 5
Global filter total : 3
Global call filtered % : 0.00
Global filtered % : 60.00
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 1
Free : 1
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 7.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 3.00
> Global Memory
Global : 5
Global filter total : 5
Global call filtered % : 40.00
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmpl5695j63)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_irsend-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[MUST-REPORT] Information: from: call MPI_Irsend@1: No buffer allocated at given address.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpl5695j63/CallOrdering-unmatched-mpi_irsend-001:
[MUST-REPORT]
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock, writing output.===============================
[MUST-REPORT] Error global: The application issued a set of MPI calls that can cause a deadlock! A graphical representation of this situation is available in the file named "/tmp/tmpl5695j63/MUST_Output-files/MUST_Deadlock.dot". Use the dot tool of the graphviz package to visualize it, e.g. issue "dot -Tps /tmp/tmpl5695j63/MUST_Output-files/MUST_Deadlock.dot -o deadlock.ps". The graph shows the nodes that form the root cause of the deadlock; any other active MPI calls have been removed. A legend is available in the dot format in the file named "/tmp/tmpl5695j63/MUST_Output-files/MUST_DeadlockLegend.dot"; further information on these graphs is available in the MUST manual. References 1-2 list the involved calls (limited to the first 5 calls; further calls may be involved). The application still runs; if the deadlock manifested (e.g. caused a hang on this MPI implementation) you can attach to the involved ranks with a debugger or abort the application (if necessary). References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Finalize@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmpl5695j63/CallOrdering-unmatched-mpi_irsend-001:
[MUST-REPORT] Reference 2: call MPI_Wait@rank 1, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmpl5695j63/CallOrdering-unmatched-mpi_irsend-001:
[MUST-REPORT]
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock; detailed information is available in the MUST output file. You should either investigate details with a debugger or abort; the operation of MUST will stop from now on.
[MUST-RUNTIME] ===============================
[MUST-RUNTIME] ----Deadlock detection timing ----
[MUST-RUNTIME] syncTime=839
[MUST-RUNTIME] wfgGatherTme=2456
[MUST-RUNTIME] preparationTime=764
[MUST-RUNTIME] wfgCheckTime=1106
[MUST-RUNTIME] outputTime=4965
[MUST-RUNTIME] dotTime=8
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 666.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
rank 0 (of 2), pid 70256 caught signal nr 15
rank 1 (of 2), pid 70259 caught signal nr 15
rank 0 (of 1), pid 70262 caught signal nr 15
[2ed4abe6885c:70249] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[2ed4abe6885c:70249] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Command killed by signal 15, elapsed time: 8.703658103942871
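The deadlock references (MPI_Finalize on rank 0, MPI_Wait on rank 1) fit the usual shape of this test: one rank posts a ready-mode send that no receive ever matches, then waits on it. A hypothetical reconstruction follows; message size, tag, and destination rank are assumptions:

/* Illustrative sketch: rank 1 issues an MPI_Irsend that is never received.
 * Ready-mode sends also require the matching receive to be posted already,
 * so the call is erroneous even before the missing match. */
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, buf = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        MPI_Request req;
        MPI_Irsend(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req); /* unmatched */
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* may block forever on rank 1 ... */
    }
    MPI_Finalize(); /* ... while rank 0 is already here */
    return 0;
}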
9.196121454238892
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_iscan-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/COLL/CallOrdering-unmatched-mpi_iscan-001.c -o /tmp/tmp61z3p7p0/CallOrdering-unmatched-mpi_iscan-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 2
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 6
Global filter total : 4
Global call filtered % : 0.00
Global filtered % : 66.67
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 2
Free : 2
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 8.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 4.00
> Global Memory
Global : 6
Global filter total : 6
Global call filtered % : 33.33
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmp61z3p7p0)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_iscan-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[MUST-REPORT] Error: from: call MPI_Finalize@1: A collective mismatch occurred (The application executes two different collective calls on the same communicator)! The collective operation that does not match this operation was executed at reference 1. (Information on communicator: MPI_COMM_WORLD)
[MUST-REPORT] Note that collective matching was disabled as a result; collectives won't be analysed for their correctness or blocking state anymore. You should solve this issue and rerun your application with MUST.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmp61z3p7p0/CallOrdering-unmatched-mpi_iscan-001:
[MUST-REPORT] References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Iscan@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmp61z3p7p0/CallOrdering-unmatched-mpi_iscan-001:
[MUST-REPORT]
[MUST-REPORT] Information: from: call MPI_Iscan@0: No buffer allocated at given address.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmp61z3p7p0/CallOrdering-unmatched-mpi_iscan-001:
[MUST-REPORT]
[MUST-REPORT] Warning: from: call MPI_Wait@1: Argument 1 (request) is MPI_REQUEST_NULL, was this intended?
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmp61z3p7p0/CallOrdering-unmatched-mpi_iscan-001:
[MUST-REPORT]
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock, writing output.===============================
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 666.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock; detailed information is available in the MUST output file. You should either investigate details with a debugger or abort; the operation of MUST will stop from now on.
[MUST-RUNTIME] ===============================
[MUST-RUNTIME] ----Deadlock detection timing ----
[MUST-RUNTIME] syncTime=5631
[MUST-RUNTIME] wfgGatherTme=2919
[MUST-RUNTIME] preparationTime=1445
[MUST-RUNTIME] wfgCheckTime=1207
[MUST-RUNTIME] outputTime=4932
[MUST-RUNTIME] dotTime=8
rank 0 (of 2), pid 27533 caught signal nr 15
rank 1 (of 2), pid 27536 caught signal nr 15
rank 0 (of 1), pid 27539 caught signal nr 15
[MUST-REPORT] Error global: The application issued a set of MPI calls that can cause a deadlock! A graphical representation of this situation is available in the file named "/tmp/tmp61z3p7p0/MUST_Output-files/MUST_Deadlock.dot". Use the dot tool of the graphviz package to visualize it, e.g. issue "dot -Tps /tmp/tmp61z3p7p0/MUST_Output-files/MUST_Deadlock.dot -o deadlock.ps". The graph shows the nodes that form the root cause of the deadlock; any other active MPI calls have been removed. A legend is available in the dot format in the file named "/tmp/tmp61z3p7p0/MUST_Output-files/MUST_DeadlockLegend.dot"; further information on these graphs is available in the MUST manual. References 1-2 list the involved calls (limited to the first 5 calls; further calls may be involved). The application still runs; if the deadlock manifested (e.g. caused a hang on this MPI implementation) you can attach to the involved ranks with a debugger or abort the application (if necessary). References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Wait@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmp61z3p7p0/CallOrdering-unmatched-mpi_iscan-001:
[MUST-REPORT] Reference 2: call MPI_Finalize@rank 1, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmp61z3p7p0/CallOrdering-unmatched-mpi_iscan-001:
[MUST-REPORT]
[2ed4abe6885c:27526] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[2ed4abe6885c:27526] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Command killed by signal 15, elapsed time: 9.196121454238892
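Putting the three reports together (MPI_Iscan entered only on rank 0, a wait on MPI_REQUEST_NULL on rank 1, and a Wait/Finalize deadlock), the test presumably looks roughly like the following sketch; names and counts are invented:

/* Illustrative sketch: only rank 0 enters MPI_Iscan, so its MPI_Wait can
 * never complete; rank 1 waits on MPI_REQUEST_NULL (a no-op that MUST
 * warns about) and proceeds straight to MPI_Finalize. */
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, in = 1, out = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Request req = MPI_REQUEST_NULL;
    if (rank == 0)
        MPI_Iscan(&in, &out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

    MPI_Wait(&req, MPI_STATUS_IGNORE); /* rank 0 blocks; rank 1: null request */
    MPI_Finalize();
    return 0;
}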
8.843403816223145
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_iscatter-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/COLL/CallOrdering-unmatched-mpi_iscatter-001.c -o /tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 2
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 5
Global filter total : 3
Global call filtered % : 0.00
Global filtered % : 60.00
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 2
Free : 2
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 8.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 4.00
> Global Memory
Global : 5
Global filter total : 5
Global call filtered % : 40.00
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmpheox2o8p)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_iscatter-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[MUST-REPORT] Error: from: call MPI_Finalize@1: A collective mismatch occurred (The application executes two different collective calls on the same communicator)! The collective operation that does not match this operation was executed at reference 1. (Information on communicator: MPI_COMM_WORLD)
[MUST-REPORT] Note that collective matching was disabled as a result; collectives won't be analysed for their correctness or blocking state anymore. You should solve this issue and rerun your application with MUST.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001:
[MUST-REPORT] References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Iscatter@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001:
[MUST-REPORT]
[MUST-REPORT] Information: from: call MPI_Iscatter@0: No buffer allocated at given address.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001:
[MUST-REPORT]
[MUST-REPORT] Warning: from: call MPI_Wait@1: Argument 1 (request) is MPI_REQUEST_NULL, was this intended?
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001:
[MUST-REPORT]
[MUST-REPORT] Error: from: call MPI_Iscatter@0: Buffer too small: Transfer of type [20x"MPI_INT"] with byte count of 80 is longer than buffer argument of type [10x"int32"] with byte count of 40.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001:
[MUST-REPORT]
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock, writing output.===============================
[MUST-REPORT] Error global: The application issued a set of MPI calls that can cause a deadlock! A graphical representation of this situation is available in the file named "/tmp/tmpheox2o8p/MUST_Output-files/MUST_Deadlock.dot". Use the dot tool of the graphviz package to visualize it, e.g. issue "dot -Tps /tmp/tmpheox2o8p/MUST_Output-files/MUST_Deadlock.dot -o deadlock.ps". The graph shows the nodes that form the root cause of the deadlock; any other active MPI calls have been removed. A legend is available in the dot format in the file named "/tmp/tmpheox2o8p/MUST_Output-files/MUST_DeadlockLegend.dot"; further information on these graphs is available in the MUST manual. References 1-2 list the involved calls (limited to the first 5 calls; further calls may be involved). The application still runs; if the deadlock manifested (e.g. caused a hang on this MPI implementation) you can attach to the involved ranks with a debugger or abort the application (if necessary). References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Wait@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001:
[MUST-REPORT] Reference 2: call MPI_Finalize@rank 1, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmpheox2o8p/CallOrdering-unmatched-mpi_iscatter-001:
[MUST-REPORT]
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock; detailed information is available in the MUST output file. You should either investigate details with a debugger or abort; the operation of MUST will stop from now on.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 666.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[MUST-RUNTIME] ===============================
[MUST-RUNTIME] ----Deadlock detection timing ----
[MUST-RUNTIME] syncTime=5452
[MUST-RUNTIME] wfgGatherTme=2547
[MUST-RUNTIME] preparationTime=1384
[MUST-RUNTIME] wfgCheckTime=1178
[MUST-RUNTIME] outputTime=7344
[MUST-RUNTIME] dotTime=9
rank 0 (of 1), pid 28213 caught signal nr 15
rank 0 (of 2), pid 28207 caught signal nr 15
rank 1 (of 2), pid 28210 caught signal nr 15
[2ed4abe6885c:28200] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[2ed4abe6885c:28200] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Command killed by signal 15, elapsed time: 8.843403816223145
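The "Buffer too small" report is the TypeART-backed type check at work: with two ranks and a send count of 10, the root of MPI_Iscatter must provide 20 ints (80 bytes), but the allocation tracked at that address holds only 10 int32 values (40 bytes). A plausible minimal reproducer follows, with allocation sizes inferred from the byte counts in the report:

/* Illustrative sketch: the scatter root sends 10 ints to each of 2 ranks,
 * 20 ints in total, from a buffer that holds only 10; rank 1 skips the
 * collective entirely, producing the mismatch and the deadlock as well. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *sendbuf = malloc(10 * sizeof(int)); /* 40 bytes, 80 needed at the root */
    int *recvbuf = malloc(10 * sizeof(int));
    MPI_Request req = MPI_REQUEST_NULL;

    if (rank == 0)
        MPI_Iscatter(sendbuf, 10, MPI_INT, recvbuf, 10, MPI_INT, 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE); /* rank 1 waits on a null request */

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}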
300.21243834495544
\ No newline at end of file
300.0089793205261 seconds
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_iscatter-mpi_igather-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/COLL/CallOrdering-unmatched-mpi_iscatter-mpi_igather-001.c -o /tmp/tmpjlyo8sm9/CallOrdering-unmatched-mpi_iscatter-mpi_igather-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 2
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 5
Global filter total : 3
Global call filtered % : 0.00
Global filtered % : 60.00
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 2
Free : 2
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 8.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 4.00
> Global Memory
Global : 5
Global filter total : 5
Global call filtered % : 40.00
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmpjlyo8sm9)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_iscatter-mpi_igather-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[MUST-REPORT] Error: from: call MPI_Igather@1: A collective mismatch occurred (The application executes two different collective calls on the same communicator)! The collective operation that does not match this operation was executed at reference 1. (Information on communicator: MPI_COMM_WORLD)
Command killed by signal 15, elapsed time: 300.21243834495544
300.21005153656006
\ No newline at end of file
300.00533652305603 seconds
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/COLL/CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001.c -o /tmp/tmpkjh_fhhq/CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 2
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 6
Global filter total : 4
Global call filtered % : 0.00
Global filtered % : 66.67
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 2
Free : 2
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 8.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 4.00
> Global Memory
Global : 6
Global filter total : 6
Global call filtered % : 33.33
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmpkjh_fhhq)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[MUST-REPORT] Error: from: call MPI_Iscan@1: A collective mismatch occurred (The application executes two different collective calls on the same communicator)! The collective operation that does not match this operation was executed at reference 1. (Information on communicator: MPI_COMM_WORLD)
[MUST-REPORT] Note that collective matching was disabled as a result; collectives won't be analysed for their correctness or blocking state anymore. You should solve this issue and rerun your application with MUST.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpkjh_fhhq/CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001:
[MUST-REPORT] References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Iscatter@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmpkjh_fhhq/CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001:
[MUST-REPORT]
[MUST-REPORT] Information: from: call MPI_Iscan@1: No buffer allocated at given address.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpkjh_fhhq/CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001:
[MUST-REPORT]
[MUST-REPORT] Information: from: call MPI_Iscatter@0: No buffer allocated at given address.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmpkjh_fhhq/CallOrdering-unmatched-mpi_iscatter-mpi_iscan-001:
[MUST-REPORT]
[MUST-REPORT] Error: from: call MPI_Iscatter@0: Buffer too small: Transfer of type [20x"MPI_INT"] with byte count of 80 is longer than buffer argument of type [10x"int32"] with byte count of 40.
Command killed by signal 15, elapsed time: 300.21005153656006
8.66148853302002
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_isend-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/P2P/CallOrdering-unmatched-mpi_isend-001.c -o /tmp/tmp4knj31_2/CallOrdering-unmatched-mpi_isend-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 1
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 5
Global filter total : 3
Global call filtered % : 0.00
Global filtered % : 60.00
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 1
Free : 1
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 7.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 3.00
> Global Memory
Global : 5
Global filter total : 5
Global call filtered % : 40.00
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmp4knj31_2)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_isend-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[MUST-REPORT] Information: from: call MPI_Isend@1: No buffer allocated at given address.
[MUST-REPORT] Representative location:
[MUST-REPORT] #0 main@/tmp/tmp4knj31_2/CallOrdering-unmatched-mpi_isend-001:
[MUST-REPORT]
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock, writing output.===============================
[MUST-REPORT] Error global: The application issued a set of MPI calls that can cause a deadlock! A graphical representation of this situation is available in the file named "/tmp/tmp4knj31_2/MUST_Output-files/MUST_Deadlock.dot". Use the dot tool of the graphviz package to visualize it, e.g. issue "dot -Tps /tmp/tmp4knj31_2/MUST_Output-files/MUST_Deadlock.dot -o deadlock.ps". The graph shows the nodes that form the root cause of the deadlock; any other active MPI calls have been removed. A legend is available in the dot format in the file named "/tmp/tmp4knj31_2/MUST_Output-files/MUST_DeadlockLegend.dot"; further information on these graphs is available in the MUST manual. References 1-2 list the involved calls (limited to the first 5 calls; further calls may be involved). The application still runs; if the deadlock manifested (e.g. caused a hang on this MPI implementation) you can attach to the involved ranks with a debugger or abort the application (if necessary). References of a representative process:
[MUST-REPORT] Reference 1: call MPI_Finalize@rank 0, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmp4knj31_2/CallOrdering-unmatched-mpi_isend-001:
[MUST-REPORT] Reference 2: call MPI_Wait@rank 1, threadid 0;
[MUST-REPORT] Stacktrace:
[MUST-REPORT] #0 main@/tmp/tmp4knj31_2/CallOrdering-unmatched-mpi_isend-001:
[MUST-REPORT]
[MUST-RUNTIME] ============MUST===============
[MUST-RUNTIME] ERROR: MUST detected a deadlock; detailed information is available in the MUST output file. You should either investigate details with a debugger or abort; the operation of MUST will stop from now on.
[MUST-RUNTIME] ===============================
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 666.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[MUST-RUNTIME] ----Deadlock detection timing ----
[MUST-RUNTIME] syncTime=705
[MUST-RUNTIME] wfgGatherTme=2488
[MUST-RUNTIME] preparationTime=744
[MUST-RUNTIME] wfgCheckTime=1076
[MUST-RUNTIME] outputTime=5651
[MUST-RUNTIME] dotTime=8
rank 0 (of 2), pid 70930 caught signal nr 15
rank 1 (of 2), pid 70933 caught signal nr 15
rank 0 (of 1), pid 70936 caught signal nr 15
[2ed4abe6885c:70923] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[2ed4abe6885c:70923] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Command killed by signal 15, elapsed time: 8.66148853302002
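Same family as the MPI_Irsend case above, now with a standard-mode send. A hypothetical sketch; count, tag, and partner rank are assumptions:

/* Illustrative sketch: rank 1 starts an MPI_Isend that is never received.
 * A small standard-mode send may complete locally via eager buffering, but
 * the message stays unmatched, which MUST's wait-for graph pairs with
 * rank 0's MPI_Finalize. */
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, buf = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        MPI_Request req;
        MPI_Isend(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req); /* no matching recv */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}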
9.80198884010315
\ No newline at end of file
Compiling CallOrdering-unmatched-mpi_issend-001.c (batchinfo:1/1)
$ typeart-mpicc -fPIC /MBI/scripts/gencodes/P2P/CallOrdering-unmatched-mpi_issend-001.c -o /tmp/tmpya0bww6o/CallOrdering-unmatched-mpi_issend-001 -L/MBI-builds/MUST192/lib -lpnmpi
clang: warning: -lpnmpi: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: argument unused during compilation: '-L/MBI-builds/MUST192/lib' [-Wunused-command-line-argument]
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 1
Heap call filtered % : 0.00
> Stack Memory
Alloca : 0.00
Stack call filtered % : 0.00
Alloca of pointer discarded : 0.00
> Global Memory
Global : 5
Global filter total : 3
Global call filtered % : 0.00
Global filtered % : 60.00
--------------------------------------
------------------
TypeArtPass [Heap]
Malloc : 1
Free : 1
Alloca : 0
Global : 0
------------------
--------------------------------------
MemInstFinderPass
Filter string : *MPI_*
> Heap Memory
Heap alloc : 0
Heap call filtered % : 0.00
> Stack Memory
Alloca : 7.00
Stack call filtered % : 100.00
Alloca of pointer discarded : 3.00
> Global Memory
Global : 5
Global filter total : 5
Global call filtered % : 40.00
Global filtered % : 100.00
--------------------------------------
-------------------
TypeArtPass [Stack]
Malloc : 0
Free : 0
Alloca : 0
Global : 0
-------------------
Executing the command (cwd: /tmp/tmpya0bww6o)
$ mustrun --must:distributed --must:output stdout --must:typeart -np 2 ./CallOrdering-unmatched-mpi_issend-001
[MUST] MUST configuration ... distributed checks without application crash handling
[MUST] Using prebuilt infrastructure at /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d
[MUST] Weaver ... success
[MUST] Generating P^nMPI configuration ... success
[MUST] Infrastructure in "/root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d" is present and used.
[MUST] Search for linked P^nMPI ... found ... success
[MUST] Executing application:
[2ed4abe6885c:71613] *** Process received signal ***
[2ed4abe6885c:71613] Signal: Segmentation fault (11)
[2ed4abe6885c:71613] Signal code: Address not mapped (1)
[2ed4abe6885c:71613] Failing at address: 0x18
[2ed4abe6885c:71613] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x4001be7050]
[2ed4abe6885c:71613] [ 1] /MBI-builds/MUST192/modules//libdWaitStateWfgMgr.so(_ZN4must16DWaitStateWfgMgr15waitForInfoCollEimmiiyii+0x36)[0x40046d8ce6]
[2ed4abe6885c:71613] [ 2] /root/.cache/must/prebuilds/a97718021a440d50cb5d0ae17ffcf83d/modules//libweaver-receival-gen-output-2.so(_ZN3gti43GenReceivalweaver_receival_gen_output_2_cpp13ReceiveRecordEPvmS1_PFNS_10GTI_RETURNES1_mS1_EPmPNS_11I_ChannelIdEPbPNSt7__cxx114listIS7_SaIS7_EEE+0x4b42)[0x400462e872]
[2ed4abe6885c:71613] [ 3] /MBI-builds/MUST192/modules//libthreadedMpiPlace.so(_ZN3gti16ThreadedMPIPlace3runEv+0x2da)[0x400426d0aa]
[2ed4abe6885c:71613] [ 4] /MBI-builds/MUST192/modules//libthreadedMpiPlace.so(_Z18handlePlaceStartupv+0xbe)[0x400426eaae]
[2ed4abe6885c:71613] [ 5] /MBI-builds/MUST192/modules//libthreadedMpiPlace.so(MPI_Init+0x21)[0x400426b721]
[2ed4abe6885c:71613] [ 6] /MBI-builds/MUST192/lib/libpnmpi.so.1(NQJ_Init+0x182)[0x40019b64b2]
[2ed4abe6885c:71613] [ 7] /MBI-builds/MUST192/lib/libpnmpi.so.1(XMPI_Init_NewStack+0x3d)[0x4001a1facd]
[2ed4abe6885c:71613] [ 8] /MBI-builds/MUST192/modules//libcProtMpiSplitComm.so(MPI_Init+0x1221)[0x40039b8fa1]
[2ed4abe6885c:71613] [ 9] /MBI-builds/MUST192/lib/libpnmpi.so.1(NQJ_Init+0x182)[0x40019b64b2]
[2ed4abe6885c:71613] [10] /MBI-builds/MUST192/lib/libpnmpi.so.1(XMPI_Init+0x2c)[0x4001a1795c]
[2ed4abe6885c:71613] [11] /MBI-builds/MUST192/lib/libpnmpi.so.1(NQJ_Init+0x182)[0x40019b64b2]
[2ed4abe6885c:71613] [12] /MBI-builds/MUST192/lib/libpnmpi.so.1(MPI_Init+0xe1)[0x40019b6261]
[2ed4abe6885c:71613] [13] ./CallOrdering-unmatched-mpi_issend-001(+0x1221)[0x4000001221]
[2ed4abe6885c:71613] [14] /lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x4001bd224a]
[2ed4abe6885c:71613] [15] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x4001bd2305]
[2ed4abe6885c:71613] [16] ./CallOrdering-unmatched-mpi_issend-001(+0x1111)[0x4000001111]
[2ed4abe6885c:71613] *** End of error message ***
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 143.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
rank 0 (of 2), pid 71604 caught signal nr 15
rank 1 (of 2), pid 71607 caught signal nr 15
rank 0 (of 1), pid 71610 caught signal nr 15
--------------------------------------------------------------------------
mpiexec noticed that process rank 3 with PID 0 on node 2ed4abe6885c exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
[2ed4abe6885c:71597] 2 more processes have sent help message help-mpi-api.txt / mpi-abort
[2ed4abe6885c:71597] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[MUST] Execution finished.
Command return code: 0, elapsed time: 9.80198884010315
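Note that in this run the segfault occurs inside a MUST tool process (libdWaitStateWfgMgr.so, reached via the P^nMPI stack), not in the test code; the test itself is the same unmatched-send pattern with a synchronous send, which cannot complete without a matching receive. A hypothetical sketch of the application side:

/* Illustrative sketch: a synchronous send completes only once the matching
 * receive has started, so rank 1's MPI_Wait blocks for good while rank 0
 * sits in MPI_Finalize. */
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, buf = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        MPI_Request req;
        MPI_Issend(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req); /* unmatched */
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* can never complete */
    }
    MPI_Finalize();
    return 0;
}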
1009.779890537262
\ No newline at end of file
1009.5733513832092 seconds
\ No newline at end of file