----------------------------------------------------------------------
KNOWN ISSUES
----------------------------------------------------------------------

### CH4

* Dynamic process support
    - Requires the fi_tagged capability in the OFI netmod.
    - Not supported in the UCX netmod at this time.

* Test failures
    - Not all tests in test/mpi/errors/coll pass.
    - Which tests fail depends on the experimental setup and on the
      network support built in.

### OS/X

* C++ bindings - Exception handling in the C++ bindings is broken
  because of a bug in libc++ (see
  https://github.com/android-ndk/ndk/issues/289). You can work around
  this issue by explicitly linking your application to libstdc++
  instead (e.g., mpicxx -o foo foo.cxx -lstdc++); a sketch of the
  workaround follows.
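
  A minimal sketch of the failure mode, assuming the C++ bindings are
  enabled in your MPICH build; the file name and the deliberately
  invalid destination rank are only illustrative:

    /* throws.cxx - hedged sketch, not part of MPICH itself. */
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char **argv)
    {
        MPI::Init(argc, argv);

        /* Ask the C++ bindings to throw MPI::Exception on error
           instead of aborting. */
        MPI::COMM_WORLD.Set_errhandler(MPI::ERRORS_THROW_EXCEPTIONS);

        try {
            int buf = 42;
            /* Destination rank == size is invalid, forcing an error. */
            MPI::COMM_WORLD.Send(&buf, 1, MPI::INT,
                                 MPI::COMM_WORLD.Get_size(), 0);
        } catch (MPI::Exception &e) {
            /* On an affected OS/X system this handler is never reached
               unless the program was linked with -lstdc++. */
            std::cerr << "caught: " << e.Get_error_string() << std::endl;
        }

        MPI::Finalize();
        return 0;
    }

  Built as "mpicxx -o throws throws.cxx", the exception escapes on an
  affected system; appending -lstdc++ restores the expected behavior
  and the catch block runs.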

### Fine-grained thread safety

* Multiple VCIs (--enable-thread-cs=per-vci) are only supported in
  ch4; a usage sketch follows this list.
* ch3:sock does not (and will not) support fine-grained threading.
* MPI-IO APIs are not currently thread-safe when using fine-grained
  threading (--enable-thread-cs=per-object).
* ch3:nemesis:tcp fine-grained threading is still experimental and may
  have correctness or performance issues. Known correctness issues
  include dynamic process support and generalized request support.
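
  A minimal sketch, not from the MPICH sources, of the multithreaded
  usage these build options target; the thread count and tag values
  are arbitrary, and the self-sendrecv is only there to exercise
  concurrent MPI calls:

    /* threads.cxx - hedged sketch: each thread calls into MPI
     * concurrently, which requires MPI_THREAD_MULTIPLE. With a
     * per-vci build (ch4 only) these calls contend less on locks. */
    #include <mpi.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main(int argc, char **argv)
    {
        int provided = MPI_THREAD_SINGLE;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            std::fprintf(stderr, "MPI_THREAD_MULTIPLE unavailable\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each thread sends a message to its own rank; a self-sendrecv
           cannot deadlock, since the matching receive is posted in the
           same call, and the per-thread tag keeps matching distinct. */
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; t++) {
            threads.emplace_back([rank, t]() {
                int sendbuf = t, recvbuf = -1;
                MPI_Sendrecv(&sendbuf, 1, MPI_INT, rank, /* tag */ t,
                             &recvbuf, 1, MPI_INT, rank, /* tag */ t,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            });
        }
        for (auto &th : threads)
            th.join();

        MPI_Finalize();
        return 0;
    }

  Build with, e.g., "mpicxx -std=c++11 threads.cxx -pthread".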

### Fortran

* The Fortran mpi module (use mpi) does not define explicit interfaces
  for MPI routines with choice buffer arguments. Because of this, some
  Fortran compilers require extra flags to build the Fortran bindings,
  and compile-time type checking is not available for choice buffer
  routines. The Fortran 2008 module (use mpi_f08) does define explicit
  interfaces for all MPI routines, and therefore provides compile-time
  type checking even for choice buffer routines.

### Lacking channel-specific features

* MPICH does not presently support communication across heterogeneous
  platforms (e.g., a big-endian machine communicating with a
  little-endian machine).
* ch3 has known problems in some cases when threading and dynamic
  processes are used together on communicators of size greater than
  one.

### Performance issues

* MPI_Irecv operations that are not explicitly completed before
  MPI_Finalize is called may fail to complete before MPI_Finalize
  returns, and thus never complete; any matching send operations may
  then erroneously fail. "Explicitly completed" means that the request
  associated with the operation has been completed by one of the
  MPI_Test or MPI_Wait family of routines; see the sketch below.
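
  A minimal sketch of explicit completion, assuming a job with at
  least two ranks; the file name, tag, and payload are illustrative:

    /* complete.cxx - hedged sketch: every nonblocking operation is
     * finished with MPI_Wait before MPI_Finalize, so neither the
     * receive nor the matching send is left dangling. */
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, buf = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank < 2) {
            MPI_Request req;
            if (rank == 0) {
                buf = 42;
                MPI_Isend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            } else {
                MPI_Irecv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
            }

            /* Explicit completion: without this MPI_Wait, the MPI_Irecv
               above might never complete and the matching MPI_Isend
               could erroneously fail. */
            MPI_Wait(&req, MPI_STATUS_IGNORE);

            if (rank == 1)
                std::printf("received %d\n", buf);
        }

        MPI_Finalize();
        return 0;
    }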