Parallel Programming: for Multicore and Cluster Systems - P25
Innovations in hardware architecture, like hyper-threading or multicore processors, mean that parallel computing resources are available for inexpensive desktop computers. In only a few years, many standard software products will be based on concepts of parallel programming implemented on such hardware, and the range of applications will be much broader than that of scientific computing, up to now the main application area for parallel computing.

Data structures of type MPI_Group cannot be directly accessed by the programmer, but MPI provides operations to obtain information about process groups. The size of a process group can be obtained by calling

int MPI_Group_size (MPI_Group group, int *size)

where the size of the group is returned in parameter size. The rank of the calling process in a group can be obtained by calling

int MPI_Group_rank (MPI_Group group, int *rank)

where the rank is returned in parameter rank. The function

int MPI_Group_compare (MPI_Group group1, MPI_Group group2, int *res)

can be used to check whether two group representations group1 and group2 describe the same group. The parameter value res = MPI_IDENT is returned if both groups contain the same processes in the same order. The parameter value res = MPI_SIMILAR is returned if both groups contain the same processes, but group1 uses a different order than group2. The parameter value res = MPI_UNEQUAL means that the two groups contain different processes. The function

int MPI_Group_free (MPI_Group *group)

can be used to free a group representation if it is no longer needed. The group handle is set to MPI_GROUP_NULL.

Operations on Communicators

A new intra-communicator for a given group of processes can be generated by calling

int MPI_Comm_create (MPI_Comm comm, MPI_Group group, MPI_Comm *new_comm)

where comm specifies an existing communicator. The parameter group must specify a process group which is a subset of the process group associated with comm. For a correct execution, it is required that all processes of comm perform the call of MPI_Comm_create() and that each of these processes specifies the same group argument. As a result of this call, each calling process which is a member of group obtains a pointer to the new communicator in new_comm. Processes not belonging to group get MPI_COMM_NULL as return value in new_comm. MPI also provides functions to get information about communicators. These functions are implemented as local operations.
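To illustrate how these operations fit together, the following sketch (not from the book text) builds a subgroup containing the even-ranked processes of MPI_COMM_WORLD, queries it with MPI_Group_size, MPI_Group_rank, and MPI_Group_compare, and then derives a new intra-communicator with MPI_Comm_create. It additionally uses the standard MPI calls MPI_Comm_group and MPI_Group_incl to construct the subgroup; these are not shown in this excerpt.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main (int argc, char *argv[]) {
  MPI_Group world_group, even_group;
  MPI_Comm even_comm;
  int world_size, world_rank, grp_size, grp_rank, res, i;

  MPI_Init (&argc, &argv);
  MPI_Comm_size (MPI_COMM_WORLD, &world_size);
  MPI_Comm_rank (MPI_COMM_WORLD, &world_rank);

  /* Extract the process group underlying MPI_COMM_WORLD. */
  MPI_Comm_group (MPI_COMM_WORLD, &world_group);

  /* Build a subgroup containing the even-ranked processes. */
  int n = (world_size + 1) / 2;
  int *ranks = malloc (n * sizeof(int));
  for (i = 0; i < n; i++) ranks[i] = 2 * i;
  MPI_Group_incl (world_group, n, ranks, &even_group);

  /* Query the subgroup: its size and the calling process's rank in it.
     MPI_Group_rank returns MPI_UNDEFINED for processes not in the group. */
  MPI_Group_size (even_group, &grp_size);
  MPI_Group_rank (even_group, &grp_rank);

  /* Compare the two group representations; res is MPI_IDENT,
     MPI_SIMILAR, or MPI_UNEQUAL as described above. */
  MPI_Group_compare (world_group, even_group, &res);

  /* All processes of MPI_COMM_WORLD must perform this call with the
     same group argument; non-members receive MPI_COMM_NULL. */
  MPI_Comm_create (MPI_COMM_WORLD, even_group, &even_comm);

  if (even_comm != MPI_COMM_NULL) {
    printf ("world rank %d is rank %d of %d in the even communicator\n",
            world_rank, grp_rank, grp_size);
    MPI_Comm_free (&even_comm);
  }

  /* Free the group representations; the handles are set to MPI_GROUP_NULL. */
  MPI_Group_free (&even_group);
  MPI_Group_free (&world_group);
  free (ranks);
  MPI_Finalize ();
  return 0;
}

With a typical MPI installation, such a program could be compiled with mpicc and started with mpirun (e.g., on four processes), so that ranks 0 and 2 report membership in the new communicator while ranks 1 and 3 receive MPI_COMM_NULL.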