Orthogonal Processor Groups

The ORT Programming Library

Orthogonal Processor Groups are a generalization of group-SPMD programming. The idea is to structure the program code into group-SPMD phases in which different sets of processors are active during different program phases. Processor groups are subsets (subgrids) of a processor grid, and distinct groups can work in parallel on different data. Data structures are usually aligned with the dimensions of the processor grid, so selecting a processor group implies selecting distinct parts of the data structures.

Orthogonal Processor Groups are primarily developed for scientific computing, especially for computations on matrix and vector structures. Numerical algorithms can benefit from Orthogonal Processor Groups if their computational parts can be separated onto orthogonal subgrids of their central data structures and if different subsets of processors need to be active at different execution times. Restricting collective communication operations to these subgrids can further improve performance, since the runtimes of such operations grow linearly or logarithmically with the number of processors involved. In addition, Orthogonal Processor Groups make it possible to build highly optimized program structures that efficiently combine task and data parallelism, while avoiding the intricate and error-prone code that usually results from programming MPI processor groups by hand.
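The subgrid idea can be illustrated with standard MPI, which provides Cartesian grids and subgrid communicators directly. The following minimal sketch (plain MPI only, independent of the ORT library) arranges all processes as a two-dimensional grid and uses MPI_Cart_sub to form the orthogonal row and column subgrids; the MPI_Allreduce at the end involves only the processes of one row, so its runtime depends on the row length rather than on the total number of processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int nprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Arrange all processes as a two-dimensional grid. */
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Dims_create(nprocs, 2, dims);
    MPI_Comm grid;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);

    /* Orthogonal subgrids: keep one grid dimension, drop the other. */
    int keep_rows[2] = {0, 1};   /* row subgrid: same coordinate in dim 0 */
    int keep_cols[2] = {1, 0};   /* column subgrid: same coordinate in dim 1 */
    MPI_Comm row, col;
    MPI_Cart_sub(grid, keep_rows, &row);
    MPI_Cart_sub(grid, keep_cols, &col);

    /* A collective restricted to a row involves only dims[1] processes. */
    double local = (double)rank, row_sum;
    MPI_Allreduce(&local, &row_sum, 1, MPI_DOUBLE, MPI_SUM, row);

    MPI_Comm_free(&row);
    MPI_Comm_free(&col);
    MPI_Comm_free(&grid);
    MPI_Finalize();
    return 0;
}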

To support programming with Orthogonal Processor Groups, we are developing a programming library that provides a C interface. The user can create a partition of the processor grid and structure the program into parts in which different processor groups are active at distinct execution times. The library is built on top of MPI, and library calls can be used together with plain MPI calls in the same program. The C interface is inspired by the Pthreads standard and allows the execution of C functions on distinct sets of processor groups. Users may structure the whole program into group-SPMD phases or only its central computational parts, using pure MPI for the remaining code.
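The Pthreads-like pattern of running a C function on a processor group can be sketched with plain MPI as well. In the following example the names group_task, run_on_row, and report are hypothetical and chosen for illustration only; they are not the ORT interface. A function pointer and an argument are handed to a helper that executes the function on one row subgrid, while all other processes skip that phase and continue, mirroring how a program can alternate between group-SPMD phases and pure MPI code:

#include <mpi.h>
#include <stdio.h>

/* Pthreads-like task shape: a C function plus an argument, executed
   by every process of one subgroup. (Hypothetical illustration only,
   not the actual ORT interface.) */
typedef void (*group_task)(MPI_Comm group, void *arg);

/* Hypothetical helper: run `task` only on the processes of grid row
   `row`; all other processes skip the call and continue. */
static void run_on_row(MPI_Comm grid, int row, group_task task, void *arg) {
    int rank, coords[2], keep[2] = {0, 1};
    MPI_Comm rowcomm;
    MPI_Comm_rank(grid, &rank);
    MPI_Cart_coords(grid, rank, 2, coords);
    MPI_Cart_sub(grid, keep, &rowcomm);   /* this process's row subgrid */
    if (coords[0] == row)
        task(rowcomm, arg);               /* group-SPMD phase on the subgrid */
    MPI_Comm_free(&rowcomm);
}

/* Example task: each member reports its rank within the subgroup. */
static void report(MPI_Comm group, void *arg) {
    int grank, gsize;
    MPI_Comm_rank(group, &grank);
    MPI_Comm_size(group, &gsize);
    printf("task '%s': rank %d of %d in its row\n", (char *)arg, grank, gsize);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int nprocs, dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);
    MPI_Comm grid;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);

    run_on_row(grid, 0, report, "phase 1");  /* only row 0 is active */
    MPI_Barrier(grid);                       /* plain MPI in between  */
    run_on_row(grid, 1, report, "phase 2");  /* only row 1 is active */

    MPI_Comm_free(&grid);
    MPI_Finalize();
    return 0;
}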