Abu Asaduzzaman and Govipalagodage H. Gunasekara
Data dependency, Message passing interface (MPI), Multiprocessor systems, Parallel programming, Pthread
The transition from single-processor to multiprocessor systems demands that application code be more parallel. To meet this demand, parallel programming models such as multithreading and the message passing interface (MPI) have been introduced. Using such models, execution time can be reduced significantly. However, the performance of data-dependent programs and their power consumption depend on how many processors the system has and how they are organized. In this work, we explore the behavior of data dependency in parallel programming. Two parallel programming models (Pthread/C and Open MPI/C) and two computing systems (a Xeon 2x Quad-Core Workstation and an Opteron 4x Octa-Core-per-node Supercomputer) are used in this study. We use matrix multiplication (data independent) and heat conduction on a 2D surface (data dependent) as benchmark problems. Simulation results show that, on average, the Workstation takes more time to complete the execution than the Supercomputer does; however, the Workstation consumes less power. The results suggest that effective multithreaded programming may reduce the execution time of an MPI implementation with little or no increase in total power consumption. We also observe that the performance of the MPI implementation varies due to communication overhead among the processors.