Achieving Parallelism 'Easily' through Pshell

D.F. Saffioti, I. Piper, and J. Fulcher (Australia)

Keywords

Shell, Concurrent Programming Languages, Communicating Sequential Processes (CSP), Process Migration, Distributed Shared Memory, Beowulf, Cluster Computing, and Grid Computing.

Abstract

Communication and the scheduling of tasks are the two most important issues in parallel programming on clusters. Over time, various parallel programming models, such as remote threads, transparent process migration, message passing, distributed shared memory, and optimizing parallel compilers, have emerged to help programmers develop applications that work seamlessly in such environments. Considerable research has gone into optimizing these models, which typically incur large communication overheads that degrade performance. Moreover, their acceptance has varied because each introduces new problems of portability, scalability, and usability; at times these problems undermine the very notion of such computing. To overcome these issues, Pshell has been developed; it provides transparent scheduling and communication of jobs between disparate hosts. Pshell is the 'glue' for producing high-performance parallel applications that work securely and efficiently in heterogeneous environments. It represents a major shift from traditional parallel programming environments because it is a language that uses the syntax of the Bourne shell, sh(1): the Bourne shell's process and communication models extend naturally to parallel computing environments under the concurrent programming model. This paper examines the evolution of cluster computing and identifies the deficiencies in current programming models, thereby making the case for simple languages while introducing the reader to the Pshell programming environment. It also illustrates how such a language eases the writing of parallel applications and overcomes some of the limitations inherent in traditional programming models, without sacrificing performance.
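
The abstract states that Pshell adopts the syntax of sh(1) and extends the shell's process and communication models to clusters. As a minimal sketch, assuming Pshell transparently distributes ordinary shell jobs and pipelines across hosts (the worker command render_frame and all file names below are hypothetical, not taken from the paper):

    #!/bin/sh
    # Hypothetical sketch: plain Bourne shell concurrency constructs
    # (background jobs, wait, pipes), which Pshell would reportedly
    # schedule transparently across cluster hosts.
    for frame in 1 2 3 4
    do
        # '&' forks a job; under Pshell this could be placed on another node
        render_frame "$frame" > "frame$frame.out" &
    done
    wait                                  # barrier: block until all background jobs finish
    cat frame*.out | gzip > results.gz    # pipeline: communication over standard streams

The appeal suggested by the abstract is that these constructs are already familiar to every shell user, so the parallel syntax comes essentially for free; only the scheduling and communication machinery beneath them changes.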
