Pipelined Mutual Exclusion on Large-scale Cache-coherent Multiprocessors

M. Takesue (Japan)

Keywords

Parallel computer architectures, mutual exclusion, queue-based locking, pipelining, cache policies.

Abstract

This paper proposes a pipelined mutual exclusion scheme that allows different processors to access different memory blocks of a set D of shared data in a pipelined manner. In the pipelining, processors requesting access to D are first serialized into a queue for each memory block of D. Second, permission to access a memory block or one of its words is acquired and released according to the order in the queue. The first phase exploits a tree data structure whose leaf nodes are associated with the memory blocks of D. With the tree, a total order of the processors requesting D is preserved in each of the per-block queues, so that mutual exclusion on D is ensured. Pipelining is achieved because the access permission is passed on a block or word basis. The queues are distributed among the caches, which supports efficient pipelining. Results with a cycle-by-cycle simulator show that, for mutual exclusion on 256-block shared data with 256 processors, about 82 or 225 processors simultaneously access different blocks or words of the data, respectively, leading to a speedup of about 80 or 41 over pipelining without the tree. For a few small applications, the speedup ranges from about 1 to 21.
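The C++ sketch below illustrates the pipelining idea under simplifying assumptions; it is not the paper's implementation. A global fetch-and-add (next_ticket) stands in for the tree-based serialization, giving every requester the same position in every per-block queue, and a per-block counter (now_serving) stands in for the cache-distributed queues; word-level permission passing is omitted. The names NBLOCKS, NTHREADS, and access_D are illustrative.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int NBLOCKS = 8;   // memory blocks of the shared data set D
constexpr int NTHREADS = 4;  // requesting processors

std::atomic<unsigned> next_ticket{0};        // stand-in for tree serialization
std::atomic<unsigned> now_serving[NBLOCKS];  // stand-in for per-block queues
int data[NBLOCKS];                           // one word per block, for brevity

void access_D(int tid) {
    // Phase 1: obtain one position that is valid in every block's queue,
    // so all per-block queues see the same total order of requesters.
    unsigned t = next_ticket.fetch_add(1, std::memory_order_relaxed);

    // Phase 2: acquire and release per-block permission in queue order.
    for (int b = 0; b < NBLOCKS; ++b) {
        while (now_serving[b].load(std::memory_order_acquire) != t)
            ;            // spin until this block's permission arrives
        data[b] += tid;  // exclusive access to block b

        // Pass block b on to the next processor and move to block b+1:
        // up to NBLOCKS processors can work on different blocks at once.
        now_serving[b].store(t + 1, std::memory_order_release);
    }
}

int main() {
    for (auto& s : now_serving) s.store(0);
    std::vector<std::thread> ts;
    for (int i = 0; i < NTHREADS; ++i) ts.emplace_back(access_D, i);
    for (auto& th : ts) th.join();
    for (int b = 0; b < NBLOCKS; ++b) std::printf("block %d = %d\n", b, data[b]);
}
```

In the paper, the serialization is done by the tree and the queues live in the processors' caches, so permission passes cache to cache; the centralized counters above merely reproduce the resulting block-by-block wavefront.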
