M. Sato, I. Kotera, R. Egawa, H. Takizawa, and H. Kobayashi (Japan)
Parallel Computing Systems, Multi-Core Processors, Thread Scheduling, Dynamic Cache Partitioning
A modern high-performance multi-core processor has large shared cache memories. However, simultaneously running threads do not always require the entire capacity of the shared caches. Moreover, some threads cause severe performance degradation through inter-thread cache conflicts and a shortage of capacity on the shared cache. To achieve high-performance processing on multi-core processors, effective use of the shared cache memories plays an important role. In this paper, we propose a cache-aware thread scheduling policy for multi-core processors with multiple shared cache memories. The total processor performance becomes more sensitive to cache capacity shortage as the threads sharing one cache request larger capacities. The proposed policy prevents multiple threads that request a large cache capacity from sharing one cache, and thereby avoids inter-thread resource conflicts and the resulting severe performance degradation. Experimental results clearly demonstrate that the policy assists the cache partitioning mechanisms and avoids unfair performance degradation among threads. Thread scheduling based on the proposed policy improves performance by up to 10%, and by 5% on average, compared with thread scheduling without the proposed policy.
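To illustrate the idea of spreading cache-hungry threads across different shared caches, the following is a minimal sketch of one possible cache-aware assignment heuristic. It assumes each thread's cache demand is known in advance (e.g., from profiling); the thread names, demand values, and the greedy heuristic itself are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch: assign threads to shared caches so that threads with
# large cache demands do not end up sharing the same cache. This is an
# interpretation of the policy described in the abstract, not the authors'
# implementation.

from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    cache_demand_mb: float  # estimated working-set size (assumed known, e.g., via profiling)

def schedule(threads, num_shared_caches):
    """Greedy assignment: place the most cache-hungry threads first,
    each onto the shared cache with the smallest aggregate demand so far."""
    assignment = {c: [] for c in range(num_shared_caches)}
    load = [0.0] * num_shared_caches
    for t in sorted(threads, key=lambda t: t.cache_demand_mb, reverse=True):
        target = min(range(num_shared_caches), key=lambda c: load[c])
        assignment[target].append(t.name)
        load[target] += t.cache_demand_mb
    return assignment

if __name__ == "__main__":
    threads = [
        Thread("mcf", 6.0), Thread("lbm", 5.0),    # large cache demand
        Thread("sjeng", 0.5), Thread("milc", 1.0)  # small cache demand
    ]
    print(schedule(threads, num_shared_caches=2))
    # -> {0: ['mcf', 'sjeng'], 1: ['lbm', 'milc']}
```

Under this heuristic the two high-demand threads are placed on different shared caches, which matches the abstract's goal of preventing threads with large cache requirements from competing for the same cache.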