A key problem in designing and efficiently using large multiprocessor systems is how to extract the parallelism available in a given problem while keeping overheads such as communication and synchronization as small as possible. Until a few years ago, computer system architects, parallel algorithm designers, and software developers were concerned primarily with the computing aspects of parallel processing. Communication aspects, such as the amount of data exchanged, the number of message transfers required, and the number and frequency of synchronizations needed, received relatively little attention. There are instances where considerable attention was paid to data layout and data movement, but such efforts were motivated primarily by the difficulties of data movement on particular machine architectures (specifically, SIMD architectures) rather than by an appreciation of the intrinsic impact of data movement on the complexity of the computation.

In recent years, however, a belief has grown that, in order to obtain reasonable performance from multiple-processor systems, the communication aspects of an algorithm and of its implementation must be treated as at least as important as the computing aspects. The extent to which communication influences performance was recognized first by hardware designers and system architects; as one put it, “the most critical system control mechanisms in a distributed computer are clearly those involved with interprocess and interprocessor communication”. This fact has since been accepted by algorithm designers and others in the software community as well. It is now well recognized that a high degree of parallelism alone is not sufficient to speed up parallel execution time.
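The last point can be made concrete with a simple cost model. The sketch below is illustrative only and is not taken from the paper: it assumes a fixed-size workload whose compute time divides evenly among `p` processors, plus a hypothetical communication term whose per-step cost grows linearly with `p` (as in an all-to-all exchange). The parameter names (`t_seq`, `comm_per_step`, `steps`) are invented for this example.

```python
def parallel_time(t_seq, p, comm_per_step=0.0, steps=0):
    """Fixed-size workload: compute time shared by p processors,
    plus a communication term that grows with p (illustrative model)."""
    return t_seq / p + steps * comm_per_step * p

def speedup(t_seq, p, comm_per_step=0.0, steps=0):
    """Ratio of sequential time to modeled parallel time."""
    return t_seq / parallel_time(t_seq, p, comm_per_step, steps)

# With zero communication cost, speedup is ideal (equal to p).
# With even a small per-step cost, speedup peaks and then degrades
# as more processors are added.
for p in (1, 4, 16, 64, 256):
    ideal = speedup(1000.0, p)
    with_comm = speedup(1000.0, p, comm_per_step=0.05, steps=10)
    print(f"p={p:3d}  ideal={ideal:6.1f}  with communication={with_comm:5.1f}")
```

Under these assumed parameters, the modeled speedup rises up to around 64 processors and then falls at 256, even though the available parallelism keeps increasing, which is exactly the phenomenon the paragraph above describes.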
Keywords: Shared Memory · Data Dependency · Nested Loop · Communication Step · Communication Requirement