Tag | Ind1 | Ind2 | Content
---|---|---|---
000 | | | 02918nam a22003137a 4500
001 | | | 16490336
003 | | | IIITD
005 | | | 20230311130759.0
008 | | | 101005s2011 ne a b 001 0 eng
010 | | | _a 2010039584
020 | | | _a9780128046050
040 | | | _aDLC _cDLC _dDLC
042 | | | _apcc
050 | 0 | 0 | _aQA76.642 _b.P29 2011
082 | 0 | 0 | _a005.2 _222 _bPAC-I
100 | 1 | | _aPacheco, Peter S.
245 | 1 | 3 | _aAn introduction to parallel programming / _cby Peter S. Pacheco and Matthew Malensek.
250 | | | _a2nd ed.
260 | | | _aAmsterdam : _bMorgan Kaufmann, _c©2022
300 | | | _axix, 468 p. : _bill. ; _c25 cm.
504 | | | _aIncludes bibliographical references (p. 459-468) and index.
505 | 8 | | _aMachine generated contents note: 1 Why Parallel Computing 1.1 Why We Need Ever-Increasing Performance 1.2 Why We're Building Parallel Systems 1.3 Why We Need to Write Parallel Programs 1.4 How Do We Write Parallel Programs? 1.5 What We'll Be Doing 1.6 Concurrent, Parallel, Distributed 1.7 The Rest of the Book 1.8 A Word of Warning 1.9 Typographical Conventions 1.10 Summary 1.11 Exercises 2 Parallel Hardware and Parallel Software 2.1 Some Background 2.2 Modifications to the von Neumann Model 2.3 Parallel Hardware 2.4 Parallel Software 2.5 Input and Output 2.6 Performance 2.7 Parallel Program Design 2.8 Writing and Running Parallel Programs 2.9 Assumptions 2.10 Summary 2.11 Exercises 3 Distributed Memory Programming with MPI 3.1 Getting Started 3.2 The Trapezoidal Rule in MPI 3.3 Dealing with I/O 3.4 Collective Communication 3.5 MPI Derived Datatypes 3.7 A Parallel Sorting Algorithm 3.8 Summary 3.9 Exercises 3.10 Programming Assignments 4 Shared Memory Programming with Pthreads 4.1 Processes, Threads and Pthreads 4.2 Hello, World 4.3 Matrix-Vector Multiplication 4.4 Critical Sections 4.5 Busy-Waiting 4.6 Mutexes 4.7 Producer-Consumer Synchronization and Semaphores 4.8 Barriers and Condition Variables 4.9 Read-Write Locks 4.10 Caches, Cache-Coherence, and False Sharing 4.11 Thread-Safety 4.12 Summary 4.13 Exercises 4.14 Programming Assignments 5 Shared Memory Programming with OpenMP 5.1 Getting Started 5.2 The Trapezoidal Rule 5.3 Scope of Variables 5.4 The Reduction Clause 5.5 The Parallel For Directive 5.6 More About Loops in OpenMP: Sorting 5.7 Scheduling Loops 5.8 Producers and Consumers 5.9 Caches, Cache-Coherence, and False Sharing 5.10 Thread-Safety 5.11 Summary 5.12 Exercises 5.13 Programming Assignments 6 Parallel Program Development 6.1 Two N-Body Solvers 6.2 Tree Search 6.3 A Word of Caution 6.4 Which API? 6.5 Summary 6.6 Exercises 6.7 Programming Assignments 7 Where to Go from Here.
650 | | 0 | _aParallel programming
650 | | 0 | _aComputer science
700 | | | _aMalensek, Matthew
906 | | | _a7 _bcbc _corignew _d1 _eecip _f20 _gy-gencatlg
942 | | | _2ddc _cBK
999 | | | _c170957 _d170957