Parallel Programming in OpenMP

ISBN-10: 1558606718

ISBN-13: 9781558606715

Edition: 2001

Authors: Rohit Chandra, Dave Kohr, Ramesh Menon, Leonardo Dagum, Dror Maydan

List price: $70.95

OpenMP was designed to ease the task of parallel programming. This text guides both the novice and the expert in developing applications using the standard.

Book details

Copyright year: 2001
Publisher: Elsevier Science & Technology
Publication date: 10/11/2000
Binding: Paperback
Pages: 231
Size: 7.25" wide x 9.00" long x 0.75" tall
Weight: 0.946 lbs
Language: English

Rohit Chandra is currently a Chief Scientist at NARUS, Inc., a provider of internet business infrastructure solutions. He previously was a Principal Engineer in the Compiler Group at Silicon Graphics, where he helped design and implement OpenMP.

Dave Kohr is currently a Member of the Technical Staff at NARUS, Inc. He previously was a Member of the Technical Staff in the Compiler Group at Silicon Graphics, where he helped define and implement the OpenMP standard.

Ramesh Menon is a Staff Engineer at NARUS, Inc. Before joining NARUS, he was a Staff Engineer at SGI, representing the company in the OpenMP forum. He was the founding Chairman of the OpenMP Architecture Review Board (ARB) and supervised the writing of the first OpenMP specification.

Leonardo Dagum is currently a Member of the Technical Staff at NARUS, Inc. He previously was a Member of the Technical Staff in the Compiler Group at Silicon Graphics, where he helped define and implement the OpenMP standard.

Dror Maydan is currently Director of Software at Tensilica, Inc., a provider of application-specific processor technology. He previously was an Engineering Department Manager in the Compiler Group at Silicon Graphics, where he helped design and implement OpenMP.

Table of contents

Performance with OpenMP
A First Glimpse of OpenMP
The OpenMP Parallel Computer
Why OpenMP?
History of OpenMP
Navigating the Rest of the Book
Getting Started with OpenMP
OpenMP from 10,000 Meters
OpenMP Compiler Directives or Pragmas
Parallel Control Structures
Communication and Data Environment
Parallelizing a Simple Loop
Runtime Execution Model of an OpenMP Program
Communication and Data Scoping
Synchronization in the Simple Loop Example
Final Words on the Simple Loop Example
A More Complicated Loop
Explicit Synchronization
The reduction Clause
Expressing Parallelism with Parallel Regions
Concluding Remarks
Exploiting Loop-Level Parallelism
Form and Usage of the parallel do Directive
Restrictions on Parallel Loops
Meaning of the parallel do Directive
Loop Nests and Parallelism
Controlling Data Sharing
General Properties of Data Scope Clauses
The shared Clause
The private Clause
Default Variable Scopes
Changing Default Scoping Rules
Parallelizing Reduction Operations
Private Variable Initialization and Finalization
Removing Data Dependences
Why Data Dependences Are a Problem
The First Step: Detection
The Second Step: Classification
The Third Step: Removal
Enhancing Performance
Ensuring Sufficient Work
Scheduling Loops to Balance the Load
Static and Dynamic Scheduling
Scheduling Options
Comparison of Runtime Scheduling Behavior
Concluding Remarks
Beyond Loop-Level Parallelism: Parallel Regions
Form and Usage of the parallel Directive
Clauses on the parallel Directive
Restrictions on the parallel Directive
Meaning of the parallel Directive
Parallel Regions and SPMD-Style Parallelism
threadprivate Variables and the copyin Clause
The threadprivate Directive
The copyin Clause
Work-Sharing in Parallel Regions
A Parallel Task Queue
Dividing Work Based on Thread Number
Work-Sharing Constructs in OpenMP
Restrictions on Work-Sharing Constructs
Block Structure
Entry and Exit
Nesting of Work-Sharing Constructs
Orphaning of Work-Sharing Constructs
Data Scoping of Orphaned Constructs
Writing Code with Orphaned Work-Sharing Constructs
Nested Parallel Regions
Directive Nesting and Binding
Controlling Parallelism in an OpenMP Program
Dynamically Disabling the parallel Directives
Controlling the Number of Threads
Dynamic Threads
Runtime Library Calls and Environment Variables
Concluding Remarks
Data Conflicts and the Need for Synchronization
Getting Rid of Data Races
Examples of Acceptable Data Races
Synchronization Mechanisms in OpenMP
Mutual Exclusion Synchronization
The Critical Section Directive
The atomic Directive
Runtime Library Lock Routines
Event Synchronization
Ordered Sections
The master Directive
Custom Synchronization: Rolling Your Own
The flush Directive
Some Practical Considerations
Concluding Remarks
Key Factors That Impact Performance
Coverage and Granularity
Load Balance
Performance-Tuning Methodology
Dynamic Threads
Bus-Based and NUMA Machines
Concluding Remarks
A Quick Reference to OpenMP