Introduction to Parallel Computing

Table of Contents

  1. Abstract
  2. Overview
    1. What is Parallel Computing?
    2. Why Use Parallel Computing?
  3. Concepts and Terminology
    1. von Neumann Computer Architecture
    2. Flynn's Classical Taxonomy
    3. Some General Parallel Terminology
  4. Parallel Computer Memory Architectures
    1. Shared Memory
    2. Distributed Memory
    3. Hybrid Distributed-Shared Memory
  5. Parallel Programming Models
    1. Overview
    2. Shared Memory Model
    3. Threads Model
    4. Message Passing Model
    5. Data Parallel Model
    6. Other Models
  6. Designing Parallel Programs
    1. Automatic vs. Manual Parallelization
    2. Understand the Problem and the Program
    3. Partitioning
    4. Communications
    5. Synchronization
    6. Data Dependencies
    7. Load Balancing
    8. Granularity
    9. I/O
    10. Limits and Costs of Parallel Programming
    11. Performance Analysis and Tuning
  7. Parallel Examples
    1. Array Processing
    2. PI Calculation
    3. Simple Heat Equation
    4. 1-D Wave Equation
  8. References and More Information


Abstract


This presentation covers the basics of parallel computing. Beginning with a brief overview and some concepts and terminology associated with parallel computing, the topics of parallel memory architectures and programming models are then explored. These topics are followed by a discussion on a number of issues related to designing parallel programs. The last portion of the presentation is spent examining how to parallelize several different types of serial programs.

Level/Prerequisites: None



Overview

What is Parallel Computing?



Overview

Why Use Parallel Computing?



Concepts and Terminology

von Neumann Architecture



Concepts and Terminology

Flynn's Classical Taxonomy

Single Instruction, Single Data (SISD):
  • A serial (non-parallel) computer
  • Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle
  • Single data: only one data stream is being used as input during any one clock cycle
  • Deterministic execution
  • This is the oldest and, until recently, the most prevalent form of computer
  • Examples: most PCs, single CPU workstations and mainframes
Single Instruction, Multiple Data (SIMD):
  • A type of parallel computer
  • Single instruction: All processing units execute the same instruction at any given clock cycle
  • Multiple data: Each processing unit can operate on a different data element
  • This type of machine typically has an instruction dispatcher, a very high-bandwidth internal network, and a very large array of very small-capacity processing units.
  • Best suited for specialized problems characterized by a high degree of regularity, such as image processing (a small illustrative loop follows this list).
  • Synchronous (lockstep) and deterministic execution
  • Two varieties: Processor Arrays and Vector Pipelines
  • Examples (some extinct):
    • Processor Arrays: Connection Machine CM-2, Maspar MP-1, MP-2
    • Vector Pipelines: IBM 9000, Cray C90, Fujitsu VP, NEC SX-2, Hitachi S820
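
A minimal C sketch of the kind of loop this architecture executes well: every iteration applies the same instruction to a different data element (the function and array names are illustrative only).

    #include <stddef.h>

    /* Each iteration applies the same operation (an add) to a different
       data element -- the pattern a SIMD processor array or vector
       pipeline executes in lockstep across many elements at once.      */
    void vector_add(const float *a, const float *b, float *c, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            c[i] = a[i] + b[i];   /* same instruction, element i's data */
        }
    }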



Multiple Instruction, Single Data (MISD):
  • Multiple instruction: each processing unit operates on the data independently via its own instruction stream
  • Single data: a single data stream is fed into the multiple processing units
  • Few actual examples of this class of parallel computer have ever existed

Multiple Instruction, Multiple Data (MIMD):
  • Currently, the most common type of parallel computer
  • Multiple Instruction: every processor may be executing a different instruction stream
  • Multiple Data: every processor may be working with a different data stream
  • Execution can be synchronous or asynchronous, deterministic or non-deterministic
  • Examples: most current supercomputers, networked parallel computer "grids" and multi-processor SMP computers - including some types of PCs.



Concepts and Terminology

Some General Parallel Terminology

Like everything else, parallel computing has its own "jargon". Some of the more commonly used terms associated with parallel computing are listed below.

Task
A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor.

Parallel Task
A task that can be executed by multiple processors safely (i.e., it yields correct results).

Serial Execution
Execution of a program sequentially, one statement at a time. In the simplest sense, this is what happens on a one-processor machine. However, virtually all parallel programs have sections that must be executed serially.

Parallel Execution
Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time.

Shared Memory
From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations regardless of where the physical memory actually exists.

Distributed Memory
In hardware, refers to network based memory access for physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communications to access memory on other machines where other tasks are executing.

Communications
Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed.

Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until one or more other tasks reach the same or a logically equivalent point.

Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.
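
As a concrete illustration, a minimal sketch assuming OpenMP: the barrier directive is a synchronization point that no thread passes until every thread in the team has reached it.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        #pragma omp parallel
        {
            int id = omp_get_thread_num();
            printf("thread %d: doing its share of the work\n", id);

            /* Synchronization point: no thread continues until every
               thread in the team has arrived here.                    */
            #pragma omp barrier

            printf("thread %d: past the barrier\n", id);
        }
        return 0;
    }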

Granularity
In parallel computing, granularity is a qualitative measure of the ratio of computation to communication.

Observed Speedup
Observed speedup of a code which has been parallelized, defined as:

    wall-clock time of serial execution
    -------------------------------------
    wall-clock time of parallel execution

One of the simplest and most widely used indicators of a parallel program's performance. For example, a code that runs in 100 seconds serially and in 25 seconds in parallel has an observed speedup of 4.

Parallel Overhead
The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel overhead can include factors such as task start-up time, synchronizations, data communications, software overhead imposed by parallel languages, libraries and the operating system, and task termination time.

Massively Parallel
Refers to the hardware that comprises a given parallel system - having many processors. The meaning of "many" keeps increasing, but currently means more than 1000 processors.

Scalability
Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include the hardware (particularly memory-CPU and network bandwidths), the application algorithm, the associated parallel overhead, and the characteristics of the specific application and its coding.



Parallel Computer Memory Architectures

Shared Memory

General Characteristics:

Uniform Memory Access (UMA):

Non-Uniform Memory Access (NUMA):

Advantages:

Disadvantages:



Parallel Computer Memory Architectures

Distributed Memory

General Characteristics:

Advantages:

Disadvantages:



Parallel Computer Memory Architectures

Hybrid Distributed-Shared Memory



Parallel Programming Models

Overview



Parallel Programming Models

Shared Memory Model

Implementations:



Parallel Programming Models

Threads Model

Implementations:
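
Common implementations include POSIX Threads (Pthreads), a library-based standard, and OpenMP, a compiler-directive-based standard. A minimal sketch assuming OpenMP (the loop and array are illustrative only):

    #include <stdio.h>
    #include <omp.h>

    #define N 1000   /* illustrative problem size */

    int main(void)
    {
        double a[N];

        /* The directive below creates a team of threads; each thread
           executes a subset of the loop iterations, all operating on
           the same shared array 'a'.                                  */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            a[i] = i * 2.0;
        }

        printf("last element = %f\n", a[N - 1]);
        return 0;
    }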



Parallel Programming Models

Message Passing Model

Implementations:
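
The Message Passing Interface (MPI) is the de facto industry standard for this model. A minimal sketch in C, where one task sends a value and another receives it with explicit, cooperative send/receive calls (the value and tag are illustrative):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                       /* data owned by task 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Cooperative operation: the receive must match the send. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("task 1 received %d from task 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }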



Parallel Programming Models

Data Parallel Model


Implementations:



Parallel Programming Models

Other Models

Hybrid:

Single Program Multiple Data (SPMD):

Multiple Program Multiple Data (MPMD):



Designing Parallel Programs

Automatic vs. Manual Parallelization



Designing Parallel Programs

Understand the Problem and the Program



Designing Parallel Programs

Partitioning

Domain Decomposition:

Functional Decomposition:



Designing Parallel Programs

Communications

Who Needs Communications?

Factors to Consider:



Designing Parallel Programs

Synchronization

Types of Synchronization:



Designing Parallel Programs

Data Dependencies

Definition:

Examples:
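
A classic case is a loop-carried dependence, where each iteration needs the result of the previous one. A minimal C sketch (the array name is illustrative):

    #include <stddef.h>

    /* Loop-carried dependence: iteration j reads a[j-1], which iteration
       j-1 writes, so the iterations cannot be handed to different tasks
       without changing the result.                                      */
    void scale_chain(double *a, size_t n)
    {
        for (size_t j = 1; j < n; j++) {
            a[j] = a[j - 1] * 2.0;
        }
    }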

How to Handle Data Dependencies:



Designing Parallel Programs

Load Balancing

How to Achieve Load Balance:



Designing Parallel Programs

Granularity

Computation / Communication Ratio:

Fine-grain Parallelism:


Coarse-grain Parallelism:


Which is Best?

Designing Parallel Programs

I/O

The Bad News:

The Good News:



Designing Parallel Programs

Limits and Costs of Parallel Programming

Amdahl's Law:
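
Amdahl's Law states that potential program speedup is limited by the fraction of code (P) that can be parallelized. With N processors:

                        1
    speedup  =  ---------------
                 (1 - P) + P/N

For example, if 90% of a code can be parallelized (P = 0.90), the speedup can never exceed 10, no matter how many processors are used.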

Complexity:

Portability:

Resource Requirements:

Scalability:



Designing Parallel Programs

Performance Analysis and Tuning



Parallel Examples

Array Processing

Embarrassingly parallel array calculation


Array Processing
Parallel Solution 1

One Possible Solution:
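
A sketch of one possible solution, assuming an SPMD message-passing style (MPI): each task determines from its rank which contiguous chunk of the array it owns and works on that chunk independently. Because every element's calculation is independent, no communication is required during the computation (the array size and the per-element function do_work are illustrative only).

    #include <stdio.h>
    #include <mpi.h>

    #define N 100000                      /* illustrative array size */

    /* Hypothetical per-element calculation; any independent function of
       the index fits this embarrassingly parallel pattern.             */
    static double do_work(int i) { return i * 2.0; }

    int main(int argc, char **argv)
    {
        static double data[N];
        int rank, ntasks;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        /* Each task works only on its own contiguous chunk of the array;
           no communication is needed while the elements are computed.   */
        int chunk = N / ntasks;
        int start = rank * chunk;
        int end   = (rank == ntasks - 1) ? N : start + chunk;

        for (int i = start; i < end; i++)
            data[i] = do_work(i);

        printf("task %d computed elements %d..%d\n", rank, start, end - 1);
        MPI_Finalize();
        return 0;
    }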


Array Processing
Parallel Solution 2: Pool of Tasks

Pool of Tasks Scheme:

Discussion:



Parallel Examples

PI Calculation

  • The value of PI can be calculated in a number of ways. Consider the following method of approximating PI:
    1. Inscribe a circle in a square
    2. Randomly generate points in the square
    3. Determine the number of points in the square that are also in the circle
    4. Let r be the number of points in the circle divided by the number of points in the square
    5. PI ~ 4 r, since r approximates the ratio of the circle's area to the square's area, which is PI/4
    6. Note that the more points generated, the better the approximation

  • Serial pseudo code for this procedure:

    
    npoints = 10000
    circle_count = 0
    
    do j = 1,npoints
      generate 2 random numbers between 0 and 1
      xcoordinate = random1 ; ycoordinate = random2
      if (xcoordinate, ycoordinate) inside circle
      then circle_count = circle_count + 1
    end do
    
    PI = 4.0*circle_count/npoints
    
    


  • Note that most of the time in running this program would be spent executing the loop

  • Leads to an embarrassingly parallel solution


PI Calculation
Parallel Solution
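
Because each loop iteration is independent, the work can be split so that every task generates its own share of the random points and keeps a private count of circle hits; a single reduction at the end combines the counts. A minimal sketch assuming MPI (the point count and seeding scheme are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const long npoints = 10000000;    /* illustrative total point count */
        int rank, ntasks;
        long my_count = 0, circle_count = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        srand(12345 + rank);              /* a different seed per task */
        long my_points = npoints / ntasks;

        /* Each task independently generates its share of the points and
           counts how many fall inside the inscribed circle.             */
        for (long j = 0; j < my_points; j++) {
            double x = (double)rand() / RAND_MAX;
            double y = (double)rand() / RAND_MAX;
            double dx = x - 0.5, dy = y - 0.5;
            if (dx * dx + dy * dy <= 0.25)   /* circle of radius 0.5 */
                my_count++;
        }

        /* Combine all partial counts on task 0 and form the estimate. */
        MPI_Reduce(&my_count, &circle_count, 1, MPI_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("PI is approximately %f\n",
                   4.0 * circle_count / (double)(my_points * ntasks));

        MPI_Finalize();
        return 0;
    }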



Parallel Examples

Simple Heat Equation

  • Most problems in parallel computing require communication among the tasks. A number of common problems require communication with "neighbor" tasks.

  • The heat equation describes the temperature change over time, given an initial temperature distribution and boundary conditions.

  • A finite differencing scheme is employed to solve the heat equation numerically on a square region.

  • The initial temperature is zero on the boundaries and high in the middle.

  • The boundary temperature is held at zero.

  • For the fully explicit problem, a time stepping algorithm is used. The elements of a 2-dimensional array represent the temperature at points on the square.

  • The calculation of an element is dependent upon neighbor element values.

  • A serial program would contain code like:

      
    do iy = 2, ny - 1
    do ix = 2, nx - 1
      u2(ix, iy) = u1(ix, iy) +
          cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.*u1(ix,iy)) +
          cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.*u1(ix,iy))
    end do
    end do
      
      

Initial heat conditions


Simple Heat Equation
Parallel Solution 1

Heat equation - partitioned data
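
A sketch of the partitioned approach, assuming MPI and a row-wise decomposition: each task owns a block of rows plus one "ghost" row at each edge, exchanges its border rows with its neighbors each time step, and then applies the same stencil as the serial code to its own rows (the grid width and block size are illustrative).

    #include <mpi.h>

    #define NX    100          /* illustrative grid width                 */
    #define NYLOC 25           /* rows owned by this task                 */

    /* Local block of rows, with one extra "ghost" row at the top
       (row 0) and bottom (row NYLOC+1) for neighbor data.                */
    static double u1[NYLOC + 2][NX], u2[NYLOC + 2][NX];

    void time_step(int rank, int ntasks, double cx, double cy)
    {
        int up   = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int down = (rank < ntasks - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Exchange border rows with the neighboring tasks.               */
        MPI_Sendrecv(u1[1],         NX, MPI_DOUBLE, up,   0,
                     u1[NYLOC + 1], NX, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(u1[NYLOC],     NX, MPI_DOUBLE, down, 1,
                     u1[0],         NX, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Tasks holding a physical (global) boundary row keep it fixed.  */
        int iy_first = (up   == MPI_PROC_NULL) ? 2         : 1;
        int iy_last  = (down == MPI_PROC_NULL) ? NYLOC - 1 : NYLOC;

        /* Same stencil as the serial code, applied to the local rows.    */
        for (int iy = iy_first; iy <= iy_last; iy++)
            for (int ix = 1; ix < NX - 1; ix++)
                u2[iy][ix] = u1[iy][ix]
                    + cx * (u1[iy][ix+1] + u1[iy][ix-1] - 2.0 * u1[iy][ix])
                    + cy * (u1[iy+1][ix] + u1[iy-1][ix] - 2.0 * u1[iy][ix]);
    }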


Simple Heat Equation
Parallel Solution 2: Overlapping Communication and Computation
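
This variant hides communication latency with non-blocking sends and receives: start the border exchange, update the interior cells that need no neighbor data while the messages are in flight, then wait and finish the border cells. A sketch of that ordering, assuming MPI and the same layout as the previous sketch:

    /* Same u1/u2 layout, NX/NYLOC sizes and neighbor ranks as the
       previous sketch; only the ordering of communication and
       computation changes.  (As before, tasks owning a physical
       boundary row would skip updating it; that check is omitted
       here to keep the sketch short.)                                   */
    void time_step_overlapped(int up, int down, double cx, double cy)
    {
        MPI_Request reqs[4];

        /* 1. Start the halo exchange, but do not wait for it yet.       */
        MPI_Irecv(u1[0],         NX, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(u1[NYLOC + 1], NX, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Isend(u1[1],         NX, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(u1[NYLOC],     NX, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &reqs[3]);

        /* 2. Update interior rows, which need no ghost data, while the
              messages are in transit.                                    */
        for (int iy = 2; iy <= NYLOC - 1; iy++)
            for (int ix = 1; ix < NX - 1; ix++)
                u2[iy][ix] = u1[iy][ix]
                    + cx * (u1[iy][ix+1] + u1[iy][ix-1] - 2.0 * u1[iy][ix])
                    + cy * (u1[iy+1][ix] + u1[iy-1][ix] - 2.0 * u1[iy][ix]);

        /* 3. Wait for the exchange, then update the two border rows
              that read the freshly received ghost rows.                  */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        int border[2] = { 1, NYLOC };
        for (int b = 0; b < 2; b++) {
            int iy = border[b];
            for (int ix = 1; ix < NX - 1; ix++)
                u2[iy][ix] = u1[iy][ix]
                    + cx * (u1[iy][ix+1] + u1[iy][ix-1] - 2.0 * u1[iy][ix])
                    + cy * (u1[iy+1][ix] + u1[iy-1][ix] - 2.0 * u1[iy][ix]);
        }
    }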



Parallel Examples

1-D Wave Equation


1-D Wave Equation
Parallel Solution

Wave equation partition
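
The amplitude array is partitioned so that each task owns a contiguous block of points; because updating a point needs only its immediate left and right neighbors at the current time step, each task exchanges just two endpoint values per step. A sketch of one time step, assuming MPI and the standard explicit finite-difference update (the block size and the constant c are illustrative):

    #include <mpi.h>

    #define NPTSLOC 250        /* points owned by this task               */

    /* Amplitude at the previous, current, and next time step, with one
       ghost point at each end for the neighbors' boundary values.        */
    static double a_old[NPTSLOC + 2], a_cur[NPTSLOC + 2], a_new[NPTSLOC + 2];

    void wave_step(int rank, int ntasks, double c)
    {
        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < ntasks - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Exchange a single endpoint value with each neighboring task.   */
        MPI_Sendrecv(&a_cur[1],           1, MPI_DOUBLE, left,  0,
                     &a_cur[NPTSLOC + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&a_cur[NPTSLOC],     1, MPI_DOUBLE, right, 1,
                     &a_cur[0],           1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Standard explicit finite-difference update: the new amplitude
           at a point depends on its current and previous values and on
           the current values of its two neighbors.  (The two fixed
           global endpoints would be held constant in a full code.)       */
        for (int i = 1; i <= NPTSLOC; i++)
            a_new[i] = 2.0 * a_cur[i] - a_old[i]
                     + c * (a_cur[i+1] - 2.0 * a_cur[i] + a_cur[i-1]);
    }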


This completes the tutorial.




References and More Information