Concurrent computing
From Wikipedia, the free encyclopedia
For a more theoretical discussion, see Concurrency (computer science).
Concurrent computing is a form of computing in which several computations are
executed concurrently—during overlapping time periods—instead of sequentially, with
one completing before the next starts.
This is a property of a system—whether a program, a computer, or a network—where
there is a separate execution point or "thread of control" for each process. A concurrent
system is one where a computation can advance without waiting for all other
computations to complete.[1]
Concurrent computing is a form of modular programming. In this paradigm, an overall
computation is factored into subcomputations that may be executed concurrently.
Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch
Hansen, and C.A.R. Hoare.
Contents
1 Introduction
1.1 Coordinating access to shared resources
1.2 Advantages
2 Models
3 Implementation
3.1 Interaction and communication
4 History
5 Prevalence
6 Languages supporting concurrent programming
7 See also
8 Notes
9 References
10 Sources
11 Further reading
12 External links
Introduction
See also: Parallel computing
The concept of concurrent computing is frequently confused with the related but distinct
concept of parallel computing,[2][3] although both can be described as "multiple processes
executing during the same period of time". In parallel computing, execution occurs at
the same physical instant: for example, on separate processors of a multi-
processor machine, with the goal of speeding up computations—parallel computing is
impossible on a single-core processor, as only one computation can occur at any
instant (during any single clock cycle).[a] By contrast, concurrent computing consists of
process lifetimes overlapping, but execution need not happen at the same instant. The
goal here is to model processes in the outside world that happen concurrently, such as
multiple clients accessing a server at the same time. Structuring software systems as
composed of multiple concurrent, communicating parts can be useful for tackling
complexity, regardless of whether the parts can be executed in parallel.[4]:1
For example, concurrent processes can be executed on one core by interleaving the
execution steps of each process via time-sharing slices: only one process runs at a
time, and if it does not complete during its time slice, it is paused, another process
begins or resumes, and then later the original process is resumed. In this way, multiple
processes are part-way through execution at a single instant, but only one process is
being executed at that instant.[citation needed]
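As a minimal illustration (a sketch, not drawn from any particular system), the following C++ program starts two operating-system threads that each perform three steps. Confined to a single core, only one thread executes at any instant, yet the scheduler's time-slicing interleaves their progress, so both tasks are part-way through execution at the same time:
#include <iostream>
#include <string>
#include <thread>

// Each task performs three numbered steps, printing one line per step.
void task(const std::string& name)
{
    for (int step = 1; step <= 3; ++step)
    {
        std::cout << name + " step " + std::to_string(step) + "\n";
        std::this_thread::yield();   // invite the scheduler to switch to the other task
    }
}

int main()
{
    std::thread a(task, "task A");   // the names "task A"/"task B" are illustrative
    std::thread b(task, "task B");
    a.join();
    b.join();
    return 0;
}
A typical run prints the six lines with the two tasks' steps interleaved in some nondeterministic order; building it requires a thread-capable toolchain (for example, g++ -std=c++11 -pthread).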
Concurrent computations may be executed in parallel,[2][5] for example, by assigning each
process to a separate processor or processor core, or distributing a computation across
a network. In general, however, the languages, tools, and techniques for parallel
programming might not be suitable for concurrent programming, and vice versa.[citation needed]
The exact timing of when tasks in a concurrent system are executed depends on
the scheduling, and tasks need not always be executed concurrently. For example,
given two tasks, T1 and T2 (a concrete illustration follows this list):[citation needed]
T1 may be executed and finished before T2 or vice versa (serial and sequential)
T1 and T2 may be executed alternately (serial and concurrent)
T1 and T2 may be executed simultaneously at the same instant of time
(parallel and concurrent)
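As a concrete illustration (the step names are chosen here purely for exposition), suppose each task consists of two steps, with T1 performing a1 then b1 and T2 performing a2 then b2. The schedule a1 b1 a2 b2 is serial and sequential: T1 finishes before T2 begins. The schedule a1 a2 b1 b2 is serial but concurrent: the tasks interleave, with only one step executing at any moment. Executing a1 and a2 at the same instant on two different processors makes the schedule parallel as well as concurrent.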
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when
these are explicitly distinguished, concurrent/sequential and parallel/serial are used as
opposing pairs.[6] A schedule in which tasks execute one at a time (serially, no
parallelism), without interleaving (sequentially, no concurrency: no task begins until the
prior task ends) is called a serial schedule. A set of tasks that can be scheduled serially
is serializable, which simplifies concurrency control.[citation needed]
Coordinating access to shared resources
The main challenge in designing concurrent programs is concurrency control: ensuring
the correct sequencing of the interactions or communications between different
computational executions, and coordinating access to resources that are shared among
executions.[5] Potential problems include race conditions, deadlocks, and resource
starvation. For example, consider the following algorithm to make withdrawals from a
checking account represented by the shared resource balance:
1 bool withdraw(int withdrawal)
2 {
3     if (balance >= withdrawal)
4     {
5         balance -= withdrawal;
6         return true;
7     }
8     return false;
9 }
Suppose balance = 500, and two concurrent threads make the
calls withdraw(300) and withdraw(350). If line 3 in both operations executes before
line 5, both operations will find that balance >= withdrawal evaluates to true, and
execution will proceed to subtracting the withdrawal amount. However, since both
processes perform their withdrawals, the total amount withdrawn will end up being more
than the original balance. These sorts of problems with shared resources benefit from
the use of concurrency control, or non-blocking algorithms.
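One common form of concurrency control is mutual exclusion: making the check on line 3 and the update on line 5 a single critical section, so that only one thread can be between them at a time. The following is a minimal sketch of that idea in C++ (the use of std::mutex here and the name balance_mutex are illustrative choices, not the only possible remedy):
#include <mutex>

int balance = 500;
std::mutex balance_mutex;   // guards all access to balance

bool withdraw(int withdrawal)
{
    // The lock is held for the whole check-then-update sequence and is
    // released automatically when lock goes out of scope.
    std::lock_guard<std::mutex> lock(balance_mutex);
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
With the mutex held across both the test and the subtraction, the concurrent calls withdraw(300) and withdraw(350) can no longer both succeed: whichever thread acquires the lock second sees the already reduced balance and returns false. Non-blocking alternatives achieve the same effect with, for example, an atomic compare-and-swap loop instead of a lock.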
Advantages
The advantages of concurrent computing include:
Increased program throughput—parallel execution of a concurrent program allows the
number of tasks completed in a given time to increase proportionally to the number of
processors according to Gustafson's law (stated after this list)
High responsiveness for input/output—input/output-intensive programs mostly wait for
input or output operations to complete. Concurrent programming allows the time that would
be spent waiting to be used for another task.[citation needed]
More appropriate program structure—some problems and problem domains are well-
suited to representation as concurrent tasks or processes.[citation needed]
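For reference, Gustafson's law expresses the scaled speedup on N processors as S(N) = N + (1 − N)s, where s is the fraction of the work that must run serially; when s is small, achievable throughput grows nearly in proportion to the number of processors.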
Models
Models for understanding and analyzing concurrent computing systems include:
Actor model
o Object-capability model for security
Input/output automaton
Software transactional memory (STM)
Petri nets
Process calculi such as
o Ambient calculus
o Calculus of communicating systems (CCS)
o Communicating sequential processes (CSP)
o Join-calculus
o π-calculus