
Commit 5338817

twhuang committed: Merge branch 'dev'
2 parents f423a10 + 92bd8af

File tree

201 files changed: +7052 -5118 lines


docs/Algorithms.html

Lines changed: 2 additions & 2 deletions
@@ -87,8 +87,8 @@ <h1>
 </div>
 </div>
 </div>
-<script src="search-v1.js"></script>
-<script src="searchdata-v1.js" async="async"></script>
+<script src="search-v2.js"></script>
+<script src="searchdata-v2.js" async="async"></script>
 <footer><nav>
 <div class="m-container">
 <div class="m-row">

docs/AsyncTasking.html

Lines changed: 49 additions & 49 deletions
Large diff not rendered.

docs/BenchmarkTaskflow.html

Lines changed: 8 additions & 8 deletions
@@ -49,7 +49,7 @@ <h1>
 <span class="m-breadcrumb"><a href="install.html">Building and Installing</a> &raquo;</span>
 Benchmark Taskflow
 </h1>
-<div class="m-block m-default">
+<nav class="m-block m-default">
 <h3>Contents</h3>
 <ul>
 <li><a href="#CompileAndRunBenchmarks">Compile and Run Benchmarks</a></li>
@@ -62,8 +62,8 @@ <h3>Contents</h3>
 </ul>
 </li>
 </ul>
-</div>
-<section id="CompileAndRunBenchmarks"><h2><a href="#CompileAndRunBenchmarks">Compile and Run Benchmarks</a></h2><p>To build the benchmark code, enable the CMake option <code>TF_BUILD_BENCHMARKS</code> to <code>ON</code> as follows:</p><pre class="m-console"><span class="gp">#</span> under /taskflow/build
+</nav>
+<section id="CompileAndRunBenchmarks"><h2><a href="#CompileAndRunBenchmarks">Compile and Run Benchmarks</a></h2><p>To build the benchmark code, enable the CMake option <code>TF_BUILD_BENCHMARKS</code> to <code>ON</code> as follows:</p><pre class="m-console"><span class="gp"># </span>under /taskflow/build
 <span class="go">~$ cmake ../ -DTF_BUILD_BENCHMARKS=ON</span>
 <span class="go">~$ make</span></pre><p>After you successfully build the benchmark code, you can find all benchmark instances in the <code>benchmarks/</code> folder. You can run the executable of each instance in the corresponding folder.</p><pre class="m-console"><span class="go">~$ cd benchmarks &amp; ls</span>
 <span class="go">black_scholes binary_tree graph_traversal ...</span>
@@ -87,9 +87,9 @@ <h3>Contents</h3>
 <span class="go"> -r,--num_rounds UINT number of rounds (default=1)</span>
 <span class="go"> -m,--model TEXT model name tbb|omp|tf (default=tf)</span></pre><p>We currently implement the following instances that are commonly used by the parallel computing community to evaluate the system performance.</p><table class="m-table"><thead><tr><th>Instance</th><th>Description</th></tr></thead><tbody><tr><td>binary_tree</td><td>traverses a complete binary tree</td></tr><tr><td>black_scholes</td><td>computes option pricing with Black-Shcoles Models</td></tr><tr><td>graph_traversal</td><td>traverses a randomly generated direct acyclic graph</td></tr><tr><td>linear_chain</td><td>traverses a linear chain of tasks</td></tr><tr><td>mandelbrot</td><td>exploits imbalanced workloads in a Mandelbrot set</td></tr><tr><td>matrix_multiplication</td><td>multiplies two 2D matrices</td></tr><tr><td>mnist</td><td>trains a neural network-based image classifier on the MNIST dataset</td></tr><tr><td>parallel_sort</td><td>sorts a range of items</td></tr><tr><td>reduce_sum</td><td>sums a range of items using reduction</td></tr><tr><td>wavefront</td><td>propagates computations in a 2D grid</td></tr><tr><td>linear_pipeline</td><td>pipeline scheduling on a linear chain of pipes</td></tr><tr><td>graph_pipeline</td><td>pipeline scheduling on a graph of pipes</td></tr></tbody></table></section><section id="ConfigureRunOptions"><h2><a href="#ConfigureRunOptions">Configure Run Options</a></h2><p>We implement consistent options for each benchmark instance. Common options are:</p><table class="m-table"><thead><tr><th>option</th><th>value</th><th>function</th></tr></thead><tbody><tr><td><code>-h</code></td><td>none</td><td>display the help message</td></tr><tr><td><code>-t</code></td><td>integer</td><td>configure the number of threads to run</td></tr><tr><td><code>-r</code></td><td>integer</td><td>configure the number of rounds to run</td></tr><tr><td><code>-m</code></td><td>string</td><td>configure the baseline models to run, tbb, omp, or tf</td></tr></tbody></table><p>You can configure the benchmarking environment by giving different options.</p><section id="SpecifyTheRunModel"><h3><a href="#SpecifyTheRunModel">Specify the Run Model</a></h3><p>In addition to a Taskflow-based implementation for each benchmark instance, we have implemented two baseline models using the state-of-the-art parallel programming libraries, <a href="https://www.openmp.org/">OpenMP</a> and <a href="https://github.com/oneapi-src/oneTBB">Intel TBB</a>, to measure and evaluate the performance of Taskflow. You can select different implementations by passing the option <code>-m</code>.</p><pre class="m-console"><span class="go">~$ ./graph_traversal -m tf # run the Taskflow implementation (default)</span>
 <span class="go">~$ ./graph_traversal -m tbb # run the TBB implementation</span>
-<span class="go">~$ ./graph_traversal -m omp # run the OpenMP implementation</span></pre></section><section id="SpecifyTheNumberOfThreads"><h3><a href="#SpecifyTheNumberOfThreads">Specify the Number of Threads</a></h3><p>You can configure the number of threads to run a benchmark instance by passing the option <code>-t</code>. The default value is one.</p><pre class="m-console"><span class="gp">#</span> run the Taskflow implementation using <span class="m">4</span> threads
-<span class="go">~$ ./graph_traversal -m tf -t 4</span></pre><p>Depending on your environment, you may need to use <code>taskset</code> to set the CPU affinity of the running process. This allows the OS scheduler to keep process on the same CPU(s) as long as practical for performance reason.</p><pre class="m-console"><span class="gp">#</span> affine the process to <span class="m">4</span> CPUs, CPU <span class="m">0</span>, CPU <span class="m">1</span>, CPU <span class="m">2</span>, and CPU <span class="m">3</span>
-<span class="go">~$ taskset -c 0-3 graph_traversal -t 4 </span></pre></section><section id="SpecifyTheNumberOfRounds"><h3><a href="#SpecifyTheNumberOfRounds">Specify the Number of Rounds</a></h3><p>Each benchmark instance evaluates the runtime of the implementation at different problem sizes. Each problem size corresponds to one iteration. You can configure the number of rounds per iteration to average the runtime.</p><pre class="m-console"><span class="gp">#</span> measure the runtime in an average of <span class="m">10</span> runs
+<span class="go">~$ ./graph_traversal -m omp # run the OpenMP implementation</span></pre></section><section id="SpecifyTheNumberOfThreads"><h3><a href="#SpecifyTheNumberOfThreads">Specify the Number of Threads</a></h3><p>You can configure the number of threads to run a benchmark instance by passing the option <code>-t</code>. The default value is one.</p><pre class="m-console"><span class="gp"># </span>run the Taskflow implementation using <span class="m">4</span> threads
+<span class="go">~$ ./graph_traversal -m tf -t 4</span></pre><p>Depending on your environment, you may need to use <code>taskset</code> to set the CPU affinity of the running process. This allows the OS scheduler to keep process on the same CPU(s) as long as practical for performance reason.</p><pre class="m-console"><span class="gp"># </span>affine the process to <span class="m">4</span> CPUs, CPU <span class="m">0</span>, CPU <span class="m">1</span>, CPU <span class="m">2</span>, and CPU <span class="m">3</span>
+<span class="go">~$ taskset -c 0-3 graph_traversal -t 4 </span></pre></section><section id="SpecifyTheNumberOfRounds"><h3><a href="#SpecifyTheNumberOfRounds">Specify the Number of Rounds</a></h3><p>Each benchmark instance evaluates the runtime of the implementation at different problem sizes. Each problem size corresponds to one iteration. You can configure the number of rounds per iteration to average the runtime.</p><pre class="m-console"><span class="gp"># </span>measure the runtime <span class="k">in</span> an average of <span class="m">10</span> runs
 <span class="go">~$ ./graph_traversal -r 10</span>
 <span class="go">|V|+|E| Runtime</span>
 <span class="go"> 2 0.109 # the runtime value 0.109 is an average of 10 runs</span>
@@ -135,8 +135,8 @@ <h3>Contents</h3>
 </div>
 </div>
 </div>
-<script src="search-v1.js"></script>
-<script src="searchdata-v1.js" async="async"></script>
+<script src="search-v2.js"></script>
+<script src="searchdata-v2.js" async="async"></script>
 <footer><nav>
 <div class="m-container">
 <div class="m-row">

docs/CUDASTDExecutionPolicy.html

Lines changed: 7 additions & 7 deletions
@@ -49,20 +49,20 @@ <h1>
 <span class="m-breadcrumb"><a href="cudaStandardAlgorithms.html">CUDA Standard Algorithms</a> &raquo;</span>
 Execution Policy
 </h1>
-<div class="m-block m-default">
+<nav class="m-block m-default">
 <h3>Contents</h3>
 <ul>
 <li><a href="#CUDASTDExecutionPolicyIncludeTheHeader">Include the Header</a></li>
 <li><a href="#CUDASTDParameterizePerformance">Parameterize Performance</a></li>
 <li><a href="#CUDASTDDefineAnExecutionPolicy">Define an Execution Policy</a></li>
 <li><a href="#CUDASTDAllocateMemoryBufferForAlgorithms">Allocate Memory Buffer for Algorithms</a></li>
 </ul>
-</div>
-<p>Taskflow provides standalone template methods for expressing common parallel algorithms on a GPU. Each of these methods is governed by an <em>execution policy object</em> to configure the kernel execution parameters.</p><section id="CUDASTDExecutionPolicyIncludeTheHeader"><h2><a href="#CUDASTDExecutionPolicyIncludeTheHeader">Include the Header</a></h2><p>You need to include the header file, <code>taskflow/cuda/cudaflow.hpp</code>, for creating a CUDA execution policy object.</p></section><section id="CUDASTDParameterizePerformance"><h2><a href="#CUDASTDParameterizePerformance">Parameterize Performance</a></h2><p>Taskflow parameterizes most CUDA algorithms in terms of <em>the number of threads per block</em> and <em>units of work per thread</em>, which can be specified in the execution policy template type, <a href="classtf_1_1cudaExecutionPolicy.html" class="m-doc">tf::<wbr />cudaExecutionPolicy</a>. The design is inspired by <a href="https://moderngpu.github.io/">Modern GPU Programming</a> authored by Sean Baxter to achieve high-performance GPU computing.</p></section><section id="CUDASTDDefineAnExecutionPolicy"><h2><a href="#CUDASTDDefineAnExecutionPolicy">Define an Execution Policy</a></h2><p>The following example defines an execution policy object, <code>policy</code>, which configures (1) each block to invoke 512 threads and (2) each of these <code>512</code> threads to perform <code>11</code> units of work. Block size must be a power of two. It is always a good idea to specify an odd number in the second parameter to avoid bank conflicts.</p><pre class="m-code"><span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span> <span class="mi">11</span><span class="o">&gt;</span> <span class="n">policy</span><span class="p">;</span></pre><aside class="m-note m-info"><h4>Note</h4><p>To use CUDA standard algorithms, you need to include the header taskflow/cudaflow.hpp.</p></aside><p>By default, the execution policy object is associated with the CUDA <em>default stream</em> (i.e., 0). Default stream can incur significant overhead due to the global synchronization. You can associate an execution policy with another stream as shown below:</p><pre class="m-code"><span class="c1">// assign a stream to a policy at construction time</span>
-<span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span> <span class="mi">11</span><span class="o">&gt;</span> <span class="n">policy</span><span class="p">(</span><span class="n">my_stream</span><span class="p">);</span>
+</nav>
+<p>Taskflow provides standalone template methods for expressing common parallel algorithms on a GPU. Each of these methods is governed by an <em>execution policy object</em> to configure the kernel execution parameters.</p><section id="CUDASTDExecutionPolicyIncludeTheHeader"><h2><a href="#CUDASTDExecutionPolicyIncludeTheHeader">Include the Header</a></h2><p>You need to include the header file, <code>taskflow/cuda/cudaflow.hpp</code>, for creating a CUDA execution policy object.</p></section><section id="CUDASTDParameterizePerformance"><h2><a href="#CUDASTDParameterizePerformance">Parameterize Performance</a></h2><p>Taskflow parameterizes most CUDA algorithms in terms of <em>the number of threads per block</em> and <em>units of work per thread</em>, which can be specified in the execution policy template type, <a href="classtf_1_1cudaExecutionPolicy.html" class="m-doc">tf::<wbr />cudaExecutionPolicy</a>. The design is inspired by <a href="https://moderngpu.github.io/">Modern GPU Programming</a> authored by Sean Baxter to achieve high-performance GPU computing.</p></section><section id="CUDASTDDefineAnExecutionPolicy"><h2><a href="#CUDASTDDefineAnExecutionPolicy">Define an Execution Policy</a></h2><p>The following example defines an execution policy object, <code>policy</code>, which configures (1) each block to invoke 512 threads and (2) each of these <code>512</code> threads to perform <code>11</code> units of work. Block size must be a power of two. It is always a good idea to specify an odd number in the second parameter to avoid bank conflicts.</p><pre class="m-code"><span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span><span class="w"> </span><span class="mi">11</span><span class="o">&gt;</span><span class="w"> </span><span class="n">policy</span><span class="p">;</span><span class="w"></span></pre><aside class="m-note m-info"><h4>Note</h4><p>To use CUDA standard algorithms, you need to include the header taskflow/cudaflow.hpp.</p></aside><p>By default, the execution policy object is associated with the CUDA <em>default stream</em> (i.e., 0). Default stream can incur significant overhead due to the global synchronization. You can associate an execution policy with another stream as shown below:</p><pre class="m-code"><span class="c1">// assign a stream to a policy at construction time</span>
+<span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span><span class="w"> </span><span class="mi">11</span><span class="o">&gt;</span><span class="w"> </span><span class="n">policy</span><span class="p">(</span><span class="n">my_stream</span><span class="p">);</span><span class="w"></span>

 <span class="c1">// assign another stream to the policy</span>
-<span class="n">policy</span><span class="p">.</span><span class="n">stream</span><span class="p">(</span><span class="n">another_stream</span><span class="p">);</span></pre><p>All the CUDA standard algorithms in Taskflow are asynchronous with respect to the stream assigned to the execution policy. This enables high execution efficiency for large GPU workloads that call for many different algorithms. You can synchronize the execution at your own wish by calling <code>synchronize</code>.</p><pre class="m-code"><span class="n">policy</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span> <span class="c1">// synchronize the associated stream</span></pre><p>The best-performing configurations for each algorithm, each GPU architecture, and each data type can vary significantly. You should experiment different configurations and find the optimal tuning parameters for your applications. A default policy is given in <a href="namespacetf.html#aa18f102977c3257b75e21fde05efdb68" class="m-doc">tf::<wbr />cudaDefaultExecutionPolicy</a>.</p><pre class="m-code"><span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span> <span class="n">default_policy</span><span class="p">;</span></pre></section><section id="CUDASTDAllocateMemoryBufferForAlgorithms"><h2><a href="#CUDASTDAllocateMemoryBufferForAlgorithms">Allocate Memory Buffer for Algorithms</a></h2><p>A key difference between our CUDA standard algorithms and others (e.g., Thrust) is the <em>memory management</em>. Unlike CPU-parallel algorithms, many GPU-parallel algorithms require extra buffer to store the temporary results during the multi-phase computation, for instance, <a href="namespacetf.html#a8a872d2a0ac73a676713cb5be5aa688c" class="m-doc">tf::<wbr />cuda_reduce</a> and <a href="namespacetf.html#a06804cb1598e965febc7bd35fc0fbbb0" class="m-doc">tf::<wbr />cuda_sort</a>. We <em>DO NOT</em> allocate any memory during these algorithms call but ask you to provide the memory buffer required for each of such algorithms. This decision seems to complicate the code a little bit, but it gives applications freedom to optimize the memory; also, it makes all algorithm calls capturable to a CUDA graph to improve the execution efficiency.</p></section>
+<span class="n">policy</span><span class="p">.</span><span class="n">stream</span><span class="p">(</span><span class="n">another_stream</span><span class="p">);</span><span class="w"></span></pre><p>All the CUDA standard algorithms in Taskflow are asynchronous with respect to the stream assigned to the execution policy. This enables high execution efficiency for large GPU workloads that call for many different algorithms. You can synchronize the execution at your own wish by calling <code>synchronize</code>.</p><pre class="m-code"><span class="n">policy</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span><span class="w"> </span><span class="c1">// synchronize the associated stream</span></pre><p>The best-performing configurations for each algorithm, each GPU architecture, and each data type can vary significantly. You should experiment different configurations and find the optimal tuning parameters for your applications. A default policy is given in <a href="namespacetf.html#aa18f102977c3257b75e21fde05efdb68" class="m-doc">tf::<wbr />cudaDefaultExecutionPolicy</a>.</p><pre class="m-code"><span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span><span class="w"> </span><span class="n">default_policy</span><span class="p">;</span><span class="w"></span></pre></section><section id="CUDASTDAllocateMemoryBufferForAlgorithms"><h2><a href="#CUDASTDAllocateMemoryBufferForAlgorithms">Allocate Memory Buffer for Algorithms</a></h2><p>A key difference between our CUDA standard algorithms and others (e.g., Thrust) is the <em>memory management</em>. Unlike CPU-parallel algorithms, many GPU-parallel algorithms require extra buffer to store the temporary results during the multi-phase computation, for instance, <a href="namespacetf.html#a8a872d2a0ac73a676713cb5be5aa688c" class="m-doc">tf::<wbr />cuda_reduce</a> and <a href="namespacetf.html#a06804cb1598e965febc7bd35fc0fbbb0" class="m-doc">tf::<wbr />cuda_sort</a>. We <em>DO NOT</em> allocate any memory during these algorithms call but ask you to provide the memory buffer required for each of such algorithms. This decision seems to complicate the code a little bit, but it gives applications freedom to optimize the memory; also, it makes all algorithm calls capturable to a CUDA graph to improve the execution efficiency.</p></section>
 </div>
 </div>
 </div>
@@ -101,8 +101,8 @@ <h3>Contents</h3>
 </div>
 </div>
 </div>
-<script src="search-v1.js"></script>
-<script src="searchdata-v1.js" async="async"></script>
+<script src="search-v2.js"></script>
+<script src="searchdata-v2.js" async="async"></script>
 <footer><nav>
 <div class="m-container">
 <div class="m-row">
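
The execution-policy page changed above describes a complete workflow: define a tf::cudaExecutionPolicy, bind it to a stream, supply the temporary buffer that algorithms such as tf::cuda_reduce require, then synchronize. A minimal sketch of that workflow follows, assuming plain CUDA Runtime calls for stream and memory management; the buffer-size query tf::cuda_reduce_buffer_size and the exact tf::cuda_reduce argument order are assumptions for illustration, not taken from this commit.

#include <taskflow/cuda/cudaflow.hpp>

int main() {
  const unsigned N = 1000000;

  // 512 threads per block, 11 units of work per thread, bound to a
  // dedicated stream at construction time (avoids the default stream)
  cudaStream_t stream;
  cudaStreamCreate(&stream);
  tf::cudaExecutionPolicy<512, 11> policy(stream);

  // device-side input and result; fill data before reducing
  int *data, *res;
  cudaMalloc(&data, N * sizeof(int));
  cudaMalloc(&res, sizeof(int));
  cudaMemset(res, 0, sizeof(int));  // start the sum from zero

  // the algorithm allocates no memory itself; the caller supplies the
  // temporary buffer (this size query is an assumed helper, for illustration)
  void* buf;
  cudaMalloc(&buf, tf::cuda_reduce_buffer_size<decltype(policy), int>(N));

  // asynchronous reduction on the policy's stream
  // (the device lambda requires compiling with nvcc --extended-lambda)
  tf::cuda_reduce(policy, data, data + N, res,
                  [] __device__ (int a, int b) { return a + b; }, buf);

  // block until the associated stream has drained
  policy.synchronize();

  cudaFree(buf);
  cudaFree(res);
  cudaFree(data);
  cudaStreamDestroy(stream);
}

Because the call is asynchronous with respect to the policy's stream, several such algorithm calls can be queued back to back and synchronized once at the end, which is what makes them capturable into a CUDA graph.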
