An example of writing a C++ extension for PyTorch. See here for the accompanying tutorial.
There are a few "sights" you can metaphorically visit in this repository:
- Inspect the C++ and CUDA extensions in the `cpp/` and `cuda/` folders,
- Build C++ and/or CUDA extensions by going into the `cpp/` or `cuda/` folder and executing `python setup.py install` (a `setup.py` sketch follows this list),
- JIT-compile C++ and/or CUDA extensions by going into the `cpp/` or `cuda/` folder and calling `python jit.py`, which will JIT-compile the extension and load it (see the JIT sketch below),
- Benchmark Python vs. C++ vs. CUDA by running `python benchmark.py {py, cpp, cuda} [--cuda]` (see the timing sketch below),
- Run gradient checks on the code by running `python grad_check.py {py, cpp, cuda} [--cuda]` (see the gradient-check sketch below),
- Run output checks on the code by running `python check.py {forward, backward} [--cuda]` (see the output-check sketch below).
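
To give a rough idea of what the `setup.py` build step involves: PyTorch ships build helpers in `torch.utils.cpp_extension`. The following is a minimal sketch, not the exact file from this repo; the extension and source names (`my_extension_cpp`, `my_extension.cpp`) are placeholders:

```python
# Minimal setup.py sketch for a C++ extension (names are placeholders).
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="my_extension_cpp",
    ext_modules=[
        # Compiles my_extension.cpp into an importable module called my_extension_cpp.
        # For a CUDA build, use CUDAExtension and add the .cu source file.
        CppExtension("my_extension_cpp", ["my_extension.cpp"]),
    ],
    # BuildExtension supplies the compiler and linker flags PyTorch needs.
    cmdclass={"build_ext": BuildExtension},
)
```

After `python setup.py install`, the built module can be imported from Python like any other package.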
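
The JIT route goes through `torch.utils.cpp_extension.load`, which compiles the sources on the fly and returns the loaded module. Again a sketch with placeholder names rather than the exact contents of `jit.py`:

```python
# JIT-compile and load a C++ extension at runtime (names are placeholders).
from torch.utils.cpp_extension import load

my_extension_cpp = load(
    name="my_extension_cpp",
    sources=["my_extension.cpp"],  # add the .cu file here for a CUDA build
    verbose=True,                  # print compiler output while building
)

# Functions exposed from the C++ side are now available as attributes,
# e.g. my_extension_cpp.forward(...) / my_extension_cpp.backward(...).
```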
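
The benchmark compares wall-clock time of the Python, C++, and CUDA implementations. The core idea is a simple timing loop; this sketch (with a hypothetical `forward_fn`) only illustrates the pattern, including the CUDA synchronization needed for fair GPU timings:

```python
# Sketch of a timing loop for comparing implementations (forward_fn is hypothetical).
import time
import torch

def benchmark(forward_fn, *inputs, runs=1000):
    forward_fn(*inputs)  # warm-up run (triggers any lazy initialization / compilation)
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # don't let pending GPU work skew the start time
    start = time.time()
    for _ in range(runs):
        forward_fn(*inputs)
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for all GPU kernels before stopping the clock
    return (time.time() - start) / runs  # average seconds per call
```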
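
The gradient check verifies a hand-written backward pass against numerical gradients using `torch.autograd.gradcheck`. A minimal sketch, assuming the extension is wrapped in a custom `torch.autograd.Function` (here called `MyFunction`, a placeholder):

```python
# Sketch of a gradient check for a custom autograd Function (MyFunction is a placeholder).
import torch

def run_grad_check(MyFunction, device="cpu"):
    # gradcheck wants double precision and inputs that require gradients.
    a = torch.randn(4, 5, dtype=torch.double, device=device, requires_grad=True)
    b = torch.randn(4, 5, dtype=torch.double, device=device, requires_grad=True)

    # Compares MyFunction's backward against numerically estimated gradients;
    # fails if they disagree beyond the tolerances.
    if torch.autograd.gradcheck(MyFunction.apply, (a, b), eps=1e-6, atol=1e-4):
        print("Ok")
```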
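
The output check compares the results of the C++/CUDA implementation against the pure-Python reference. The essential comparison is an element-wise tolerance check; a sketch with hypothetical `python_fn` and `cpp_fn`:

```python
# Sketch of an output check against a Python reference (python_fn / cpp_fn are hypothetical).
import torch

def check_forward(python_fn, cpp_fn, *inputs):
    expected = python_fn(*inputs)
    actual = cpp_fn(*inputs)
    # allclose tolerates small floating-point differences between implementations.
    assert torch.allclose(expected, actual, atol=1e-6), "forward outputs diverge"
    print("Ok")
```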