Commit f61bf93

cluster2600 and claude committed
Fix typos and grammar in documentation
Documentation fixes:
- Fix capitalization (pythonic → Pythonic) for consistency across repo
- Fix grammar (re-structured → restructured, bring up → introduce)
- Fix subject-verb agreement (list interfaces are → is)
- Fix SECURITY.md header reference (nvmath-python → CUDA Python)
- Fix spelling errors (transferring, absence)
- Fix verb forms (Set up, with vs against)
- Fix directory paths (test/cython → tests/cython)

Note: No dependency version bumps per maintainer request

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent 4c21824 · commit f61bf93

4 files changed: 17 additions, 17 deletions


README.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -5,23 +5,23 @@ CUDA Python is the home for accessing NVIDIA’s CUDA platform from Python. It c
 * [cuda.core](https://nvidia.github.io/cuda-python/cuda-core/latest): Pythonic access to CUDA Runtime and other core functionalities
 * [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest): Low-level Python bindings to CUDA C APIs
 * [cuda.cccl.cooperative](https://nvidia.github.io/cccl/cuda_cooperative/): A Python module providing CCCL's reusable block-wide and warp-wide *device* primitives for use within Numba CUDA kernels
-* [cuda.cccl.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python module for easy access to CCCL's highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc, that are callable on the *host*
+* [cuda.cccl.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python module for easy access to CCCL's highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc. that are callable on the *host*
 * [numba.cuda](https://nvidia.github.io/numba-cuda/): Numba's target for CUDA GPU programming by directly compiling a restricted subset of Python code into CUDA kernels and device functions following the CUDA execution model.
 * [nvmath-python](https://docs.nvidia.com/cuda/nvmath-python/latest): Pythonic access to NVIDIA CPU & GPU Math Libraries, with both [*host*](https://docs.nvidia.com/cuda/nvmath-python/latest/overview.html#host-apis) and [*device* (nvmath.device)](https://docs.nvidia.com/cuda/nvmath-python/latest/overview.html#device-apis) APIs. It also provides low-level Python bindings to host C APIs ([nvmath.bindings](https://docs.nvidia.com/cuda/nvmath-python/latest/bindings/index.html)).

-CUDA Python is currently undergoing an overhaul to improve existing and bring up new components. All of the previously available functionalities from the `cuda-python` package will continue to be available, please refer to the [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest) documentation for installation guide and further detail.
+CUDA Python is currently undergoing an overhaul to improve existing and introduce new components. All of the previously available functionalities from the `cuda-python` package will continue to be available, please refer to the [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest) documentation for installation guide and further detail.

 ## cuda-python as a metapackage

-`cuda-python` is being re-structured to become a metapackage that contains a collection of subpackages. Each subpackage is versioned independently, allowing installation of each component as needed.
+`cuda-python` is being restructured to become a metapackage that contains a collection of subpackages. Each subpackage is versioned independently, allowing installation of each component as needed.

 ### Subpackage: `cuda.core`

-The `cuda.core` package offers idiomatic, pythonic access to CUDA Runtime and other functionalities.
+The `cuda.core` package offers idiomatic, Pythonic access to CUDA Runtime and other functionalities.

 The goals are to

-1. Provide **idiomatic ("pythonic")** access to CUDA Driver, Runtime, and JIT compiler toolchain
+1. Provide **idiomatic ("Pythonic")** access to CUDA Driver, Runtime, and JIT compiler toolchain
 2. Focus on **developer productivity** by ensuring end-to-end CUDA development can be performed quickly and entirely in Python
 3. **Avoid homegrown** Python abstractions for CUDA for new Python GPU libraries starting from scratch
 4. **Ease** developer **burden of maintaining** and catching up with latest CUDA features
@@ -31,7 +31,7 @@ The goals are to

 The `cuda.bindings` package is a standard set of low-level interfaces, providing full coverage of and access to the CUDA host APIs from Python.

-The list of available interfaces are:
+The list of available interfaces is:

 * CUDA Driver
 * CUDA Runtime
```
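The hunk above ends on the `cuda.bindings` interface list. For context, the sketch below shows roughly what calling those low-level bindings looks like; it is an illustrative example, not part of this commit, and it assumes the `from cuda.bindings import driver` module layout documented for recent releases.

```python
# Illustrative sketch (not part of this commit): querying the CUDA driver via
# the low-level cuda.bindings interfaces listed in the hunk above.
# Assumes the `from cuda.bindings import driver` layout of recent releases.
from cuda.bindings import driver


def check(err):
    # Each binding returns its CUresult status explicitly, mirroring the C API.
    if err != driver.CUresult.CUDA_SUCCESS:
        raise RuntimeError(f"CUDA driver error: {err}")


(err,) = driver.cuInit(0)
check(err)
err, count = driver.cuDeviceGetCount()
check(err)
err, version = driver.cuDriverGetVersion()
check(err)
print(f"{count} CUDA device(s), driver version {version}")
```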

SECURITY.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -6,9 +6,9 @@ including all source code repositories managed through our organization.
 If you need to report a security issue, please use the appropriate contact points outlined
 below. **Please do not report security vulnerabilities through GitHub/GitLab.**

-## Reporting Potential Security Vulnerability in nvmath-python
+## Reporting Potential Security Vulnerability in CUDA Python

-To report a potential security vulnerability in nvmath-python:
+To report a potential security vulnerability in CUDA Python:

 - Web: [Security Vulnerability Submission
 Form](https://www.nvidia.com/object/submit-security-vulnerability.html)
```

cuda_core/README.md

Lines changed: 7 additions & 7 deletions
```diff
@@ -1,4 +1,4 @@
-# `cuda.core`: (experimental) pythonic CUDA module
+# `cuda.core`: (experimental) Pythonic CUDA module

 Currently under active development; see [the documentation](https://nvidia.github.io/cuda-python/cuda-core/latest/) for more details.

@@ -13,16 +13,16 @@ This subpackage adheres to the developing practices described in the parent meta
 ## Testing

 To run these tests:
-* `python -m pytest tests/` against editable installations
-* `pytest tests/` against installed packages
+* `python -m pytest tests/` with editable installations
+* `pytest tests/` with installed packages

 ### Cython Unit Tests

 Cython tests are located in `tests/cython` and need to be built. These builds have the same CUDA Toolkit header requirements as [those of cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest/install.html#requirements) where the major.minor version must match `cuda.bindings`. To build them:

-1. Setup environment variable `CUDA_HOME` with the path to the CUDA Toolkit installation.
-2. Run `build_tests` script located in `test/cython` appropriate to your platform. This will both cythonize the tests and build them.
+1. Set up environment variable `CUDA_HOME` with the path to the CUDA Toolkit installation.
+2. Run `build_tests` script located in `tests/cython` appropriate to your platform. This will both cythonize the tests and build them.

 To run these tests:
-* `python -m pytest tests/cython/` against editable installations
-* `pytest tests/cython/` against installed packages
+* `python -m pytest tests/cython/` with editable installations
+* `pytest tests/cython/` with installed packages
```
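As a companion to the testing steps in the hunk above, here is a small, hypothetical helper (not part of the repository or this commit) that checks the `CUDA_HOME` prerequisite and then runs the Cython test suite the same way `python -m pytest tests/cython/` would; the toolkit-layout check (an `include/` subdirectory) is an assumption.

```python
# Hypothetical convenience script, not part of the repo: verify CUDA_HOME and
# run the Cython tests as described above. Assumes a standard CUDA Toolkit
# layout with an include/ directory containing the headers.
import os
import subprocess
import sys


def run_cython_tests() -> int:
    cuda_home = os.environ.get("CUDA_HOME")
    if not cuda_home or not os.path.isdir(os.path.join(cuda_home, "include")):
        sys.exit("Set CUDA_HOME to a CUDA Toolkit installation (headers are required).")
    # Equivalent to `python -m pytest tests/cython/` for an editable install.
    return subprocess.call([sys.executable, "-m", "pytest", "tests/cython/"])


if __name__ == "__main__":
    sys.exit(run_cython_tests())
```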

cuda_core/docs/source/getting-started.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -8,7 +8,7 @@ including:
 - Compiling and launching CUDA kernels
 - Asynchronous concurrent execution with CUDA graphs, streams and events
 - Coordinating work across multiple CUDA devices
-- Allocating, transfering, and managing device memory
+- Allocating, transferring, and managing device memory
 - Runtime linking of device code with Link-Time Optimization (LTO)
 - and much more!

@@ -94,7 +94,7 @@ s.sync()
 ```

 This example demonstrates one of the core workflows enabled by `cuda.core`: compiling and launching CUDA code.
-Note the clean, Pythonic interface, and absense of any direct calls to the CUDA runtime/driver APIs.
+Note the clean, Pythonic interface, and absence of any direct calls to the CUDA runtime/driver APIs.

 ## Examples and Recipes

````
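The second hunk above touches the prose just after the getting-started code example, of which only the closing `s.sync()` and fence are visible here. For readers without that page open, a minimal sketch of the compile-and-launch workflow it describes follows; it is not part of the commit, and it assumes the `cuda.core.experimental` names documented at the time of writing (`Device`, `Program`, `ProgramOptions`, `LaunchConfig`, `launch`). See the linked getting-started guide for the authoritative example.

```python
# Illustrative sketch (not from this commit): the "compiling and launching CUDA
# code" workflow referenced in getting-started.md, using only cuda.core.
# Assumes the experimental API names Device/Program/ProgramOptions/LaunchConfig/launch.
from cuda.core.experimental import Device, LaunchConfig, Program, ProgramOptions, launch

kernel_source = r"""
extern "C" __global__ void noop() {
    // Intentionally empty: the point is the compile/launch round trip.
}
"""

dev = Device()
dev.set_current()
stream = dev.create_stream()

# JIT-compile the C++ source for the current device's architecture.
arch = "".join(str(i) for i in dev.compute_capability)
prog = Program(kernel_source, code_type="c++",
               options=ProgramOptions(std="c++17", arch=f"sm_{arch}"))
kernel = prog.compile("cubin").get_kernel("noop")

# Configure, launch, and synchronize; as the changed line notes, there are no
# direct CUDA runtime/driver calls anywhere in the script.
launch(stream, LaunchConfig(grid=1, block=32), kernel)
stream.sync()
```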