Tags: pytorch/ao
Fix TORCHAO_SKIP_LOADING_SO_FILES behavior (#3189)

**Summary:** Today, if users want to force skipping the loading of the .so files, they can pass `TORCHAO_SKIP_LOADING_SO_FILES=1`. However, if they pass `TORCHAO_SKIP_LOADING_SO_FILES=0` or `TORCHAO_SKIP_LOADING_SO_FILES=false`, we still skip loading these files. This commit fixes this behavior by:

1. Renaming this env var to `TORCHAO_FORCE_SKIP_LOADING_SO_FILES`
2. Only accepting the value "1" for this env var

**Test Plan:**
```
$ TORCHAO_FORCE_SKIP_LOADING_SO_FILES=1 python -c "import torchao"
Skipping import of cpp extensions due to TORCHAO_FORCE_SKIP_LOADING_SO_FILES=1

# No effect
$ TORCHAO_FORCE_SKIP_LOADING_SO_FILES=0 python -c "import torchao"
$ TORCHAO_FORCE_SKIP_LOADING_SO_FILES=False python -c "import torchao"
$ TORCHAO_FORCE_SKIP_LOADING_SO_FILES=false python -c "import torchao"
```
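A minimal sketch of the semantics this fix implies, assuming the gate runs at import time; the env var name and message are taken from the test output above, but the surrounding code is illustrative, not the actual torchao source:

```python
import os

# Only the literal string "1" opts into skipping; "0", "false", "False", etc. have no effect.
if os.getenv("TORCHAO_FORCE_SKIP_LOADING_SO_FILES") == "1":
    print("Skipping import of cpp extensions due to TORCHAO_FORCE_SKIP_LOADING_SO_FILES=1")
else:
    pass  # proceed with loading the compiled .so extensions as usual
```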
Fix setuptools version for docs build (#3150)

* Add numpy install for docs build
* Install torch nightly
* Add numpy
* Python3.10
* Python3.11
* Set use_cpp flag
* use_cpp
* use env var
* no build isolation
* fix pip
* fix pip
* fix pip
* Fix pip upgrade command in doc_build.yml
enable select for NVFP4Tensor (#3117)
better check for mxfp8 cuda kernel presence (#2933)

Summary: Short term fix for #2932. If torchao was built without CUDA 10.0 (such as in our CI), this ensures that:

a. only callsites which actually use the mxfp8 dim1 kernel see the error message; using NVFP4 no longer hits this error.
b. the error message points to the github issue for more info on the workaround (for now, build from source).

Test Plan:
1. prevent the mxfp8 kernel from being built by hardcoding https://github.com/pytorch/ao/blob/85557135c93d3429320a4a360c0ee9cb49f84a00/setup.py#L641
2. build torchao from source, verify `torchao/prototype` does not have any `.so` files
3. run nvfp4 tests, verify they now pass: `pytest test/prototype/mx_formats/test_nvfp4_tensor.py -s -x`
4. run mxfp8 linear tests, verify the new error message is displayed for dim1 kernel tests: `pytest test/prototype/mx_formats/test_mx_linear.py -s -x -k test_linear_eager_vs_hp`
5. undo the change in (1), rebuild torchao, verify all mx tests pass: `pytest test/prototype/mx_formats/ -s -x`
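A rough sketch of the pattern described above (defer the hard failure to the callsite that actually needs the dim1 kernel, and point at the tracking issue); the module and function names here are hypothetical, not the actual torchao internals:

```python
# Hypothetical sketch: only error out when the mxfp8 dim1 kernel is actually used.
try:
    from torchao.prototype import mxfp8_cuda  # compiled extension; name is illustrative
except ImportError:
    mxfp8_cuda = None


def mxfp8_quantize_dim1(x):
    """Quantize along dim1 with the CUDA kernel, raising a pointed error if it is missing."""
    if mxfp8_cuda is None:
        raise RuntimeError(
            "mxfp8 dim1 CUDA kernel is not available in this build of torchao; "
            "see https://github.com/pytorch/ao/issues/2932 for details "
            "(workaround for now: build torchao from source)."
        )
    return mxfp8_cuda.quantize_dim1(x)  # hypothetical kernel entry point
```

With this structure, NVFP4 codepaths that never call the dim1 kernel never see the error.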
another fix for torch version (#2922)

Summary: `torch.__version__` has unexpected behavior when comparing to a string:

```python
(Pdb) torch.__version__
'2.9.0.dev20250902+cu128'
(Pdb) str(torch.__version__)
'2.9.0.dev20250902+cu128'
(Pdb) '2.9.0.dev20250902+cu128' >= '2.9'
True
(Pdb) torch.__version__ >= '2.9'
False
(Pdb) torch.__version__ >= (2, 9)
False
(Pdb) torch.__version__ >= (2, 9, 0)
False
(Pdb) str(torch.__version__) >= '2.9'
True
```

To unblock the release, for now compare `str(torch.__version__)` to force the behavior we want for `torch==2.9.x`. We should make this more robust; saving that for a future PR.

Test Plan:
```
1. install torchao 0.13.0 from pip
2. install torch 2.8.0, verify torchao imports without errors
3. install torch 2.9.x, verify torchao imports correctly and a warning for skipping c++ kernel import is shown
```
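The discrepancy above is consistent with `torch.__version__` comparing with version-aware semantics (where a `.dev` build sorts before the final `2.9` release), while the plain string comparison is lexicographic. A more robust alternative, sketched here as an illustration rather than the fix that landed in #2922, is to parse the version explicitly (assumes the `packaging` library is installed):

```python
from packaging.version import Version

import torch

# Drop any local build suffix such as "+cu128" before parsing.
base = str(torch.__version__).split("+")[0]

# Version("2.9.0.dev20250902").release == (2, 9, 0), so a dev build of 2.9.0
# still counts as "at least 2.9" here, unlike `torch.__version__ >= "2.9"` above.
is_at_least_2_9 = Version(base).release >= (2, 9)
print(is_at_least_2_9)
```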
Exclude libcuda.so from auditwheel repair (#2927)

* Exclude libcuda.so from auditwheel repair
* Update build_wheels_linux.yml
* Update post_build_script.sh