Misalignment with different shapes in F.linear with bf16 dtype #153033

Open
likelyzhao opened this issue May 7, 2025 · 2 comments
Labels
module: bfloat16, module: linear algebra, module: padding, needs reproduction, triaged

Comments


likelyzhao commented May 7, 2025

🐛 Describe the bug

When the same matrix multiplication is constructed at different sizes by zero-padding the weight and bias, F.linear does not always produce consistent outputs under bf16 precision: for some shapes the padded and unpadded results match exactly, for others they do not.
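
Mathematically, appending zero rows to the weight and zeros to the bias only appends zero output columns; the original columns should be unchanged. A minimal sketch of that identity, with small illustrative fp32 CPU shapes where exact equality is expected:

import torch
import torch.nn.functional as F

# Padding the weight with zero rows and the bias with zeros should only append
# zero columns to the output and leave the original columns untouched.
x = torch.rand(4, 8)
w = torch.rand(8, 8)
b = torch.rand(8)

w_pad = torch.cat((w, torch.zeros(2, 8)), dim=0)
b_pad = torch.cat((b, torch.zeros(2)), dim=0)

out = F.linear(x, w, b)
out_pad = F.linear(x, w_pad, b_pad)

print((out_pad[:, :8] - out).abs().max().item())  # expected: 0.0
print(out_pad[:, 8:].abs().max().item())          # expected: 0.0

The script below runs the same check on CUDA in bf16, sweeping the input height: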

import os

# Deterministic cuBLAS kernels require this to be set before the first CUDA matmul.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

import torch
import torch.nn.functional as F

torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False
torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(True)
torch.use_deterministic_algorithms(True)  # must be called, not assigned
torch.backends.cudnn.benchmark = False
torch.utils.deterministic.fill_uninitialized_memory = True

## reproduce with random tensors
for i in range(100):
    input_shape = [1661 + i, 3584]
    weight_shape = [3584, 3584]
    bias_shape = [3584]

    #device = torch.device("cpu")
    device = torch.device("cuda")
    dtype = torch.bfloat16
    torch.random.manual_seed(0)

    r_input = torch.rand(input_shape, device=device, dtype=dtype)
    r_weight_q = torch.rand(weight_shape, device=device, dtype=dtype)
    r_bias_q = torch.rand(bias_shape, device=device, dtype=dtype)


    # expand the weight and bias with zeros
    zeros_w = torch.zeros((1024, r_weight_q.shape[1]), device=device, dtype=dtype)
    zeros_b = torch.zeros((1024,), device=device, dtype=dtype)
    # concatenate along the row dimension (dim=0)
    weight_expand = torch.cat((r_weight_q, zeros_w), dim=0)
    bias_expand = torch.cat((r_bias_q, zeros_b), dim=0)

    output_ori = F.linear(r_input, r_weight_q, r_bias_q)
    output_expand = F.linear(r_input, weight_expand, bias_expand)

    split_dim = -1
    split_op_q, split_op_k, split_op_v = output_expand.split([weight_shape[split_dim], 512, 512], dim=split_dim)

    # difference between the unpadded output and the matching slice of the padded output
    diff = torch.sum(torch.abs(split_op_q.float() - output_ori.float()))
    print("diff split_op_q vs output_ori", diff.item())  # expected to be zero

    # the padded slices should be exactly zero (zero weight rows, zero bias)
    print("sum split_op_k", torch.sum(split_op_k).item())  # expected to be zero
    print("sum split_op_v", torch.sum(split_op_v).item())  # expected to be zero

Versions

Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090

Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1000.0000
BogoMIPS: 5400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.8 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 56 MiB (56 instances)
L3 cache: 77 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] numpy 2.2.5 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.7.0 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi

cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @lezcano

janeyx99 added the module: bfloat16, module: linear algebra, and module: padding labels on May 7, 2025
janeyx99 (Contributor) commented May 7, 2025

Help us understand the issue: what is the expected behavior? When I run the script locally, everything returns 0.

janeyx99 added the needs reproduction and triaged labels on May 7, 2025
likelyzhao (Author) commented

In our environment, the value printed for "diff split_op_q vs output_ori" is not 0.
