[FlexAttention] export fails to trace with functorch #153063
Labels
module: flex attention
module: functorch
Pertaining to torch.func or pytorch/functorch
oncall: pt2
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
It seems that at the pre-dispatch level, we are not properly peeking into the FakeTensor inside a BatchedTensor, so export fails to trace flex_attention under functorch transforms.
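A minimal sketch of a repro, assuming the failure shows up when torch.export traces torch.func.vmap over flex_attention in non-strict (pre-dispatch) mode; the module, shapes, and call here are illustrative and not taken from the original report:

```python
# Hypothetical repro (not from the original report): export a module that
# applies torch.func.vmap over flex_attention. In non-strict export, tracing
# happens at the pre-dispatch level, where the BatchedTensor wrapper created
# by vmap presumably is not unwrapped down to the underlying FakeTensor.
import torch
from torch.nn.attention.flex_attention import flex_attention


class VmappedFlexAttention(torch.nn.Module):
    def forward(self, q, k, v):
        # vmap adds an extra leading dim, so flex_attention sees BatchedTensors.
        return torch.func.vmap(flex_attention)(q, k, v)


# Shapes: (vmap_batch, batch, heads, seq_len, head_dim); each vmapped slice
# is the 4-D (batch, heads, seq_len, head_dim) input flex_attention expects.
q = k = v = torch.randn(2, 1, 4, 64, 16)

# Per the report, this is expected to fail while tracing.
ep = torch.export.export(VmappedFlexAttention(), (q, k, v), strict=False)
```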
Versions
main
cc @chauhang @penguinwu @zou3519 @Chillee @samdow @kshitij12345 @ydwu4 @drisspg @yanboliang @BoyuanFeng