
Add support for torch.cuda.FloatTensor() #152208


Open
jijiew wants to merge 1 commit into main from feature_c

Conversation

@jijiew (Contributor) commented Apr 25, 2025

pytorch-bot bot commented Apr 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152208

Note: Links to docs will display an error until the docs builds have been completed.

❌ 8 New Failures, 1 Unrelated Failure

As of commit 24a6a20 with merge base 99ae7d4:

NEW FAILURES - The following jobs have failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

linux-foundation-easycla bot commented Apr 25, 2025

CLA status: Missing ID (CLA Not Signed)

@malfet (Contributor) left a comment:

Looks like the issue number is wrong (this PR has nothing to do with the C++14 compilation issue, which was fixed a while back).
Also, torch.cuda.FloatTensor has been deprecated for a while, so it may not be worth adding support for it.
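
For context on the deprecation (a minimal sketch, not part of this PR): torch.cuda.FloatTensor is one of the legacy type-constructor aliases, and the usual replacement is a factory function with an explicit dtype and device.

    import torch

    # Legacy constructor (deprecated): uninitialized float32 tensor on the GPU.
    # Both of the lines below require a CUDA-enabled build and device.
    x_legacy = torch.cuda.FloatTensor(3, 4)

    # Recommended equivalents: factory functions with explicit dtype/device.
    x_empty = torch.empty(3, 4, dtype=torch.float32, device="cuda")
    x_data = torch.tensor([1.0, 2.0], dtype=torch.float32, device="cuda")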

@zou3519 (Contributor) commented Apr 28, 2025

> Looks like the issue number is wrong (this PR has nothing to do with the C++14 compilation issue, which was fixed a while back). Also, torch.cuda.FloatTensor has been deprecated for a while, so it may not be worth adding support for it.

@malfet this is a starter task: Dynamo already supports torch.FloatTensor, and torch.FloatTensor usage is still very common in open-source code. Agreed that the issue number looks wrong.
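
To illustrate why tracing support matters (a hypothetical sketch, not a test from this PR): the legacy constructor still shows up inside functions that users want to run under torch.compile, so Dynamo needs to handle the call instead of falling back on it.

    import torch

    # Hypothetical example of the pattern this PR targets: a legacy
    # torch.cuda.FloatTensor call inside a function run under torch.compile.
    @torch.compile
    def add_ones(x):
        bias = torch.cuda.FloatTensor(3, 4).fill_(1.0)  # legacy constructor
        return x + bias

    if torch.cuda.is_available():
        print(add_ones(torch.randn(3, 4, device="cuda")))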

Comment on lines 173 to 175
GemmConfig(128, 128, 64, 5, 8),
GemmConfig(256, 128, 64, 4, 8),
GemmConfig(128, 128, 64, 5, 4),
A reviewer (Contributor) commented:

What's going on here?

@jijiew (Contributor, Author) replied:

removed

@malfet (Contributor) left a comment:

Your change deletes template_heuristics.py, which it should not. Also, please sign the CLA.

jijiew force-pushed the feature_c branch 2 times, most recently from 941f07d to bc241a4 on April 29, 2025 06:31
Summary:

Add support for torch.cuda.FloatTensor
Development

Successfully merging this pull request may close these issues.

[dynamo] Add support for torch.cuda.FloatTensor()
3 participants