
POC: enable training at double precision #207


Closed
wants to merge 3 commits

Conversation

raimis
Collaborator

@raimis raimis commented Jul 24, 2023

No description provided.

@raimis raimis requested a review from RaulPPelaez July 24, 2023 14:07
@raimis raimis self-assigned this Jul 24, 2023
@@ -37,7 +37,7 @@ def get_args():
     parser.add_argument('--ema-alpha-neg-dy', type=float, default=1.0, help='The amount of influence of new losses on the exponential moving average of dy')
     parser.add_argument('--ngpus', type=int, default=-1, help='Number of GPUs, -1 use all available. Use CUDA_VISIBLE_DEVICES=1, to decide gpus')
     parser.add_argument('--num-nodes', type=int, default=1, help='Number of nodes')
-    parser.add_argument('--precision', type=int, default=32, choices=[16, 32], help='Floating point precision')
+    parser.add_argument('--precision', type=int, default=32, choices=[16, 32, 64], help='Floating point precision')
Collaborator

oh wow I totally missed this argument when I implemented #182
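For context, a minimal sketch (not this PR's actual wiring) of how a --precision 64 flag can reach a PyTorch Lightning Trainer; only the flag values come from the diff above, everything else is an assumption:

import torch
from pytorch_lightning import Trainer

# Hypothetical sketch: the parsed --precision value (16, 32 or 64) is passed
# straight to the Lightning Trainer, which handles the training precision.
precision = 64  # e.g. args.precision returned by get_args()

if precision == 64:
    # Assumption: also make freshly created tensors/parameters double precision.
    torch.set_default_dtype(torch.float64)

trainer = Trainer(precision=precision)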

-loss_y = loss_fn(y, batch.y)
+# y
+y_dtype = {16: torch.float16, 32: torch.float32, 64: torch.float64}[self.hparams.precision]
+loss_y = loss_fn(y, batch.y.to(y_dtype))
Collaborator

How come you need this here but not a few lines above for neg_dy?
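For illustration, a minimal sketch of casting both reference targets to the dtype selected by --precision before computing the losses, so y and neg_dy are treated the same way; the helper name compute_losses and its signature are hypothetical, while loss_fn, batch.y, batch.neg_dy, and self.hparams.precision follow the names used in this thread:

import torch

def compute_losses(self, y, neg_dy, batch, loss_fn):
    # Resolve the target dtype once from the --precision flag ...
    target_dtype = {16: torch.float16, 32: torch.float32, 64: torch.float64}[
        self.hparams.precision
    ]
    # ... and cast both reference targets consistently, so the energy (y) and
    # force (neg_dy) losses see tensors of the same floating point precision.
    loss_y = loss_fn(y, batch.y.to(target_dtype))
    loss_neg_dy = loss_fn(neg_dy, batch.neg_dy.to(target_dtype))
    return loss_y, loss_neg_dy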

+# Keep molecules with specific elements
+if self.atomic_numbers:
+    if not set(z.numpy()).issubset(self.atomic_numbers):
+        continue
Collaborator

This got mixed in from #206, right?

-y = pt.tensor(self.y_mm[idx], dtype=pt.float32).view(
-    1, 1
-)  # It would be better to use float64, but the trainer complaints
+y = pt.tensor(self.y_mm[idx], dtype=pt.float64).view(1, 1)
Collaborator

I would pass dtype as an argument to Ace here and store everything in the correct type. I do not see why we should store pos in float32 and y in float64.
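A minimal sketch of that suggestion, assuming a heavily simplified Ace dataset; the memory-mapped members pos_mm/y_mm and the constructor signature are assumptions, only the idea of a single dtype argument comes from the comment above:

import torch as pt
from torch.utils.data import Dataset

class Ace(Dataset):
    # Hypothetical: accept the storage dtype as a constructor argument so that
    # positions and energies are returned in the same precision.
    def __init__(self, pos_mm, y_mm, dtype=pt.float32):
        self.pos_mm = pos_mm
        self.y_mm = y_mm
        self.dtype = dtype

    def __len__(self):
        return len(self.y_mm)

    def __getitem__(self, idx):
        pos = pt.tensor(self.pos_mm[idx], dtype=self.dtype)
        y = pt.tensor(self.y_mm[idx], dtype=self.dtype).view(1, 1)
        return pos, y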

@RaulPPelaez
Collaborator

@raimis I believe #208 solves what you are trying to do here. I can train with float64 with the code in that PR.

@raimis
Collaborator Author

raimis commented Sep 5, 2023

Obsolete

@raimis raimis closed this Sep 5, 2023