"clamp_min_cpu" not implemented for 'ComplexDouble' #73915
Comparison ops are not implemented for complex numbers; this is expected behavior. See #36444 (comment).
In my opinion, this operation makes sense, and it could potentially be performed on the real and imaginary parts independently.
I wanted to use this operation for the backward of the LU decomposition, and in the end we had to work around it by implementing this operation manually. While the general rule of "we don't support comparison operations for complex numbers" is indeed the correct one, we can consider implementing some operations that are not so "mathematical" in appropriate ways, as long as we document this behaviour accordingly. @anjali411 wdyt?
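For reference, a minimal sketch of such a manual workaround — the `complex_clamp` helper below is hypothetical, not a PyTorch API — that clamps the real and imaginary parts independently:

```python
import torch

def complex_clamp(z, min=None, max=None):
    # Hypothetical helper, not part of PyTorch: clamp the real and
    # imaginary parts independently and reassemble the complex tensor.
    return torch.complex(z.real.clamp(min, max), z.imag.clamp(min, max))

z = torch.tensor([3 - 2j, -1 + 5j], dtype=torch.cdouble)
print(complex_clamp(z, min=0.0))  # tensor([3.+0.j, 0.+5.j], dtype=torch.complex128)
```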
Does numpy implement clamp in this way? :)
It seems that NumPy doesn't implement clamp for complex doubles.
What semantics do you expect from `clamp` on complex inputs?
Oh, I know, but does PyTorch have any non-linear activation function for complex tensors?
Any non-linear activation that does not depend on an ordering of the complex numbers should work.
How do you work around this issue? By splitting into real and imaginary parts?
Clamping the real and imaginary parts of a complex tensor separately is possible; another option for "clamping" complex numbers is to compute a complex number with the same angle/phase but scaled magnitude. In this case, however, a hypothetical complex `clamp` would have to choose one of these semantics explicitly.
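For illustration, a sketch of the magnitude-based interpretation (the bounds here are arbitrary): `torch.polar` rebuilds the complex numbers from the clamped magnitudes and the original phases.

```python
import torch

z = torch.tensor([3 + 4j, 0.01 + 0.02j], dtype=torch.cdouble)
# Clamp the magnitude into [0.5, 1.0] while keeping each element's phase.
clamped = torch.polar(z.abs().clamp(min=0.5, max=1.0), z.angle())
```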
@SantaTitular if you want to clamp the absolute value, then this is probably the recommended way.
If you want to clamp the real or imag value for some reason, then that can easily be done by accessing the `real` and `imag` attributes of the tensor.
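For example, a small sketch with made-up values using the `real`/`imag` views:

```python
import torch

z = torch.tensor([1 - 2j, -3 + 4j], dtype=torch.cdouble)

# Out of place: clamp only the real part, leaving the imaginary part as-is.
out = torch.complex(z.real.clamp(min=0.0), z.imag)

# In place: `z.real` is a view into `z`, so this modifies `z` directly.
z.real.clamp_(min=0.0)
```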
Thank you for the fast response @mruberry @anjali411. Indeed, both your ideas would work nicely (I'll use them as a reference for future applications). For my case, the application was a complex ReLU, as in "Deep Complex Networks".
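For context, the CReLU activation from that paper applies the real-valued ReLU separately to the real and imaginary parts; a minimal sketch (this `crelu` helper is not a PyTorch built-in):

```python
import torch
import torch.nn.functional as F

def crelu(z):
    # CReLU (Trabelsi et al., "Deep Complex Networks"): apply ReLU
    # to the real and imaginary parts separately.
    return torch.complex(F.relu(z.real), F.relu(z.imag))
```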
@anjali411 is definitely our expert on complex losses -- in fact she's preparing a blog post that includes a complex neural network now!
Damn, that's great news! I'm very interested in reading and commenting on it. @anjali411 please let me know when it is available (not sure if there is a way on GitHub to track/read about new blog posts).
🐛 Describe the bug
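The original reproduction snippet did not survive; a minimal example consistent with the error in the title would be something like:

```python
import torch

z = torch.randn(4, dtype=torch.cdouble)
torch.clamp(z, min=0)
# RuntimeError: "clamp_min_cpu" not implemented for 'ComplexDouble'
```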
Versions
PyTorch version: 1.10.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.2.1 (x86_64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.7 (default, Sep 16 2021, 08:50:36) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.2
[pip3] numpydoc==1.1.0
[pip3] torch==1.10.2
[pip3] torchaudio==0.10.2
[pip3] torchvision==0.11.3
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 h0a44026_0 pytorch
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-service 2.4.0 py39h9ed2024_0
[conda] mkl_fft 1.3.1 py39h4ab4a9b_0
[conda] mkl_random 1.2.2 py39hb2f4e1b_0
[conda] mypy_extensions 0.4.3 py39hecd8cb5_0
[conda] numpy 1.22.2 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch 1.10.2 py3.9_0 pytorch
[conda] torchaudio 0.10.2 py39_cpu pytorch
[conda] torchvision 0.11.3 py39_cpu pytorch
cc @ezyang @anjali411 @dylanbespalko @mruberry @lezcano @nikitaved