core: Rename cv::float16_t to cv::hfloat
#25217
Conversation
#endif
};

typedef fp16_t float16_t;
I believe we need to drop this line when porting to 5.x.
@fengyuentau, it looks good. I have been thinking about those names today, and actually since those types are becoming the most important data types nowadays, maybe it would be nice to have shorter or at least more convenient to type names, e.g. 'hfloat' and 'bfloat' (same length as fp16_t and bf16_t, just easier to type). What do you think, @mshabunin, @opencv-alalek, @asmorkalov?
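A quick illustrative sketch of how the proposed spellings relate to the current ones; the struct bodies below are stand-ins, not OpenCV's actual class declarations:

```cpp
#include <cstdint>

struct fp16_t { std::uint16_t w; };  // stand-in for the current IEEE 754 half type
struct bf16_t { std::uint16_t w; };  // stand-in for the current bfloat16 type

typedef fp16_t hfloat;  // proposed name: same length as fp16_t, easier to type
typedef bf16_t bfloat;  // proposed name: same length as bf16_t

int main()
{
    hfloat h{0};
    bfloat b{0};
    (void)h; (void)b;
    return 0;
}
```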
Advantage is they look closer to `float`. By the way, @vpisarev, `float16_t::fromBits` is used in modules/core/src/persistence.cpp (line 1812 at 625eeba). I don't have a good idea how to work around it if we are still going to remove `fromBits`.
@fengyuentau, float16_t::fromBits should be replaced with an external function.
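A minimal sketch of such an external function, assuming the renamed type is a trivially copyable 16-bit storage type; the name `hfloatFromBits` follows the commit notes later in this thread, and the real OpenCV implementation may differ:

```cpp
#include <cstdint>
#include <cstring>

// Stand-in for cv::hfloat: a trivially copyable 16-bit storage type.
struct hfloat { std::uint16_t w; };

// External replacement for the former float16_t::fromBits member.
inline hfloat hfloatFromBits(std::uint16_t bits)
{
    hfloat h;
    std::memcpy(&h, &bits, sizeof(h));  // reinterpret the raw binary16 bit pattern
    return h;
}
```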
Changed the title from "cv::float16_t to cv::fp16_t" to "cv::float16_t to cv::hfloat".
This PR is mostly done. Legacy CI showed there is an error.
I'm going to merge this PR. @opencv-alalek, @asmorkalov, any objections?
#endif
}

typedef hfloat float16_t;
That should be available for EXTERNAL users only.
We should not use it anywhere else in OpenCV code, because these hits still lead to conflicts with future C++ compilers:
$ grep -Rn float16_t ./
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:497: typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:498: const float16_t* a = (const float16_t*)_a;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:499: const float16_t* b = (const float16_t*)_b;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:500: float16_t* c = (float16_t*)_c;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:641: typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:643: const float16_t* a = (const float16_t*)_a;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:644: const float16_t* b = (const float16_t*)_b;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:646: const float16_t bias = (float16_t)_bias;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.cpp:88: esz = sizeof(float16_t);
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:438: typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:439: const float16_t* inwptr = (const float16_t*)_inwptr;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:440: const float16_t* wptr = (const float16_t*)_wptr;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:441: float16_t* outbuf = (float16_t*)_outbuf;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:594: typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:595: float16_t* outptr = (float16_t*)_outptr;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:760: typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:761: const float16_t* inptr = (const float16_t*)_inptr;
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:65: std::vector<float16_t> weightsBuf_FP16;
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:66: std::vector<float16_t> weightsWinoBuf_FP16;
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:67: float16_t* getWeightsFP16();
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:68: float16_t* getWeightsWinoFP16();
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:29:static inline void _cvt32f16f(const float* src, float16_t* dst, int len)
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:63: dst[j] = float16_t(src[j]);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:77:float16_t* FastConv::getWeightsFP16()
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:82:float16_t* FastConv::getWeightsWinoFP16()
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:274: float16_t* wptrWino_FP16 = nullptr;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:330: float16_t* wptr = wptrWino_FP16 + (g*Kg_nblocks + ki) * Cg *CONV_WINO_KBLOCK*CONV_WINO_AREA +
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:338: wptr[j] = (float16_t)kernelTm[i * CONV_WINO_ATOM_F16 + j];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:374: float16_t* weightsPtr_FP16 = nullptr;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:403: float16_t* packed_wptr = weightsPtr_FP16 + DkHkWkCg * (startK + g * Kg_aligned_FP16);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:414: packed_wptr[k] = (float16_t)(*wptr);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:416: packed_wptr[k] = (float16_t)0.f;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:476: float16_t* inpbufC_FP16 = (float16_t *)inpbufC;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:477: if (esz == sizeof(float16_t))
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:574: float16_t* inpbufC_FP16 = (float16_t *)inpbufC;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:575: if (esz == sizeof(float16_t))
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:582: inpbufC_FP16[k*CONV_NR_FP16] = (float16_t)v0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:583: inpbufC_FP16[k*CONV_NR_FP16+1] = (float16_t)v1;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:639: _cvt32f16f(inptr, (float16_t *)inpbuf, CONV_NR);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:653: _cvt32f16f(inptr, (float16_t *)inpbuf, slice_len);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:715: float16_t* inpbufC = (float16_t *)inpbuf + s0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:719: inpbufC[w*CONV_NR] = (float16_t)inptrInC[imgofs];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:776: float16_t* inpbufC = (float16_t *)inpbuf + s0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:783: inpbufC[(h*Wk + w)*CONV_NR] = (float16_t)inptrInC[imgofs];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:849: float16_t* inpbufC = (float16_t* )inpbuf + s0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:858: inpbufC[((d*Hk + h)*Wk + w)*CONV_NR] = (float16_t)inptrInC[imgofs];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:900: float16_t * inpbuf_ki_FP16 = (float16_t *)inpbuf + k * CONV_NR * Cg + i;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1064: inpbuf_ki_FP16[0] = (float16_t)(*inptr_ki);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1080: inpbuf_ki_FP16[0] = (float16_t)0.f;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1268: esz = sizeof(float16_t);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1522: float16_t* cptr_f16 = (float16_t*)cbuf_task + stripe*CONV_NR;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1558: const float16_t *cptr_fp16 = (const float16_t *)cbuf_task;
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1637: AutoBuffer<float16_t, 16> aligned_val;
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1640: float16_t* bufPtr = aligned_val.data();
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1642: float16_t *fp16Ptr = (float16_t *)field.data();
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1654: AutoBuffer<float16_t, 16> aligned_val;
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1655: if (!isAligned<sizeof(float16_t)>(val))
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1658: aligned_val.allocate(divUp(sz, sizeof(float16_t)));
The float16_t used in convolution-related code should be valid; it is exactly the float16_t from arm_neon.h. I am not sure about the float16_t in onnx_graph_simplifier.cpp though.
This may conflict with compilers too: typedef __fp16 float16_t;
We need to use a #if !defined(__OPENCV_BUILD) && !defined(OPENCV_HIDE_FLOAT16_T) compilation guard.
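A sketch of how that guard could wrap the compatibility typedef; the macro names come from the comment above, and the placement within the header is illustrative:

```cpp
// Hide the legacy alias from OpenCV's own build and from users who opt out,
// so internal code cannot rely on it and future std::float16_t stays safe.
#if !defined(__OPENCV_BUILD) && !defined(OPENCV_HIDE_FLOAT16_T)
typedef hfloat float16_t;  // backward-compatible alias for external users only
#endif
```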
Rename remaining float16_t for future proof #25387

Resolves comment: #25217 (comment). `std::float16_t` and `std::bfloat16_t` are introduced since C++23: https://en.cppreference.com/w/cpp/types/floating-point.

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable. Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
* rename cv::float16_t to cv::fp16_t
* add typedef fp16_t float16_t
* remove zero(), bits() from fp16_t class
* fp16_t -> hfloat
* remove cv::float16_t::fromBits; add hfloatFromBits
* undo changes in conv_winograd_f63.simd.hpp and conv_block.simd.hpp
* undo some changes in dnn
Merge with opencv/opencv_contrib#3697
Partially resolves #25210
Checklist:
- Rename cv::float16_t to cv::fp16_t.
- Add typedef fp16_t float16_t for backward compatibility.
- Remove class fp16_t methods except constructor and operator float (see the usage sketch at the end of this description).

### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.
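A hedged usage sketch of the renamed type, assuming an OpenCV build that already includes this PR, where cv::hfloat keeps the float constructor and operator float described in the checklist above:

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::hfloat h(3.5f);     // construct from a 32-bit float
    float back = (float)h;  // convert back via operator float
    std::cout << back << std::endl;  // 3.5 is exactly representable in binary16
    return 0;
}
```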