
Conversation

@fengyuentau
Member

@fengyuentau fengyuentau commented Mar 15, 2024

Merge with opencv/opencv_contrib#3697

Partially resolves #25210

Checklist:

  • Rename cv::float16_t to cv::fp16_t.
  • Add typedef fp16_t float16_t for backward compatibility.
  • Remove class fp16_t methods except the constructor and operator float (a sketch of the resulting type is shown below).
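
For illustration, a minimal sketch of what the slimmed-down type could look like after these changes. The conversion helpers f32_to_f16_bits / f16_bits_to_f32 are simplified stand-ins invented for this sketch (truncating rounding, no subnormal support), not OpenCV's actual hardware-accelerated conversion:

```cpp
#include <cstdint>
#include <cstring>

namespace cv {

// Simplified software conversions used only by this sketch.
static inline uint16_t f32_to_f16_bits(float x)
{
    uint32_t u; std::memcpy(&u, &x, sizeof(u));
    uint32_t sign = (u >> 16) & 0x8000u;
    int32_t  e    = (int32_t)((u >> 23) & 0xFFu) - 127 + 15;
    uint32_t m    = (u >> 13) & 0x3FFu;
    if (e <= 0)  return (uint16_t)sign;              // underflow: flush to signed zero
    if (e >= 31) return (uint16_t)(sign | 0x7C00u);  // overflow / inf / NaN -> inf
    return (uint16_t)(sign | ((uint32_t)e << 10) | m);
}

static inline float f16_bits_to_f32(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t e    = (h >> 10) & 0x1Fu;
    uint32_t m    = h & 0x3FFu;
    uint32_t u;
    if (e == 0)       u = sign;                                // zero / subnormal -> signed zero
    else if (e == 31) u = sign | 0x7F800000u | (m << 13);      // inf / NaN
    else              u = sign | ((e - 15 + 127) << 23) | (m << 13);
    float f; std::memcpy(&f, &u, sizeof(f));
    return f;
}

// Only a constructor from float and a conversion back remain on the class.
class fp16_t
{
public:
    fp16_t() : w(0) {}
    explicit fp16_t(float x) : w(f32_to_f16_bits(x)) {}
    operator float() const { return f16_bits_to_f32(w); }
protected:
    uint16_t w;
};

typedef fp16_t float16_t; // backward-compatibility alias

} // namespace cv
```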

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
  • The PR is proposed to the proper branch
  • There is a reference to the original bug report and related work
  • There is accuracy test, performance test and test data in opencv_extra repository, if applicable
    Patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

@fengyuentau fengyuentau added the category: core and port to 5.x is needed labels Mar 15, 2024
@fengyuentau fengyuentau added this to the 4.10.0 milestone Mar 15, 2024
#endif
};

typedef fp16_t float16_t;
Member Author

I believe we need to drop this line when porting to 5.x.

@vpisarev
Contributor

@fengyuentau, it looks good. I have been thinking about those names today, and since those types are becoming the most important data types nowadays, maybe it would be nice to have shorter, or at least more convenient-to-type, names, e.g. 'hfloat' and 'bfloat' (same length as fp16_t and bf16_t, just easier to type). What do you think, @mshabunin, @opencv-alalek, @asmorkalov?

@fengyuentau
Member Author

e.g. 'hfloat' and 'bfloat'

The advantage is that they look closer to float. I am okay with any option that does not conflict.


By the way, @vpisarev, the fromBits function is used in the following line:

fval = (float) float16_t::fromBits(base64decoder.getUInt16());

I don't have a good idea how to work around it if we are still going to remove fromBits from the data type class.

@vpisarev
Contributor

@fengyuentau, float16_t::fromBits should be replaced with an external function hfloatFromBits(ushort x) or something like that.
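
A minimal sketch of that suggestion; the hfloat struct here is a toy stand-in holding only the 16-bit storage word, not the real cv::hfloat:

```cpp
#include <cstdint>

// Toy stand-in for cv::hfloat: a single 16-bit storage word
// (the conversion operator is omitted here for brevity).
struct hfloat { uint16_t w = 0; };

// Builds a half-precision value directly from its raw bit pattern,
// replacing the removed member function float16_t::fromBits().
inline hfloat hfloatFromBits(uint16_t bits)
{
    hfloat h;
    h.w = bits;  // no numeric conversion, just the raw pattern
    return h;
}

// The call site quoted above would then read along the lines of:
//   fval = (float)hfloatFromBits(base64decoder.getUInt16());
```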

@fengyuentau fengyuentau changed the title core: Rename cv::float16_t to cv::fp16_t core: Rename cv::float16_t to cv::hfloat Mar 18, 2024
@fengyuentau
Member Author

This PR is mostly done. Legacy CI reported a check error (https://pullrequest.opencv.org/buildbot/builders/precommit_linux64/builds/106635), but it should be fine since it is expected.

@vpisarev
Contributor

I'm going to merge this PR. @opencv-alalek, @asmorkalov, any objections?

@vpisarev vpisarev mentioned this pull request Mar 20, 2024
@vpisarev vpisarev self-requested a review March 20, 2024 12:04
@vpisarev vpisarev merged commit 3afe8dd into opencv:4.x Mar 21, 2024
@fengyuentau fengyuentau deleted the float16_datatype_renaming branch March 26, 2024 07:52
@asmorkalov asmorkalov mentioned this pull request Apr 1, 2024
#endif
}

typedef hfloat float16_t;
Contributor

That should be available for EXTERNAL users only.
We should not use that in OpenCV code anywhere else.

Because these hits still lead to conflicts with future C++ compilers:

$ grep -Rn float16_t ./
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:497:    typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:498:    const float16_t* a = (const float16_t*)_a;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:499:    const float16_t* b = (const float16_t*)_b;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:500:    float16_t* c = (float16_t*)_c;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:641:    typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:643:    const float16_t* a = (const float16_t*)_a;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:644:    const float16_t* b = (const float16_t*)_b;
./modules/dnn/src/layers/cpu_kernels/conv_block.simd.hpp:646:    const float16_t bias = (float16_t)_bias;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.cpp:88:        esz = sizeof(float16_t);
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:438:    typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:439:    const float16_t* inwptr = (const float16_t*)_inwptr;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:440:    const float16_t* wptr = (const float16_t*)_wptr;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:441:    float16_t* outbuf = (float16_t*)_outbuf;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:594:    typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:595:    float16_t* outptr = (float16_t*)_outptr;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:760:    typedef __fp16 float16_t;
./modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.simd.hpp:761:    const float16_t* inptr = (const float16_t*)_inptr;
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:65:    std::vector<float16_t> weightsBuf_FP16;
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:66:    std::vector<float16_t> weightsWinoBuf_FP16;
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:67:    float16_t* getWeightsFP16();
./modules/dnn/src/layers/cpu_kernels/convolution.hpp:68:    float16_t* getWeightsWinoFP16();
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:29:static inline void _cvt32f16f(const float* src, float16_t* dst, int len)
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:63:        dst[j] = float16_t(src[j]);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:77:float16_t* FastConv::getWeightsFP16()
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:82:float16_t* FastConv::getWeightsWinoFP16()
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:274:        float16_t* wptrWino_FP16 = nullptr;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:330:                    float16_t* wptr = wptrWino_FP16 + (g*Kg_nblocks + ki) * Cg *CONV_WINO_KBLOCK*CONV_WINO_AREA +
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:338:                            wptr[j] = (float16_t)kernelTm[i * CONV_WINO_ATOM_F16 + j];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:374:        float16_t* weightsPtr_FP16 = nullptr;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:403:                float16_t* packed_wptr = weightsPtr_FP16 + DkHkWkCg * (startK + g * Kg_aligned_FP16);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:414:                            packed_wptr[k] = (float16_t)(*wptr);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:416:                            packed_wptr[k] = (float16_t)0.f;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:476:    float16_t* inpbufC_FP16 = (float16_t *)inpbufC;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:477:    if (esz == sizeof(float16_t))
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:574:    float16_t* inpbufC_FP16 = (float16_t *)inpbufC;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:575:    if (esz == sizeof(float16_t))
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:582:            inpbufC_FP16[k*CONV_NR_FP16] = (float16_t)v0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:583:            inpbufC_FP16[k*CONV_NR_FP16+1] = (float16_t)v1;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:639:                        _cvt32f16f(inptr, (float16_t *)inpbuf, CONV_NR);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:653:                        _cvt32f16f(inptr, (float16_t *)inpbuf, slice_len);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:715:                                float16_t* inpbufC = (float16_t *)inpbuf + s0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:719:                                    inpbufC[w*CONV_NR] = (float16_t)inptrInC[imgofs];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:776:                                float16_t* inpbufC = (float16_t *)inpbuf + s0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:783:                                        inpbufC[(h*Wk + w)*CONV_NR] = (float16_t)inptrInC[imgofs];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:849:                                float16_t* inpbufC = (float16_t* )inpbuf + s0;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:858:                                            inpbufC[((d*Hk + h)*Wk + w)*CONV_NR] = (float16_t)inptrInC[imgofs];
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:900:                    float16_t * inpbuf_ki_FP16 = (float16_t *)inpbuf + k * CONV_NR * Cg + i;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1064:                                    inpbuf_ki_FP16[0] = (float16_t)(*inptr_ki);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1080:                                inpbuf_ki_FP16[0] = (float16_t)0.f;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1268:        esz = sizeof(float16_t);
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1522:                            float16_t* cptr_f16 = (float16_t*)cbuf_task + stripe*CONV_NR;
./modules/dnn/src/layers/cpu_kernels/convolution.cpp:1558:                    const float16_t *cptr_fp16 = (const float16_t *)cbuf_task;
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1637:            AutoBuffer<float16_t, 16> aligned_val;
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1640:            float16_t* bufPtr = aligned_val.data();
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1642:            float16_t *fp16Ptr = (float16_t *)field.data();
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1654:            AutoBuffer<float16_t, 16> aligned_val;
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1655:            if (!isAligned<sizeof(float16_t)>(val))
./modules/dnn/src/onnx/onnx_graph_simplifier.cpp:1658:                aligned_val.allocate(divUp(sz, sizeof(float16_t)));

Member Author

The float16_t used in the convolution-related code should be valid; it is exactly the float16_t from arm_neon.h. I am not sure about the float16_t in onnx_graph_simplifier.cpp though.
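
For context, a small standalone sketch of the kind of usage those convolution kernels depend on: with the Arm FP16 vector extension, float16_t comes from arm_neon.h (the __fp16-based type), so the file-local typedef __fp16 float16_t; simply mirrors it. The function and variable names below are made up for illustration:

```cpp
#if defined(__ARM_NEON) && defined(__ARM_FEATURE_FP16_VECTOR_ARITHMETIC)
#include <arm_neon.h>

// Multiply a half-precision buffer by a scalar, 8 lanes at a time.
static void scale_f16(const float16_t* src, float16_t* dst, int n, float16_t s)
{
    int i = 0;
    for (; i + 8 <= n; i += 8)
    {
        float16x8_t v = vld1q_f16(src + i);      // load 8 fp16 lanes
        vst1q_f16(dst + i, vmulq_n_f16(v, s));   // scale and store
    }
    for (; i < n; i++)                           // scalar tail
        dst[i] = (float16_t)((float)src[i] * (float)s);
}
#endif
```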

Contributor

This may conflict with compilers too: typedef __fp16 float16_t;

We need to use a #if !defined(__OPENCV_BUILD) && !defined(OPENCV_HIDE_FLOAT16_T) compilation guard; see the sketch below.
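
A minimal sketch of that guard around the legacy alias; whether the bfloat alias is wrapped the same way is an assumption here:

```cpp
// Sketch of the guard suggested above: the legacy aliases stay visible to
// external users only, and can be hidden entirely if they clash with a
// compiler-provided float16_t.
#if !defined(__OPENCV_BUILD) && !defined(OPENCV_HIDE_FLOAT16_T)
typedef hfloat  float16_t;
typedef bfloat  bfloat16_t;  // assumption: the bfloat alias gets the same guard
#endif
```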

asmorkalov pushed a commit that referenced this pull request Apr 11, 2024
Rename remaining float16_t for future proof #25387

Resolves comment: #25217 (comment).

`std::float16_t` and `std::bfloat16_t` were introduced in C++23: https://en.cppreference.com/w/cpp/types/floating-point.

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
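
To illustrate the clash that the C++23 note above refers to, here is a hypothetical translation unit (it requires a C++23 toolchain that actually provides std::float16_t); the cv::hfloat stub is only a stand-in:

```cpp
#include <stdfloat>  // C++23; std::float16_t exists only if the compiler supports it

namespace cv { struct hfloat { unsigned short w; }; typedef hfloat float16_t; }

using namespace std;
using namespace cv;

int main()
{
    // float16_t x{};     // error: ambiguous between std::float16_t and cv::float16_t
    cv::float16_t ok{};   // qualifying the name still works
    (void)ok;
    return 0;
}
```
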
klatism pushed a commit to klatism/opencv that referenced this pull request May 17, 2024
* rename cv::float16_t to cv::fp16_t

* add typedef fp16_t float16_t

* remove zero(), bits() from fp16_t class

* fp16_t -> hfloat

* remove cv::float16_t::fromBits; add hfloatFromBits

* undo changes in conv_winograd_f63.simd.hpp and conv_block.simd.hpp

* undo some changes in dnn
klatism pushed a commit to klatism/opencv that referenced this pull request May 17, 2024
savuor pushed a commit to savuor/opencv that referenced this pull request Nov 8, 2024
savuor pushed a commit to savuor/opencv that referenced this pull request Nov 8, 2024
savuor pushed a commit to savuor/opencv that referenced this pull request Nov 21, 2024
savuor pushed a commit to savuor/opencv that referenced this pull request Nov 21, 2024

Labels

category: core, port to 5.x is needed

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

Renaming cv::float16_t and cv::bfloat16_t on 5.x to avoid redefinition

3 participants