Releases: tiny-dnn/tiny-dnn
version 1.0.0 alpha3 - bug fix
We are announcing v1.0.0a3. Thanks to all the great contributors! This release includes the following changes from v1.0.0a2:
Bug fixes
- Convolutional layer with `padding::same` mode doesn't work #332, fixed by @nyanp (see the sketch after this list)
- Segmentation fault at MinGW build #203 #281, fixed by @nyanp
- NNPACK backend doesn't work #398 fixed by @azsane
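For context on the `padding::same` fix above, here is a minimal sketch of a convolution that uses same-padding. The constructor argument order and the activation-templated layer form are assumptions based on the tiny-dnn API of this era, not taken from these notes.

```cpp
#include "tiny_dnn.h"  // header name per these notes; adjust the include path to your setup

using namespace tiny_dnn;
using namespace tiny_dnn::activation;

int main() {
  network<sequential> net;

  // 32x32 single-channel input, 5x5 window, 6 output feature maps.
  // padding::same keeps the output spatial size at 32x32, the mode
  // whose convolution was fixed in #332.
  net << convolutional_layer<tan_h>(32, 32, 5, 1, 6, padding::same);

  return 0;
}
```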
Improvements
- Remove compiler warnings & improve CMakeLists #387 by @beru
- Improve memory consumption #410 by @beru
- Improve unit tests #408 by @Randl
- Subtle speed optimization #419 by @beru
- Refactor serialization type & size type #407, #422 by @Randl and @edgarriba
- Improve compilation time by splitting serialization/deserialization #421 by @beru
Docs & Comments
- Add comments to layer class #424 by @edgarriba
- Fix typo in comments #404 by @MikalaiDrabovich
Toward v1.0.0
The first version of `tensor` has been merged into tiny-dnn (#411 #417 #418 by @pansk, @Randl, and @edgarriba). It isn't integrated with tiny-dnn layers yet, but it is the starting point for the GPU version of tiny-dnn.
version 1.0.0 alpha2 - bug fix & minor changes
Bug Fixes
- Fix SEGV errors on AVX Optimized code (#353) by @nyanp
- Fix compiler error on msvc2013 (#320) by @nyanp
- Fix AVX backend slowdown on convolutional layer (#322) by @nyanp
- Fix error thrown when loading weights manually (#330) by @nyanp
- Fix returning infinity in tan_h (#347) by @nyanp
- Fix portability issues on serialization (#377) by @nyanp
Features
- Provides a compile option to disable serialization support to speed up compilation time (#316) by @nyanp
- Adds `set_trainable` method to freeze layers (#346) by @nyanp (see the sketch after this list)
- Adds power layer to Caffe converter by @goranrauker
- Adds double precision support (#332) by @nyanp
- Provides pad_type and non-square input to pooling layers (#374) by @nyanp
- Adds public predict method for vector of tensors (#396) by @reunanen
- Adds Auto engine selection (#339) by @edgarriba
- Adds basic image utilities and removes OpenCV dependencies (#337) by @nyanp
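A rough illustration of the layer-freezing feature listed above. This is only a sketch: the layer constructors, the sizes, and the boolean argument to `set_trainable` are assumptions for illustration, not taken from the release notes.

```cpp
#include "tiny_dnn.h"  // adjust the include path to your setup

using namespace tiny_dnn;
using namespace tiny_dnn::activation;

int main() {
  // Hypothetical two-layer model; the sizes are illustrative only.
  fully_connected_layer<tan_h> fc1(28 * 28, 100);
  fully_connected_layer<softmax> fc2(100, 10);

  // Freeze the first layer so its weights are not updated during training
  // (the feature added in #346).
  fc1.set_trainable(false);

  network<sequential> net;
  net << fc1 << fc2;

  return 0;
}
```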
Others
- Sync with latest NNPACK by @azsane
- Improves compiler warnings around type-cast by @pansk @reunanen @edgarriba
- Improves CMakeLists by @syoyo @edgarriba @beru
- Replaces picotest with gtest by @Randl
- Adds "layer catalogue" into official documentation by @nyanp
- Adds tests for GPU environment by @Randl
- Adds cpplint.py by @edgarriba @Randl
- Adds a document for building an iOS app by @wangyida
- Adds Coveralls checking by @edgarriba
- Adds CI builds for Win32 by @nyanp
- Updates & improves readme by @edgarriba @zhangqianhui
version 1.0.0 alpha1 - the first major version for tiny-dnn
🎉 This release contains a major refactoring & many bug fixes. Thanks a lot to all the great contributors! 🎉
This release is an alpha version. We need more help and feedback toward v1.0.0. Please submit bug reports via GitHub issues. Many thanks :)
Major updates
- Merge successful results from GSoC 2016 (these features are still experimental, so PRs and bug reports are very welcome!)
- Model serialization by @nyanp
Minor bug fix
Other
- A nice project logo by @KonfrareAlbert
- Launch official documents at http://tiny-dnn.readthedocs.io/
Some APIs have changed from v0.1.1 (a short migration sketch follows the list):
- Changed the namespace from `tiny_cnn` to `tiny_dnn`
- Changed the API header from `tiny_cnn.h` to `tiny_dnn.h`
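A minimal before/after sketch of the rename; the exact include path depends on how the headers are installed, so treat the paths here as placeholders.

```cpp
// Before (v0.1.1 and earlier):
//   #include "tiny_cnn.h"
//   using namespace tiny_cnn;

// After (v1.0.0 alpha):
#include "tiny_dnn.h"  // adjust the include path to your setup
using namespace tiny_dnn;

int main() {
  network<sequential> net;  // the same API, now under the tiny_dnn namespace
  return 0;
}
```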
Minor Fix & Add New Layers
Major refactoring and bug fixes
This release contains a major refactoring of the fundamental architecture of tiny-cnn and fixes many problems. We had the help of 20 committers for this release. Thanks!
- Now we can handle non-sequential models as `network<graph>` #108 #153 (a sketch appears at the end of this section)
- Catch up with the latest format of Caffe's proto #162
- Improve the default behaviour of re-init weight #136
- Add more tests and documents #73
- Remove OpenCV dependency from the MNIST example
Some APIs have changed from the previous release; see the change list.
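As a rough sketch of the non-sequential `network<graph>` usage mentioned above: the layer constructors, the `<<` wiring pattern, and the `construct_graph` call are assumptions modeled on the graph examples of this era, so adjust them to your version of the library.

```cpp
#include "tiny_dnn.h"  // adjust the include path to your setup

using namespace tiny_dnn;
using namespace tiny_dnn::activation;

int main() {
  // A small two-branch graph: one shared layer feeding two parallel heads.
  // Layer types and sizes are illustrative only.
  fully_connected_layer<tan_h> shared(4, 6);
  fully_connected_layer<tan_h> head1(6, 3);
  fully_connected_layer<tan_h> head2(6, 3);

  // Wire the branches (fan-out from the shared layer is assumed here).
  shared << head1;
  shared << head2;

  // Build a non-sequential model from the declared nodes (#108, #153).
  network<graph> net;
  construct_graph(net, {&shared}, {&head1, &head2});

  return 0;
}
```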