English | 简体中文
OpenOCR is an open-source toolkit developed by the OCR team from FVL Lab, Fudan University, under the guidance of Prof. Yu-Gang Jiang and Prof. Zhineng Chen. It focuses on 「General-OCR」 tasks, including Text Detection and Recognition, Formula and Table Recognition, as well as Document Parsing and Understanding. The toolkit integrates a unified training and evaluation benchmark, commercial-grade OCR and Document Parsing systems, and faithful reproductions of the core implementations from a wide range of academic papers.
OpenOCR aims to build a comprehensive open-source ecosystem for General-OCR, bridging academic research and real-world applications, and fostering the collaborative development and widespread deployment of OCR technologies across both research frontiers and industrial scenarios. We welcome researchers, developers, and industry partners to explore the toolkit and share feedback.
🔥OpenDoc-0.1B: Ultra-Lightweight Document Parsing System with 0.1B Parameters
- ⚡[Quick Start] [Local Demo]
- An ultra-lightweight document parsing system with only 0.1B parameters.
- Two-stage pipeline (sketched after this list):
  - Layout analysis via PP-DocLayoutV2.
  - Unified recognition of text, formulas, and tables with the in-house model UniRec-0.1B.
- The original UniRec-0.1B supported only text and formula recognition; for OpenDoc-0.1B it was rebuilt to also cover tables, enabling unified recognition of all three.
- Supports document parsing for Chinese and English.
- Achieves 90.57% on OmniDocBench (v1.5), outperforming many document parsing models based on multimodal large language models.
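To make the two-stage design concrete, here is a minimal Python sketch of how layout analysis and unified recognition could be chained. The `layout_model`/`unirec_model` objects and their `detect`/`recognize` methods, as well as the `Region` type, are hypothetical placeholders for illustration, not the actual OpenDoc-0.1B API; see the Quick Start for the real entry points.

```python
# Hypothetical sketch of the OpenDoc-0.1B two-stage flow; class/method names
# (Region, detect, recognize) are illustrative placeholders, not the real API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    category: str                      # e.g. "text", "formula", "table"
    bbox: Tuple[int, int, int, int]    # (x1, y1, x2, y2) in image coordinates

def parse_document(image, layout_model, unirec_model) -> List[Tuple[str, str]]:
    """Stage 1: layout analysis (PP-DocLayoutV2-style); Stage 2: per-region
    unified recognition of text, formulas, and tables (UniRec-0.1B-style)."""
    regions: List[Region] = layout_model.detect(image)
    parsed = []
    for region in regions:
        crop = image.crop(region.bbox)             # assumes a PIL-style image
        markup = unirec_model.recognize(crop)      # plain text / LaTeX / table markup
        parsed.append((region.category, markup))
    return parsed                                  # reading-order list, ready for export
```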
🔥UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters
- [Doc] [Local Demo] [Hugging Face Model] [ModelScope Model]
- Recognizes plain text (words, lines, paragraphs), formulas (single-line and multi-line), and mixed text-and-formula content (an illustrative post-processing example follows this list).
- 0.1B parameters.
- Trained from scratch on 40M samples without pre-training.
- Supports recognition of both Chinese and English text and formulas.
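As a small illustration of what mixed text-and-formula output means downstream, the helper below splits a recognized string into text and formula segments. It is not part of UniRec-0.1B and assumes formulas are delimited with `$...$`; the model's actual output format is described in the Doc.

```python
import re

# Illustrative helper (not part of UniRec-0.1B): split a mixed text-and-formula
# recognition result into segments, assuming formulas are delimited by $...$.
def split_text_and_formulas(result: str):
    segments = []
    for piece in re.split(r"(\$[^$]+\$)", result):
        if not piece:
            continue
        kind = "formula" if piece.startswith("$") and piece.endswith("$") else "text"
        segments.append((kind, piece))
    return segments

if __name__ == "__main__":
    mixed = "The loss is $L = -\\sum_i y_i \\log p_i$ averaged over the batch."
    print(split_text_and_formulas(mixed))
    # [('text', 'The loss is '), ('formula', '$L = -\\sum_i y_i \\log p_i$'),
    #  ('text', ' averaged over the batch.')]
```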
🔥OpenOCR: A general OCR system with accuracy and efficiency
- ⚡[Quick Start] [Local Demo] [Model] [PaddleOCR Implementation]
- Introduction
- A practical OCR system built on SVTRv2 (a minimal usage sketch follows this list).
- Outperforms the PP-OCRv4 baseline by 4.5% in accuracy on the OCR competition leaderboard, while maintaining comparable inference speed.
- Supports Chinese and English text detection and recognition.
- Provides server model and mobile model.
- Supports fine-tuning on custom datasets: Fine-tuning Det, Fine-tuning Rec.
- ONNX model export for wider compatibility.
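A minimal usage sketch, assuming the pip package `openocr-python` exposes an `OpenOCR` end-to-end engine (detection + recognition) as outlined in the Quick Start; verify the exact class name and return values against the Quick Start doc before relying on it.

```python
# Minimal usage sketch; assumes `pip install openocr-python` provides an
# OpenOCR end-to-end engine class. Names and return values follow the Quick
# Start description and should be checked against the current doc.
from openocr import OpenOCR

engine = OpenOCR()                               # default detection + recognition models
result, elapse = engine('path/to/image_or_dir')  # an image file or a directory of images
print(result)                                    # per-image boxes with recognized text
print('elapsed:', elapse)
```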
🔥SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition (ICCV 2025)
- [Doc] [Model] [Datasets] [Config, Training and Inference] [Benchmark]
- Introduction
- A unified training and evaluation benchmark (built on top of Union14M) for Scene Text Recognition; an illustrative evaluation-metric sketch follows this list.
- Supports 24 Scene Text Recognition methods trained from scratch on the large-scale real dataset Union14M-L-Filter, and will continue to add the latest methods.
- Improves accuracy by 20-30% compared to models trained on synthetic datasets.
- Towards arbitrary-shaped text recognition and language modeling with a single visual model.
- Surpasses attention-based encoder-decoder methods across challenging scenarios in terms of both accuracy and speed.
- Get started with training a SOTA Scene Text Recognition model from scratch.
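As an illustration of how results on such benchmarks are typically compared, here is a small, self-contained word-accuracy function under the common STR protocol (lowercase, alphanumerics only). This is a sketch for orientation, not the benchmark's own evaluation code; the exact normalization rules are defined in the Benchmark doc.

```python
import re

# Illustrative word-accuracy metric under the common STR protocol
# (lowercase, keep alphanumerics only). Not the benchmark's official
# evaluation code; see the Benchmark doc for the exact rules.
def normalize(text: str) -> str:
    return re.sub(r'[^0-9a-z]', '', text.lower())

def word_accuracy(predictions, ground_truths) -> float:
    assert len(predictions) == len(ground_truths)
    correct = sum(normalize(p) == normalize(g)
                  for p, g in zip(predictions, ground_truths))
    return correct / max(len(ground_truths), 1)

if __name__ == '__main__':
    preds = ['Hello', 'wor1d', 'OpenOCR']
    gts   = ['hello', 'world', 'OpenOCR!']
    print(f'word accuracy: {word_accuracy(preds, gts):.2%}')  # 66.67%
```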
- UniRec-0.1B (Yongkun Du, Zhineng Chen, Yazhen Xie, Weikang Bai, Hao Feng, Wei Shi, Yuchen Su, Can Huang, Yu-Gang Jiang. UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters, Preprint. Doc, Paper)
- MDiff4STR (Yongkun Du, Miaomiao Zhao, Songlin Fan, Zhineng Chen*, Caiyan Jia, Yu-Gang Jiang. MDiff4STR: Mask Diffusion Model for Scene Text Recognition, AAAI 2026 Oral. Doc, Paper)
- CMER (Weikang Bai, Yongkun Du, Yuchen Su, Yazhen Xie, Zhineng Chen*. Complex Mathematical Expression Recognition: Benchmark, Large-Scale Dataset and Strong Baseline, AAAI 2026. Paper, Code is coming soon.)
- TextSSR (Xingsong Ye, Yongkun Du, Yunbo Tao, Zhineng Chen*. TextSSR: Diffusion-based Data Synthesis for Scene Text Recognition, ICCV 2025. Paper, Code)
- SVTRv2 (Yongkun Du, Zhineng Chen*, Hongtao Xie, Caiyan Jia, Yu-Gang Jiang. SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition, ICCV 2025. Doc, Paper)
- IGTR (Yongkun Du, Zhineng Chen*, Yuchen Su, Caiyan Jia, Yu-Gang Jiang. Instruction-Guided Scene Text Recognition, TPAMI 2025. Doc, Paper)
- CPPD (Yongkun Du, Zhineng Chen*, Caiyan Jia, Xiaoting Yin, Chenxia Li, Yuning Du, Yu-Gang Jiang. Context Perception Parallel Decoder for Scene Text Recognition, TPAMI 2025. PaddleOCR Doc, Paper)
- SMTR&FocalSVTR (Yongkun Du, Zhineng Chen*, Caiyan Jia, Xieping Gao, Yu-Gang Jiang. Out of Length Text Recognition with Sub-String Matching, AAAI 2025. Doc, Paper)
- DPTR (Shuai Zhao, Yongkun Du, Zhineng Chen*, Yu-Gang Jiang. Decoder Pre-Training with only Text for Scene Text Recognition, ACM MM 2024. Paper)
- CDistNet (Tianlun Zheng, Zhineng Chen*, Shancheng Fang, Hongtao Xie, Yu-Gang Jiang. CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition, IJCV 2024. Paper)
- MRN (Tianlun Zheng, Zhineng Chen*, Bingchen Huang, Wei Zhang, Yu-Gang Jiang. MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition, ICCV 2023. Paper, Code)
- TPS++ (Tianlun Zheng, Zhineng Chen*, Jinfeng Bai, Hongtao Xie, Yu-Gang Jiang. TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition, IJCAI 2023. Paper, Code)
- SVTR (Yongkun Du, Zhineng Chen*, Caiyan Jia, Xiaoting Yin, Tianlun Zheng, Chenxia Li, Yuning Du, Yu-Gang Jiang. SVTR: Scene Text Recognition with a Single Visual Model, IJCAI 2022 (Long). PaddleOCR Doc, Paper)
- NRTR (Fenfen Sheng, Zhineng Chen, Bo Xu. NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition, ICDAR 2019. Paper)
- 2025.12.25: 🔥 Releasing OpenDoc-0.1B: Ultra-Lightweight Document Parsing System with 0.1B Parameters
- 2025.11.08: Our paper MDiff4STR is accepted by AAAI 2026 (Oral). Accessible in Doc.
- 2025.11.08: Our paper CMER is accepted by AAAI 2026. Code is coming soon.
- 2025.08.20: 🔥 Releasing UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters
- 2025.07.10: Our paper SVTRv2 is accepted by ICCV 2025. Accessible in Doc.
- 2025.07.10: Our paper TextSSR is accepted by ICCV 2025. Accessible in Code.
- 2025.03.24: 🔥 Releasing the feature of fine-tuning OpenOCR on a custom dataset: Fine-tuning Det, Fine-tuning Rec
- 2025.03.23: 🔥 Releasing the feature of ONNX model export for wider compatibility.
- 2025.02.22: Our paper CPPD is accepted by TPAMI. Accessible in Doc and PaddleOCR Doc.
- 2024.12.31: Our paper IGTR is accepted by TPAMI. Accessible in Doc.
- 2024.12.16: Our paper SMTR is accepted by AAAI 2025. Accessible in Doc.
- 2024.12.03: The pre-training code for DPTR is merged.
- 2024.11.23: 🔥 Release notes:
  - OpenOCR: A general OCR system with accuracy and efficiency
  - SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition
    - [Paper] [Doc] [Model] [Datasets] [Config, Training and Inference] [Benchmark]
    - Introduction
    - Get started with training a SOTA Scene Text Recognition model from scratch.
| Method | Venue | Training | Evaluation | Contributor |
|---|---|---|---|---|
| CRNN | TPAMI 2016 | ✅ | ✅ | |
| ASTER | TPAMI 2019 | ✅ | ✅ | pretto0 |
| NRTR | ICDAR 2019 | ✅ | ✅ | |
| SAR | AAAI 2019 | ✅ | ✅ | pretto0 |
| MORAN | PR 2019 | ✅ | ✅ | |
| DAN | AAAI 2020 | ✅ | ✅ | |
| RobustScanner | ECCV 2020 | ✅ | ✅ | pretto0 |
| AutoSTR | ECCV 2020 | ✅ | ✅ | |
| SRN | CVPR 2020 | ✅ | ✅ | pretto0 |
| SEED | CVPR 2020 | ✅ | ✅ | |
| ABINet | CVPR 2021 | ✅ | ✅ | YesianRohn |
| VisionLAN | ICCV 2021 | ✅ | ✅ | YesianRohn |
| PIMNet | ACM MM 2021 | TODO | | |
| SVTR | IJCAI 2022 | ✅ | ✅ | |
| PARSeq | ECCV 2022 | ✅ | ✅ | |
| MATRN | ECCV 2022 | ✅ | ✅ | |
| MGP-STR | ECCV 2022 | ✅ | ✅ | |
| LPV | IJCAI 2023 | ✅ | ✅ | |
| MAERec(Union14M) | ICCV 2023 | ✅ | ✅ | |
| LISTER | ICCV 2023 | ✅ | ✅ | |
| CDistNet | IJCV 2024 | ✅ | ✅ | YesianRohn |
| BUSNet | AAAI 2024 | ✅ | ✅ | |
| DCTC | AAAI 2024 | TODO | | |
| CAM | PR 2024 | ✅ | ✅ | |
| OTE | CVPR 2024 | ✅ | ✅ | |
| CFF | IJCAI 2024 | TODO | | |
| DPTR | ACM MM 2024 | | | fd-zs |
| VIPTR | ACM CIKM 2024 | TODO | | |
| IGTR | TPAMI 2025 | ✅ | ✅ | |
| SMTR | AAAI 2025 | ✅ | ✅ | |
| CPPD | TPAMI 2025 | ✅ | ✅ | |
| FocalSVTR-CTC | AAAI 2025 | ✅ | ✅ | |
| SVTRv2 | ICCV 2025 | ✅ | ✅ | |
| ResNet+Trans-CTC | | ✅ | ✅ | |
| ViT-CTC | | ✅ | ✅ | |
| MDiff4STR | AAAI 2026 Oral | ✅ | ✅ | |
If you find our method useful for your research, please cite:
@inproceedings{Du2025SVTRv2,
title={SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition},
author={Yongkun Du and Zhineng Chen and Hongtao Xie and Caiyan Jia and Yu-Gang Jiang},
booktitle={ICCV},
year={2025},
pages={20147-20156}
}
@article{du2025unirec,
title={UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters},
author={Yongkun Du and Zhineng Chen and Yazhen Xie and Weikang Bai and Hao Feng and Wei Shi and Yuchen Su and Can Huang and Yu-Gang Jiang},
journal={arXiv preprint arXiv:2512.21095},
year={2025}
}

This codebase is built on top of PaddleOCR, PytorchOCR, and MMOCR. Thanks for their awesome work!