Winner of Outstanding Solution Implementation 🎉 at the Ready Tensor Agentic AI Innovation Challenge 2025
PRAGATI automates the academic paper review process using an agentic AI workflow. It analyzes research papers, checks facts, provides critical feedback, and recommends suitable conferences for submission. This tool helps researchers improve their papers before submission, saving time and increasing chances of acceptance.
Note: This is version 2.0 of PRAGATI. To find the earlier work, please visit the "PRAGATI-legacy" branch.
To set up and run PRAGATI locally:

- Clone the repository:
  ```bash
  git clone https://github.com/ShuvraneelMitra/PRAGATI.git
  ```
- Navigate into the project folder:
  ```bash
  cd PRAGATI
  ```
- Create a virtual environment:
  ```bash
  python -m venv venv
  ```
- Activate the virtual environment:
  - Linux/macOS:
    ```bash
    source venv/bin/activate
    ```
  - Windows (PowerShell):
    ```powershell
    .\venv\Scripts\activate.ps1
    ```
  - Windows (Command Prompt):
    ```bat
    .\venv\Scripts\activate.bat
    ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Install the pinned older version of timm (may not be needed for the latest version):
  ```bash
  pip install timm==0.5.4 -t old_pkgs/timm0.5.4
  ```
- Run the application:
  ```bash
  uvicorn ui:app --reload --port 8080
  ```
- Open your browser and navigate to http://127.0.0.1:8080
The academic peer-review process can be time-consuming and inconsistent. Feedback turnaround often takes weeks, reviewers may not give papers thorough attention due to high submission volumes, and inexperienced reviewers can struggle to evaluate research quality effectively. PRAGATI addresses these issues by giving researchers automated feedback on potential improvements and overlooked pitfalls before formal submission.
PRAGATI consists of five key components:

**1. Paper Parser**
- Utilizes the `fitz` library (PyMuPDF) to extract text while preserving document structure
- Analyzes the spatial distribution of text blocks to handle multi-column layouts
- Employs `TableTransformerForObjectDetection` to extract tabular data
- Integrates LatexOCR to convert mathematical expressions into LaTeX code
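To make the multi-column handling concrete, here is a minimal sketch of spatially ordered text extraction with `fitz`; the midpoint column-split heuristic and the function name are illustrative assumptions, not PRAGATI's actual parser.

```python
import fitz  # PyMuPDF

def extract_text_in_reading_order(pdf_path: str) -> str:
    """Read each page's left column top-to-bottom, then the right column.
    Assumed heuristic: split columns at the horizontal midpoint of the page."""
    doc = fitz.open(pdf_path)
    pages = []
    for page in doc:
        # Each block is (x0, y0, x1, y1, text, block_no, block_type)
        blocks = page.get_text("blocks")
        mid = page.rect.width / 2
        left = sorted((b for b in blocks if b[0] < mid), key=lambda b: b[1])
        right = sorted((b for b in blocks if b[0] >= mid), key=lambda b: b[1])
        # block_type 0 = text; skip image blocks
        pages.append("\n".join(b[4].strip() for b in left + right if b[6] == 0))
    return "\n\n".join(pages)
```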
**2. Fact Checker**
- Checks facts using `Tavily` web search and PDF document analysis
- Utilizes resources like `Arxiv` and `GScholar` to verify claims
- Scores factual accuracy on a 5-point Likert scale (1 = false, 5 = true)
- Considers text chunks with average scores above 3 as factually correct
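A minimal sketch of the fact-checking loop, assuming the `tavily-python` client; the `score_claim` judge below stands in for what would be an LLM rating in practice, and all names are illustrative rather than PRAGATI's internals.

```python
from statistics import mean
from tavily import TavilyClient

client = TavilyClient(api_key="YOUR_TAVILY_API_KEY")

def score_claim(claim: str, evidence: list[dict]) -> int:
    """Placeholder judge: a real system would have an LLM rate the claim
    against the evidence on the 1-5 Likert scale (1 = false, 5 = true)."""
    return 1 if not evidence else 3

def chunk_is_factual(claims: list[str]) -> bool:
    """A text chunk counts as factually correct when its average score exceeds 3."""
    scores = []
    for claim in claims:
        evidence = client.search(claim).get("results", [])  # web evidence via Tavily
        scores.append(score_claim(claim, evidence))
    return bool(scores) and mean(scores) > 3
```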
**3. Critic**
- Mimics human paper reviewers by asking questions about different sections
- Uses specialized personas for different areas of evaluation
- Iteratively processes questions and answers from the paper
- Generates actionable suggestions for authors
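The question-and-answer loop can be pictured as below; the persona definitions and the stubbed `ask_llm` call are illustrative assumptions, not PRAGATI's actual prompts.

```python
# Hypothetical personas, each focused on one area of evaluation.
PERSONAS = {
    "methodology reviewer": "probe the soundness of the experimental design",
    "clarity reviewer": "probe whether the claims and writing are clear",
    "novelty reviewer": "probe how the work differs from prior art",
}

def ask_llm(prompt: str) -> str:
    """Stub: a real implementation would call an LLM here."""
    return "..."

def review(paper_sections: dict[str, str]) -> list[str]:
    """Each persona questions each section, then the Q&A pair is turned
    into an actionable suggestion for the authors."""
    suggestions = []
    for persona, focus in PERSONAS.items():
        for name, text in paper_sections.items():
            question = ask_llm(f"As a {persona}, {focus}. Section '{name}':\n{text}")
            answer = ask_llm(f"Answer from the paper only.\nQ: {question}\n{text}")
            suggestions.append(ask_llm(f"Rewrite as a suggestion:\nQ: {question}\nA: {answer}"))
    return suggestions
```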
**4. Publishability Evaluator**
- Evaluates publishability based on fact-checker scores and critic assessments
- Determines whether a paper is ready for submission
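As a sketch, the decision reduces to combining the two upstream signals; both thresholds below are assumptions for illustration.

```python
def is_publishable(avg_fact_score: float, critic_approval: float) -> bool:
    """avg_fact_score: mean Likert rating (1-5) from the fact checker.
    critic_approval: assumed fraction (0-1) of critic checks the paper passed."""
    # A paper is flagged submission-ready only if both signals pass.
    return avg_fact_score > 3.0 and critic_approval >= 0.5
```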
**5. Conference Recommender**
- Analyzes the critic's responses to match papers with appropriate conferences
- Provides targeted venue recommendations
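One simple way to picture the matching step is similarity between the critic's responses and short venue profiles; this bag-of-words sketch and the one-line profiles are illustrative assumptions only.

```python
import math
from collections import Counter

# Hypothetical one-line venue profiles; a real system would use richer ones.
VENUES = {
    "CVPR": "computer vision image recognition detection segmentation",
    "ICLR": "representation learning deep neural networks optimization",
    "KDD": "data mining knowledge discovery graphs recommendation systems",
    "TMLR": "machine learning methods theory rigorous empirical studies",
    "NeurIPS": "machine learning neural information processing theory applications",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(critic_responses: str) -> str:
    """Return the venue whose profile best matches the critic's responses."""
    paper = Counter(critic_responses.lower().split())
    return max(VENUES, key=lambda v: cosine(paper, Counter(VENUES[v].split())))
```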
PRAGATI was evaluated as follows:
- Dataset: 150 research papers (both publishable and non-publishable)
- Publication venues: CVPR, ICLR, KDD, TMLR, and NeurIPS
- Accuracy: 89% for conference recommendations
Known limitations:
- AI may struggle with nuanced aspects of research quality, such as novelty and theoretical impact
- The system cannot evaluate radically new ideas at the same level as human reviewers
This project is licensed under the MIT License - see the LICENSE file for details.
For questions or feedback, please DM us.