
PRAGATI

Paper Review and Guidance for Academic Target Identification


Awards

Winner of Outstanding Solution Implementation 🎉 at the Ready Tensor Agentic AI Innovation Challenge 2025

About

PRAGATI automates the academic paper review process using an agentic AI workflow. It analyzes research papers, checks facts, provides critical feedback, and recommends suitable conferences for submission. This tool helps researchers improve their papers before submission, saving time and increasing chances of acceptance.

Note: This is version 2.0 of PRAGATI. To find the earlier work, please visit the "PRAGATI-legacy" branch.

Installation and Running Instructions

  1. Clone the repository:

    git clone https://github.com/ShuvraneelMitra/PRAGATI.git
  2. Navigate into the project folder:

    cd PRAGATI
  3. Create a virtual environment:

    python -m venv venv
  4. Activate the virtual environment:

    • Linux/macOS:
      source venv/bin/activate
    • Windows (PowerShell):
      .\venv\Scripts\activate.ps1
    • Windows (Command Prompt):
      .\venv\Scripts\activate.bat
  5. Install dependencies:

    pip install -r requirements.txt
  6. Install an older version of timm (this step may not be needed with the latest release):

    pip install timm==0.5.4 -t old_pkgs/timm0.5.4
  7. Run the application:

    uvicorn ui:app --reload --port 8080
  8. Open your browser and navigate to:

    http://127.0.0.1:8080
    

Abstract

The academic peer-review process can be time-consuming and inconsistent. Feedback often takes weeks to arrive, high submission volumes mean reviewers cannot always give each paper thorough attention, and inexperienced reviewers may struggle to evaluate research quality effectively. PRAGATI addresses these issues by giving researchers automated feedback on potential improvements and overlooked pitfalls before formal submission.

Methodology

PRAGATI consists of five key components:

1. The Parser

  • Utilizes the fitz library (PyMuPDF) to extract text while preserving document structure
  • Analyzes spatial distribution of text blocks to handle multi-column layouts
  • Employs TableTransformerForObjectDetection to extract tabular data
  • Integrates LatexOCR to convert mathematical expressions into LaTeX code
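
A minimal sketch of the column-aware text-extraction step is shown below. The two-column heuristic and the function name are illustrative assumptions, not PRAGATI's exact implementation, and table and formula extraction are omitted.

    # Sketch of column-aware text extraction with PyMuPDF (fitz).
    # The two-column heuristic and function name are illustrative only.
    import fitz  # PyMuPDF

    def extract_text_in_reading_order(pdf_path: str) -> str:
        """Extract text blocks and reorder them for a possible two-column layout."""
        doc = fitz.open(pdf_path)
        pages_text = []
        for page in doc:
            mid_x = page.rect.width / 2
            blocks = page.get_text("blocks")  # (x0, y0, x1, y1, text, block_no, block_type)
            text_blocks = [b for b in blocks if b[6] == 0]  # block_type 0 = text, 1 = image
            # Read the left column top to bottom, then the right column.
            left = sorted((b for b in text_blocks if b[0] < mid_x), key=lambda b: b[1])
            right = sorted((b for b in text_blocks if b[0] >= mid_x), key=lambda b: b[1])
            pages_text.append("\n".join(b[4].strip() for b in left + right))
        doc.close()
        return "\n\n".join(pages_text)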

2. The Fact Checker

  • Checks facts using Tavily web search and PDF document analysis
  • Utilizes resources such as arXiv and Google Scholar to verify claims
  • Scores factual accuracy on a 5-point Likert scale (1 = false, 5 = true)
  • Considers text chunks with average scores above 3 as factually correct
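
The aggregation rule can be expressed as a short sketch; the per-claim score layout is an assumption made for illustration.

    # Sketch of the Likert-scale aggregation rule described above.
    from statistics import mean

    def chunk_is_factually_correct(claim_scores: list[int], threshold: float = 3.0) -> bool:
        """Treat a text chunk as factually correct when the mean of its per-claim
        Likert scores (1 = false ... 5 = true) exceeds the threshold."""
        if not claim_scores:
            return False  # no verifiable claims, so nothing to confirm
        return mean(claim_scores) > threshold

    print(chunk_is_factually_correct([4, 5, 2]))  # mean 3.67 -> True
    print(chunk_is_factually_correct([2, 3, 3]))  # mean 2.67 -> False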

3. The Critic

  • Mimics human paper reviewers by asking questions about different sections
  • Uses specialized personas for different areas of evaluation
  • Iteratively processes questions and answers from the paper
  • Generates actionable suggestions for authors
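
The review loop can be sketched roughly as below; the personas, prompts, and the ask_llm helper are hypothetical placeholders for the actual agent implementation.

    # Illustrative persona-driven review loop; personas, prompts and ask_llm()
    # are hypothetical stand-ins, not PRAGATI's actual agents.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        persona: str
        question: str
        answer: str

    PERSONAS = {
        "methodology reviewer": "Are the experiments reproducible and the baselines fair?",
        "clarity reviewer": "Are the contributions stated clearly and precisely?",
        "related-work reviewer": "Is the paper positioned against recent literature?",
    }

    def ask_llm(prompt: str) -> str:
        # Placeholder for the real model call.
        return f"[model answer to: {prompt[:60]}...]"

    def critique(paper_text: str) -> list[Suggestion]:
        suggestions = []
        for persona, question in PERSONAS.items():
            prompt = (f"You are a {persona}. Using the paper below, answer: {question}\n\n"
                      f"{paper_text[:4000]}")
            suggestions.append(Suggestion(persona, question, ask_llm(prompt)))
        return suggestions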

4. The Scorer

  • Evaluates publishability based on fact-checker scores and critic assessments
  • Determines whether a paper is ready for submission
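
One way to combine the two signals is sketched below; the weighting and threshold are illustrative assumptions rather than PRAGATI's actual scoring rule.

    # Sketch of a publishability decision combining fact-checker and critic scores.
    # The weights and threshold are illustrative, not tuned values.
    def publishability_score(fact_scores: list[float], critic_scores: list[float],
                             fact_weight: float = 0.5) -> float:
        """Blend the mean fact-checker and critic scores (both on 1-5) into [0, 1]."""
        fact_avg = sum(fact_scores) / len(fact_scores)
        critic_avg = sum(critic_scores) / len(critic_scores)
        blended = fact_weight * fact_avg + (1 - fact_weight) * critic_avg
        return (blended - 1) / 4  # rescale 1-5 to 0-1

    def is_publishable(fact_scores, critic_scores, threshold: float = 0.6) -> bool:
        return publishability_score(fact_scores, critic_scores) >= threshold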

5. Conference Recommender

  • Analyzes responses from the critic to match papers with appropriate conferences
  • Provides targeted venue recommendations
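
A toy sketch of venue matching is shown below; the keyword sets are invented for illustration, and the real recommender reasons over the critic's responses rather than raw keywords.

    # Toy venue matcher based on keyword overlap; keyword sets are illustrative.
    VENUE_KEYWORDS = {
        "CVPR": {"vision", "image", "detection", "segmentation"},
        "ICLR": {"representation", "deep", "learning", "optimization"},
        "KDD": {"data", "mining", "graph", "recommendation"},
        "NeurIPS": {"neural", "theory", "reinforcement", "probabilistic"},
        "TMLR": {"machine", "benchmark", "analysis", "survey"},
    }

    def recommend_venue(paper_terms: set[str]) -> str:
        overlaps = {venue: len(paper_terms & kws) for venue, kws in VENUE_KEYWORDS.items()}
        return max(overlaps, key=overlaps.get)

    print(recommend_venue({"image", "detection", "segmentation"}))  # -> CVPR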

Results

  • Dataset: 150 research papers (both publishable and non-publishable)
  • Publication Venues: CVPR, ICLR, KDD, TMLR, and NeurIPS
  • Accuracy: 89% for conference recommendations

Limitations

  • AI may struggle with nuanced aspects of research quality, such as novelty and theoretical impact
  • The system cannot evaluate radical new ideas at the same level as human reviewers

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contact

For questions or feedback, please reach out to the maintainers.
