This repository contains machine learning models for Natural Language Inference (NLI), exported to ONNX and served through a Streamlit-based web application. The app provides an interactive interface for running the trained neural network models on premise–hypothesis pairs. For more information about the training process, please check the nli.ipynb file in the training folder.
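As a rough illustration of the ONNX deployment path, the sketch below loads an exported model with ONNX Runtime and classifies a single premise–hypothesis pair. The file name `model.onnx`, the input/output names, and the `encode` helper are placeholders rather than details taken from this repo; the real preprocessing depends on the tokenization used in nli.ipynb.

```python
# Minimal sketch of ONNX Runtime inference for an NLI model.
# Assumptions (not taken from this repo): the exported file is "model.onnx",
# it accepts two integer token-ID tensors named "premise" and "hypothesis",
# and it returns one logit per class in the order below.
import numpy as np
import onnxruntime as ort

LABELS = ["entailment", "contradiction", "neutral"]

session = ort.InferenceSession("model.onnx")

def encode(text: str, max_len: int = 32) -> np.ndarray:
    """Placeholder tokenizer: replace with the vocabulary/tokenizer used in training."""
    ids = [hash(tok) % 20000 for tok in text.lower().split()][:max_len]
    ids += [0] * (max_len - len(ids))       # pad to a fixed length
    return np.array([ids], dtype=np.int64)  # shape: (batch=1, max_len)

def predict(premise: str, hypothesis: str) -> str:
    inputs = {"premise": encode(premise), "hypothesis": encode(hypothesis)}
    logits = session.run(None, inputs)[0]
    return LABELS[int(np.argmax(logits))]

print(predict("A man is playing a guitar on stage.", "A man is performing music."))
```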
If you encounter the message "This app has gone to sleep due to inactivity", click the "Yes, get this app back up!" button to wake the app back up.
If the demo page is not working, you can fork or clone this repository and run the application locally by following these steps:
- Clone the repository:
  ```bash
  git clone https://github.com/verneylmavt/st-nli.git
  cd st-nli
  ```
- Install the required dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Run the Streamlit app:
  ```bash
  streamlit run app.py
  ```
Alternatively, you can run `jupyter notebook demo.ipynb` for a minimal interface to quickly test the model (implemented with ipywidgets).
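The sketch below shows roughly what such an ipywidgets interface looks like; the `predict` stub here stands in for the actual inference code in demo.ipynb and is not taken from it.

```python
# Sketch of a minimal ipywidgets front-end for NLI (run inside a Jupyter notebook).
import ipywidgets as widgets
from IPython.display import display

def predict(premise: str, hypothesis: str) -> str:
    # Stub: substitute the model inference used in demo.ipynb (e.g. the ONNX sketch above).
    return "neutral"

premise_box = widgets.Text(description="Premise:")
hypothesis_box = widgets.Text(description="Hypothesis:")
button = widgets.Button(description="Classify")
output = widgets.Output()

def on_click(_):
    with output:
        output.clear_output()
        print(predict(premise_box.value, hypothesis_box.value))  # entailment / contradiction / neutral

button.on_click(on_click)
display(premise_box, hypothesis_box, button, output)
```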
I acknowledge the use of the Stanford Natural Language Inference (SNLI) Corpus provided by the Stanford Natural Language Processing Group. This dataset was instrumental in the research and development of this project.
- Dataset Name: Stanford Natural Language Inference (SNLI) Corpus
- Source: https://nlp.stanford.edu/projects/snli/
- License: Creative Commons Attribution-ShareAlike 4.0 International License
- Description: This corpus contains 570,000 human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI).
I deeply appreciate the efforts of the Stanford Natural Language Processing Group in making this dataset available.
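For a concrete sense of the label scheme, here are a few made-up premise–hypothesis pairs in the SNLI style (illustrative only, not taken from the corpus):

```python
# Illustrative (invented) NLI pairs showing the three SNLI label types.
examples = [
    ("A man is playing a guitar on stage.", "A man is performing music.", "entailment"),
    ("A man is playing a guitar on stage.", "A man is sleeping at home.", "contradiction"),
    ("A man is playing a guitar on stage.", "The concert is sold out.", "neutral"),
]
for premise, hypothesis, label in examples:
    print(f"{label:<13} | {premise} -> {hypothesis}")
```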