简体中文 | English
This project provides an intelligent agent for legal applications, offering knowledge-base Q&A built on Faiss vector retrieval and LangChain.
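To give a feel for the moving parts, here is a minimal sketch of the retrieval flow described above — Ollama embeddings feeding a Faiss index through LangChain. The module paths, model names, and sample texts are assumptions based on current `langchain-community` releases, not this repository's actual code:

```python
# Minimal sketch of Faiss-backed retrieval via LangChain and Ollama.
# Assumes `pip install langchain-community faiss-cpu` and a running
# local Ollama server; this is NOT the repository's implementation.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = OllamaEmbeddings(
    model="nomic-embed-text",           # the embedding model pulled via ollama
    base_url="http://localhost:11434",  # default local Ollama endpoint
)

# Index a couple of placeholder legal snippets (stand-ins for the real knowledge base).
docs = [
    "A contract requires offer, acceptance, and consideration.",
    "Statutes of limitations restrict how long a claim may be brought.",
]
index = FAISS.from_texts(docs, embeddings)

# Retrieve the passage most similar to a user question.
hits = index.similarity_search("What makes a contract valid?", k=1)
print(hits[0].page_content)
```

To set the project up: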
- Install Ollama locally as per the instructions on the official website (https://ollama.com/).
- Pull the desired LLM model and embedding model with the following commands in your terminal:

  ```bash
  ollama pull <llm_model_name>        # e.g., ollama pull llama3
  ollama pull <embedding_model_name>  # e.g., ollama pull nomic-embed-text
  ```

- Navigate to the `config` directory of the project and edit the configuration file (e.g., `config.yaml` or `config.json`). Update the following settings:
  - Set the API base URL to the local Ollama server (default: `http://localhost:11434`)
  - Set the LLM model name to the one you pulled (e.g., `"llama3"`)
  - Set the embedding model name to the one you pulled (e.g., `"nomic-embed-text"`)
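For example, assuming a YAML layout, the relevant entries might look like the sketch below. The key names here are illustrative guesses — check the actual file shipped in the repository's `config` directory:

```yaml
# Hypothetical config.yaml layout; the key names are assumptions,
# only the values reflect the settings described above.
llm:
  base_url: "http://localhost:11434"   # local Ollama server
  model: "llama3"                      # the LLM you pulled
embedding:
  base_url: "http://localhost:11434"
  model: "nomic-embed-text"            # the embedding model you pulled
```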
- Install uv:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Install Git LFS and pull the large files:

  ```bash
  git lfs install
  git lfs pull
  ```

- Run the program:

  ```bash
  uv run run.py  # uv resolves and installs the project's dependencies on first run
  ```
- Run the frontend:

  ```bash
  # install nvm
  curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
  # restart your shell so nvm is loaded, then install Node
  # (nvm install without a version reads .nvmrc if the repo provides one)
  nvm install
  cd web/frontend
  npm install
  npm run dev
  ```

Alternatively, set up the backend with pip instead of uv:

- Install Python 3.12
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the program:

  ```bash
  python run.py
  ```
- Start the frontend as described above.