ChatWeb can crawl any webpage, extract text from PDF, DOCX, and TXT files, and generate an embedding-based summary. It can also answer questions based on the content of the text. It is implemented with the GPT-3.5 chat and embedding APIs together with a vector database.
The basic principle is similar to existing projects such as ChatPDF and automated customer-service AI.
The basic workflow is:

1. Crawl the web page
2. Extract the text content
3. Use GPT-3.5's embedding API to generate a vector for each paragraph
4. Calculate the similarity score between each paragraph's vector and the whole text's vector to generate a summary
5. Store the vector-to-text mapping in a vector database
6. Generate keywords from the user's input
7. Generate a vector from the keywords
8. Use the vector database to perform a nearest-neighbor search and return a list of the most similar text fragments
9. Use GPT-3.5's chat API with a prompt that answers the user's question based on the most similar fragments in the list

The idea is to extract the relevant content from a large body of text and then answer questions based only on that content, which achieves an effect similar to bypassing token limits.
As an improvement, vectors are generated from keywords extracted from the user's input rather than from the raw question itself, which increases the accuracy of retrieving relevant texts.
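The retrieval step above (steps 7–8) can be sketched as follows. This is a minimal illustration, not the project's actual code: the toy vectors stand in for real embedding-API output, and an in-memory search replaces the vector database.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, paragraph_vecs, paragraphs, k=2):
    """Return the k paragraphs whose vectors are most similar to the query vector."""
    scored = sorted(
        zip(paragraphs, paragraph_vecs),
        key=lambda pv: cosine_similarity(query_vec, pv[1]),
        reverse=True,
    )
    return [p for p, _ in scored[:k]]

# Toy 3-dimensional vectors standing in for real embedding-API output.
paragraphs = ["The old man fished alone.", "The sea was calm.", "He dreamed of lions."]
vecs = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.2], [0.1, 0.0, 1.0]]
query_vec = [0.9, 0.2, 0.1]  # vector for keywords extracted from the question

print(top_k(query_vec, vecs, paragraphs, k=2))
# → ['The old man fished alone.', 'The sea was calm.']
```

The fragments returned by `top_k` are then packed into the chat prompt, so the model only ever sees the few most relevant paragraphs instead of the whole document.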
- Install Python 3
- Download this repository: `git clone https://github.com/SkywalkerDarren/chatWeb.git`
- Navigate to the directory: `cd chatWeb`
- Copy `config.example.json` to `config.json`
- Edit `config.json` and set `open_ai_key` to your OpenAI API key
- Install dependencies: `pip3 install -r requirements.txt`
- Start the application: `python3 main.py`
If you prefer, you can also run this project using Docker:

- Build the container: `docker-compose build` (only needed once if you are not planning to contribute to this repo)
- Copy `config.example.json` to `config.json` and set the required values. The example config already works with Docker, so nothing there needs to change; if `OPEN_AI_KEY` is not in your environment variables, you can set it here, or later when you run the app.
- Run the container: `docker-compose up`
- Open the application in your browser: `http://localhost:7860`
- Edit `config.json` and set `language` to `English` or another language
- Edit `config.json` and set `mode` to `console`, `api`, or `webui` to choose the startup mode.
- In `console` mode, type `/help` to view available commands.
- In `api` mode, an API service is exposed to the outside world; `api_port` and `api_host` can be set in `config.json`.
- In `webui` mode, a web user interface is served; `webui_port` can be set in `config.json`, defaulting to `http://127.0.0.1:7860`.
- Edit `config.json` and set `use_stream` to `true`.
- Edit `config.json` and set `temperature` to a value between 0 and 1. The smaller the value, the more conservative and stable the response; the larger the value, the more daring the response, possibly resulting in "hallucinations."
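For illustration, the relevant fragment of `config.json` might look like this (the key value shown for `open_ai_key` is a placeholder; only `temperature` is the point here):

```json
{
    "open_ai_key": "your-api-key",
    "temperature": 0.2
}
```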
- Edit `config.json` and add `open_ai_proxy` with your proxy address, for example:

```json
"open_ai_proxy": {
    "http": "socks5://127.0.0.1:1081",
    "https": "socks5://127.0.0.1:1081"
}
```
- Edit `config.json` and set `use_postgres` to `true`.
- Install PostgreSQL.
- The default SQL address is `postgresql://localhost:5432/mydb`; you can also set it in `config.json`.
- Install the pgvector plugin.
Compile and install the extension (supports Postgres 11+):

```shell
git clone --branch v0.4.0 https://github.com/pgvector/pgvector.git
cd pgvector
make
make install # may need sudo
```

Then load it in the database you want to use it in:

```sql
CREATE EXTENSION vector;
```
- Install the dependency with pip: `pip3 install psycopg2`
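Once pgvector is loaded, storage and nearest-neighbor search follow the pattern below. The table and column names are illustrative (not necessarily what chatWeb creates), and the 1536 dimension assumes OpenAI's `text-embedding-ada-002` embeddings:

```sql
-- A table holding one embedding per text fragment
CREATE TABLE fragments (
    id bigserial PRIMARY KEY,
    content text,
    embedding vector(1536)
);

-- Return the 5 fragments closest to a query vector
-- (<-> is pgvector's Euclidean-distance operator)
SELECT content
FROM fragments
ORDER BY embedding <-> '[0.01, -0.02, ...]'
LIMIT 5;
```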
Please enter the link to the article or the file path of the PDF/TXT/DOCX document: https://gutenberg.ca/ebooks/hemingwaye-oldmanandthesea/hemingwaye-oldmanandthesea-00-e.html
Please wait for 10 seconds until the webpage finishes loading.
The article has been retrieved, and the number of text fragments is: 663
...
=====================================
Query fragments used tokens: 7219, cost: $0.0028876
Query fragments used tokens: 7250, cost: $0.0029000000000000002
Query fragments used tokens: 7188, cost: $0.0028752
Query fragments used tokens: 7177, cost: $0.0028708
Query fragments used tokens: 2378, cost: $0.0009512000000000001
Embeddings have been created with 663 embeddings, using 31212 tokens, costing $0.0124848
The embeddings have been saved.
=====================================
Please enter your query (/help to view commands):

- Support for PDF/TXT/DOCX files
- Support for in-memory storage without a database (faiss)
- Support for streaming responses
- Support for API
- Support for proxies
- Add Colab support
- Add language support
- Support for temperature
- Support for webui
- Other features that have not been thought of yet