Welcome! The GeuAI is a virtual assistant designed to answer your questions in a friendly, fun, and interactive way. Created to be your digital companion, it provides engaging responses to make your experience more enjoyable.
- Model Used: By default, we use the meta-llama/Llama-3.2-3B-Instruct model, but you can easily change it to any other model of your choice.
- Python 3.11.0 or higher
- A Hugging Face account to obtain your API token
- Clone the repository:

  ```
  git clone https://github.com/Codes-LUYI/GeuAI
  cd GeuAI
  ```

- Install the dependencies:

  ```
  pip install -r requirements.txt
  ```

- Configure the `.env` file:
  - Rename the `.env-example` file to `.env`:

    ```
    mv .env-example .env
    ```

  - Edit the `.env` file and add your Hugging Face token:

    ```
    HUGGINGFACEHUB_API_TOKEN=your_token_here
    ```
Note: You can get your Hugging Face API token at Hugging Face Tokens.
Note: If you're using the graphical interface (`python GeuAI.py`), you don't need the Hugging Face token; it works without it.
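At runtime, the token placed in `.env` ends up as an environment variable. A minimal sketch of reading it defensively, using only the standard library (the helper name `get_hf_token` is illustrative — the project itself uses `util.token_access.load_token`, shown later):

```python
import os

def get_hf_token():
    # Read the Hugging Face token exported via .env (or the shell).
    # Failing early here gives a clearer error than a 401 later.
    token = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
    if not token:
        raise RuntimeError(
            "HUGGINGFACEHUB_API_TOKEN is not set; check your .env file."
        )
    return token
```

Note that a plain `os.environ` lookup only works if the `.env` file was actually loaded into the environment (e.g. by `python-dotenv` or your shell) before the process started.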
To start the server, run:

```
python server.py
```

Open your browser and go to http://localhost:7860 to interact with the AI chatbot.
To start the server and open the graphical interface, simply run:

```
python GeuAI.py
```

This will launch the application with the virtual assistant interface, where you can interact using voice or buttons.
The virtual assistant uses speech synthesis to respond to the user. We recommend the Letícia voice, a high-quality Brazilian Portuguese voice, for the best experience. To set it up, follow these steps:
- Visit the Louderpages - Letícia website.
- Visit the RHVoice GitHub repository.
- Follow the instructions to configure the Letícia voice.
If you prefer, you can also use other speech synthesis options:
- Espeak: An open-source alternative.
- SAPI5 (Windows): The native speech synthesis API for Windows.
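Whichever engine you use, selecting the Letícia voice usually comes down to scanning the installed voices for a name match. A sketch of that logic (the `pick_voice` helper and the sample voice names are illustrative; in practice the list would come from an engine such as pyttsx3 via `engine.getProperty("voices")`):

```python
def pick_voice(voice_names, preferred="Letícia", fallback=None):
    # Return the first installed voice whose name contains the
    # preferred substring (case-insensitive); otherwise fall back.
    for name in voice_names:
        if preferred.lower() in name.lower():
            return name
    return fallback

# Example with made-up voice names:
installed = ["Microsoft Maria", "Letícia (RHVoice)", "eSpeak pt-BR"]
print(pick_voice(installed))  # → Letícia (RHVoice)
```

If no match is found, falling back to the engine's default voice keeps the assistant speaking instead of crashing.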
```python
from gradio_client import Client

# ========== TEST API ==========
def response_GeuAI(input_text):
    client = Client("wendellast/GeuAI")
    result = client.predict(
        message=input_text,
        max_tokens=512,
        temperature=0.7,
        top_p=0.95,
        api_name="/chat",
    )
    return result

# Example call:
input_text = "Hello, how are you?"
response = response_GeuAI(input_text)
print("AI Response:", response)
```

You can also use the model directly via LangChain:
- Define the model you want to use, such as `meta-llama/Llama-3.2-3B-Instruct`.
- Configure your access token in the `.env` file.
- Instantiate the wrapper for the model using the `GeuAIChat` class.
The main generation parameters:

- `temperature`: Controls the randomness of the response.
- `top_p`: Controls the diversity of the responses.
- `repetition_penalty`: Penalizes repetitions for more varied answers.
- `max_new_tokens`: Maximum number of tokens generated in the response.
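To make these knobs concrete, here is a sketch of bundling and sanity-checking them before handing them to the model. The `validate_params` helper and its ranges are illustrative (they reflect common Hugging Face conventions), not part of GeuAI's actual API:

```python
def validate_params(params):
    # Basic sanity checks for the generation parameters above.
    # Ranges reflect common conventions, not hard API limits.
    if not 0.0 <= params["temperature"] <= 2.0:
        raise ValueError("temperature is usually kept in [0, 2]")
    if not 0.0 < params["top_p"] <= 1.0:
        raise ValueError("top_p is a probability mass in (0, 1]")
    if params["repetition_penalty"] <= 0.0:
        raise ValueError("repetition_penalty must be positive")
    if params["max_new_tokens"] < 1:
        raise ValueError("must allow at least one new token")
    return params

generation_params = validate_params({
    "temperature": 0.7,         # lower = more deterministic
    "top_p": 0.95,              # nucleus sampling cutoff
    "repetition_penalty": 1.1,  # values > 1.0 discourage repeats
    "max_new_tokens": 512,      # cap on response length
})
```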
Example usage:
```python
from util.token_access import load_token
from your_package import GeuAIChat

token = load_token()
chatbot = GeuAIChat(auth_token=token)

while True:
    question = input("Ask here: ")
    answer = chatbot._call(question)
    print(f"Response: {answer}")
```