**Fun Fact:** LLMs can sometimes “hallucinate” information, which means they might generate details that sound plausible but are made up. It’s like when someone tells a creative but fictional story and you’re unsure if they’re joking or serious!
- Please refer to the 'Control Flow Diagram' of the application before moving ahead 👇

## What does this application actually do ❔
- First, it initializes the LLaMa 3.1 8B LLM model using Ollama. You can also change the LLM model by editing `config/config.json` (for using the model within the Python code) and `entrypoint.sh` (for pulling the model files).
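The exact schema of `config/config.json` isn't shown here, so the snippet below is only an illustrative sketch that assumes a simple `{"model": ...}` layout for selecting the Ollama model:

```python
import json
from pathlib import Path

# Hypothetical sketch: the real keys in config/config.json may differ.
# We assume a single "model" field naming the Ollama model to use.
config_path = Path("config/config.json")

# Write a sample config so this snippet is self-contained.
config_path.parent.mkdir(exist_ok=True)
config_path.write_text(json.dumps({"model": "llama3.1:8b"}))

config = json.loads(config_path.read_text())
model_name = config["model"]  # e.g. change this value to swap models
print(model_name)
```

Swapping the model would then only require changing that one value (and the corresponding pull in `entrypoint.sh`).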
- Then, the LLM model chooses a Dimension, Shape, Color, and Area, and generates the Python code for drawing an image with those specifications. The generated code is stored at the path `oLLaMa_generated_code_dir/generated_code.py`.
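For a flavor of what such generated code might look like, here is a purely hypothetical example that draws a circle of a given color and area. The real generated code may well use a plotting library such as matplotlib; this sketch emits SVG with only the standard library so it stays self-contained:

```python
import math

# Illustrative stand-in for generated_code.py.
# Assumed specifications (the real ones would come from the LLM):
color = "red"
area = 314.0  # desired circle area in square pixels
radius = math.sqrt(area / math.pi)  # back out the radius from the area

# Emit a minimal SVG so no third-party plotting library is needed.
svg = (
    f'<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    f'<circle cx="50" cy="50" r="{radius:.2f}" fill="{color}"/></svg>'
)

with open("generated_image.svg", "w") as f:
    f.write(svg)
print(f"radius = {radius:.2f}")
```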
- Next, it installs all the dependencies required by `generated_code.py`. After the necessary modules have been installed successfully, it executes `generated_code.py`.
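One way such a dependency-install step could work (the application's actual logic may differ) is to check which of the required modules are importable and build a `pip install` command for the missing ones:

```python
import importlib.util
import sys

# Hypothetical sketch of the dependency-install step: the module list here
# is an assumed example, not taken from the real application.
required = ["math", "json", "nonexistent_demo_module"]

# find_spec returns None for top-level modules that are not installed.
missing = [m for m in required if importlib.util.find_spec(m) is None]
if missing:
    install_cmd = [sys.executable, "-m", "pip", "install", *missing]
    print("would run:", " ".join(install_cmd))
else:
    print("all dependencies satisfied")
```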
- The generated code is also sent back to the LLM model for verification, to ensure that all specifications mentioned for the image are met. If the verification is successful, execution stops with exit code 0. Otherwise, it rectifies the generated code and repeats all the steps from Step No. iii above.
- Just a small catch 🙈: this application is totally asynchronous in nature.
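The asynchronous generate → execute → verify loop described above can be sketched roughly as follows. `generate_code()` is a stub standing in for the real Ollama calls, which aren't shown here:

```python
import asyncio
import sys

async def generate_code(attempt: int) -> str:
    # Stub: the real version would ask the LLM for (or rectify) drawing code.
    # The first attempt deliberately fails to exercise the retry path.
    return "print('drawing the image')" if attempt > 0 else "raise SystemExit(1)"

async def run_generated(code: str) -> int:
    # Execute the generated code in a subprocess and report its exit code.
    proc = await asyncio.create_subprocess_exec(sys.executable, "-c", code)
    return await proc.wait()

async def main() -> int:
    for attempt in range(5):  # bounded retries instead of looping forever
        code = await generate_code(attempt)
        exit_code = await run_generated(code)
        if exit_code == 0:  # verification passed: stop with exit code 0
            return exit_code
        # otherwise "rectify" the code and retry from the generation step
    return 1

result = asyncio.run(main())
print("final exit code:", result)
```

Here the first attempt fails, the loop retries, and the second attempt exits cleanly with code 0, mirroring the rectify-and-repeat behaviour described above.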
## Dependency Installations

- Create a virtual environment and install all the necessary components to run the application: `virtualenv venv`
- Activate your virtual environment:
  - For macOS: `source venv/bin/activate`
  - For Windows: `venv\Scripts\activate.bat`
- Install all the necessary ingredients from the `requirements.txt` file: `pip3 install -r requirements.txt`
To learn more about `virtualenv`: Click Here
## Project Setup and Containerization

- Start the 'Docker Desktop' application and open a terminal in your working directory (where `docker-compose.yaml` and `Dockerfile` are located). Then run the command below to build the oLLaMa image: `docker compose build`
- After the image-building process completes, it's time to compose up the LLM on our local system using: `docker compose up`

  NOTE: The compose-up process might take up to 20-25 minutes the first time, because it downloads all the respective model files.
For a download guide of Docker Desktop for macOS: Click Here

For a download guide of Docker Desktop for Windows: Click Here

For a download guide of Docker Desktop for Linux: Click Here
Congratulations 😎! If you reached this step, you're just one step away from running 'Adaptive Image Code Gen LLM' on your local system.

- Let's pull the trigger on the LLM application by running the command below: `python3 main.py`
- We can add a new layer of image verification by integrating the LLaVA LLM model. This model can verify the image with respect to its contexts and prompts, which will enhance the accuracy of the current workflow.
- After achieving the results mentioned above, if we obtain satisfactory accuracy, we might integrate plotting of multiple shapes and multiple colors.
- Choose any open issue from here.
- Comment on the issue: `Can I work on this?` and start exploring it.
- Make changes in your fork and send a PR.

To create a valid PR successfully, kindly follow this guide: https://help.github.com/articles/creating-a-pull-request/. Otherwise your PR will be closed.

- For an appropriate PR, follow the title format: `Fixes #IssueNo : Name of the Issue`
- For any doubts related to the issues, such as understanding an issue better, comment your queries on the respective issue.
