
abdul-basit-ai

🎯 Focusing
  • Paris Saclay
  • Paris
  • 20:56 (UTC -12:00)

Pinned

  1. JAX_for_LLMs (Public)

    JAX for training LLMs.

    Jupyter Notebook 1

  2. LLaMa2-from-Scratch (Public)

    Implemented LLaMA 2 from scratch (multi-query attention, RoPE, etc.); a rough RoPE sketch follows this list.

    Python 1

  3. Mistral-7B_Q-Lora-Finetuning (Public)

    Fine-tuning the Mistral-7B language model with QLoRA (Quantized Low-Rank Adaptation). As a student, I wanted to explore how LLMs can be fine-tuned efficiently on limited hardware; a sketch of a typical QLoRA setup follows this list.

    Python 1

  4. Multimodal_Vision_Language_Model-from-SCRATCH (Public)

    Implemented Google's vision-language model (PaliGemma) using SigLIP and Gemma 2.

    Python 1

  5. Transformer_From_Scratch (Public)

    Implemented the original "Attention Is All You Need" paper, coding a Transformer from scratch in PyTorch, following Umar Jamil's tutorial.

    Python 1

  6. ould-amine/Team-2_Mistral_AI_MCP_Hackathon (Public)

    Python
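
The LLaMa2-from-Scratch entry above mentions RoPE. As a rough, self-contained illustration of that technique (not code from the repository; the function name and the interleaved pair convention here are my own choices), a minimal PyTorch sketch of rotary position embeddings could look like this:

```python
import torch

def rotary_embedding(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embeddings (RoPE) to x of shape (seq_len, dim).

    Each channel pair (2i, 2i+1) at position p is rotated by the angle
    p * base^(-2i/dim), so relative offsets show up in the dot products
    between rotated queries and keys.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "RoPE expects an even embedding dimension"
    half = dim // 2
    # Per-pair inverse frequencies: base^(-2i/dim) for i = 0..half-1
    inv_freq = base ** (-torch.arange(half, dtype=torch.float32) * 2 / dim)
    # Angle for every (position, pair) combination: shape (seq_len, half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin   # standard 2D rotation
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# Example: rotate a toy query tensor of shape (seq_len=4, dim=8)
q = torch.randn(4, 8)
q_rot = rotary_embedding(q)
print(q_rot.shape)  # torch.Size([4, 8])
```

Applying this to queries and keys before the attention dot product encodes relative position directly in the attention scores, without a separate positional embedding table.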
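
Likewise, for the Mistral-7B_Q-Lora-Finetuning entry, a typical QLoRA setup with Hugging Face transformers, peft, and bitsandbytes might look like the sketch below. It is only illustrative: the rank, alpha, dropout, and target modules are assumed values rather than the ones used in the repository, and running it needs a CUDA GPU plus access to the mistralai/Mistral-7B-v0.1 checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",      # downloads the 7B checkpoint; needs a GPU
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Small trainable low-rank adapters on the attention projections (the LoRA part).
# r, lora_alpha, lora_dropout, and target_modules are illustrative choices.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Only the low-rank adapter matrices are updated during training while the 4-bit quantized base weights stay frozen, which is what keeps the memory footprint small enough for limited hardware.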