Light mode recommended

Hi there 👋, I'm Vansh Gupta

Aspiring Researcher | Coder | Problem Solver

psycoplankton


GITHUB TROPHIES 🏆



💫 About Me

  • 🎓 Undergraduate at IIT BHU, pursuing Engineering Physics
  • 🤖 Passionate about Representation Learning, Natural Language Processing, Probabilistic ML, and Reinforcement Learning
  • 👯 Open to research collaborations in LLMs, Generative Models, and Quantum Machine Learning
  • 📫 Reach me at:

🧠 Experience & Research

🔬 Research Intern — Language and Speech Lab, NTU Singapore

Oct 2024 – Present
Exposure: LLM Fine-Tuning, PEFT, LoRA, LLaMA Adapters, Emotion Extraction, Prompt Tuning
  • Researched low-resource LLM fine-tuning methods and emotion extraction from text (a minimal LoRA sketch follows below).
  • Working on synthetic interviews for depression detection using the DAIC-WOZ dataset, fine-tuned on Reddit data and integrated with emotion detection.
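
The fine-tuning work above leans on PEFT/LoRA. As a hedged illustration, here is a minimal sketch of attaching LoRA adapters to a causal LM with Hugging Face `peft`; the base model name, target modules, and hyperparameters are assumptions for the example, not the lab's actual configuration.

```python
# A minimal LoRA setup; model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters on the attention projections; only these weights are trained.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # rank of the low-rank update matrices
    lora_alpha=16,       # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```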


🧠 Research Intern — Visual Computing and Data Analytics Lab, IIT BHU

Aug 2023 – Apr 2024
Exposure: GANs, Graph Neural Networks, Fuzzy Logic, DeepWalk, Node2Vec, Struc2Vec
  • Re-designed GraphGAN with the Wasserstein loss, improving accuracy from 84.7% → 88.57% (a hedged loss sketch follows below).
  • Implemented DeepWalk, Node2Vec, and Struc2Vec for node embeddings.
  • Developed a Fuzzy Pre-processing Layer using a modified K-Means algorithm, boosting accuracy to 88.95%.
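
For context on the first bullet, this is a minimal sketch of the Wasserstein (WGAN) objective that replaces the standard GAN loss; the critic and samples here are hypothetical stand-ins, not GraphGAN itself.

```python
# A minimal WGAN-style objective; `critic`, `real`, and `fake` are placeholders.
import torch

def critic_loss(critic, real, fake):
    # The critic maximizes the score gap between real and generated samples,
    # so we minimize its negation.
    return critic(fake).mean() - critic(real).mean()

def generator_loss(critic, fake):
    # The generator tries to raise the critic's score on its samples.
    return -critic(fake).mean()

def clip_weights(critic, c=0.01):
    # Weight clipping from the original WGAN paper keeps the critic
    # approximately 1-Lipschitz (gradient penalty is a common alternative).
    for p in critic.parameters():
        p.data.clamp_(-c, c)
```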


⚙️ Machine Learning Engineer Intern — BingeClip.AI

Sep 2024 – Present
Exposure: Super Resolution, Quantization, Knowledge Distillation, LipSync, Mixed Precision Training
  • Worked on inference optimization with Quantization, Knowledge Distillation, and Batch Inference.
  • Applied Mixed Precision Training and Post-Training Quantization to CodeFormer (a minimal sketch follows below).
  • Reduced inference time by 25% and total forward-pass time by 50%.
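
As a hedged illustration of the mixed-precision bullet, here is a minimal `torch.amp` training loop; the model, data, and optimizer are placeholders rather than the CodeFormer pipeline, and the sketch assumes a CUDA device.

```python
# A minimal mixed-precision loop with torch.amp; everything here is a stand-in.
import torch

model = torch.nn.Linear(512, 512).cuda()          # placeholder network
optimizer = torch.optim.Adam(model.parameters())
scaler = torch.cuda.amp.GradScaler()              # rescales grads to avoid fp16 underflow

for _ in range(10):
    x = torch.randn(32, 512, device="cuda")
    target = torch.randn(32, 512, device="cuda")
    optimizer.zero_grad()
    # Forward pass runs in float16 where safe, float32 elsewhere.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()  # scale the loss up before backprop
    scaler.step(optimizer)         # unscale gradients, then apply the update
    scaler.update()
```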


🧪 IBM Research Intern [AI 4 Code Team]

May 2025 – Present

  • Contributing to AI for Code tooling and research problems.
  • Exploring techniques for intelligent code understanding and generation.

🧠 Publications

Enriching Pre-Training Using Fuzzy Logic - Vansh Gupta, Vandana Bharti, Abhinav Kumar, Anshul Sharma, Sanjay Kumar Singh. IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2025).

  • Focused on enhancing language representation by integrating fuzzy logic into the pre-training phase (a generic membership sketch follows below).
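
To make "fuzzy logic integration" concrete, here is a generic fuzzy c-means membership computation, sketched as one plausible form of a fuzzy pre-processing step; the paper's actual method may differ, and all names and sizes here are illustrative.

```python
# A generic fuzzy c-means membership step; NOT the paper's actual algorithm.
import numpy as np

def fuzzy_memberships(X, centers, m=2.0):
    # u[i, j] is the degree to which point i belongs to cluster j; rows sum to 1.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))  # standard fuzzy c-means weighting
    return inv / inv.sum(axis=1, keepdims=True)

X = np.random.randn(100, 8)          # toy embeddings
centers = np.random.randn(4, 8)      # toy cluster centres
U = fuzzy_memberships(X, centers)    # shape (100, 4), each row sums to 1
```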

🌐 Socials


💻 Tech Stack


📊 GitHub Stats

Vansh's GitHub Stats


Vansh's GitHub Activity Graph


🔝 Top Contributed Repo

📌 Pinned Repositories

  1. GPT-Decoded (Public)

     An implementation of the GPT (generative pre-trained transformer) model from scratch, along with the GPT Encoder, trained on Shakespeare's dialogues to produce Shakespearean text (a minimal attention sketch follows after this list).

     Jupyter Notebook

  2. Rupee-vs-Dollar-Time-Series-Forecasting (Public)

     An autoregressive forecasting implementation of an LSTM network, the N-BEATS architecture, ARIMA and SARIMAX regressions, and the Autoformer architecture on rupee-dollar exchange rates, using PyTorch, pytorch…

     Jupyter Notebook
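
Since GPT-Decoded builds the transformer from scratch, here is a minimal sketch of the causal self-attention block at the heart of a GPT; the class, names, and dimensions are illustrative, not code from the repository.

```python
# A minimal single-head causal self-attention block; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    def __init__(self, n_embd=64, block_size=128):
        super().__init__()
        self.key = nn.Linear(n_embd, n_embd, bias=False)
        self.query = nn.Linear(n_embd, n_embd, bias=False)
        self.value = nn.Linear(n_embd, n_embd, bias=False)
        # Lower-triangular mask: each position attends only to the past.
        self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape
        k, q, v = self.key(x), self.query(x), self.value(x)
        att = (q @ k.transpose(-2, -1)) / (C ** 0.5)       # scaled dot-product scores
        att = att.masked_fill(self.mask[:T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)
        return att @ v                                      # weighted sum of values

x = torch.randn(1, 16, 64)        # (batch, time, channels)
out = CausalSelfAttention()(x)    # same shape: (1, 16, 64)
```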