🔭 I am a PhD student in Artificial Intelligence at Northwestern University, advised by Professor Larry Birnbaum. My primary research areas are Natural Language Processing and Conversational AI. Our goal is to build practical conversational AI systems by making LLMs more reliable and robust across tasks, domains, and conversational behaviors.
⚡ I’m searching for AI/ML research internships during my PhD. If you're hiring, please reach out!
The ideal role:
- Focuses on clearly defined problems: What problem are we solving? Can you give me an example? Why is this important?
- Requires creative problem-solving. I enjoy reading research papers to find new ideas to implement.
📫 How to reach me:
🏫 I'm always trying to learn. Here are some of my recent personal projects:
- CalendarBot: A RAG chatbot for calendar event retrieval, using a hybrid approach (date-based and vector-based retrieval). It runs on Google Gemini and can be set up to work with your Google Calendar data.
- LearningToAsk: We tested LLM planning and reasoning capabilities by playing 20 Questions, leveraging datasets from prior work (Mazzaccara et al., EMNLP 2024). We developed a custom backend to simulate the game and measured LLM performance via win percentage, turns per win, and information gain per question. To enhance reasoning, we applied test-time scaling using the budget-forcing technique from the paper s1: Simple test-time scaling.
- NU-NLP: Evaluating Summarizers: A framework to train and evaluate neural abstractive summarizers (BART, Pegasus, T5) and optimization-based summarizers (e.g., TextRank) on a variety of datasets. This code supported the experiments for the paper Multi-domain Summarization from Leaderboards to Practice: Re-examining Automatic and Human Evaluation (Demeter et al., 2023).
- Listening to elephants: Collaborated with FruitPunch.ai to train and deploy models that detect elephant rumbles and gunshots in rainforest audio. We focused on translating a research project to the real world: optimizing audio processing and inference, and deploying deep learning models on edge devices.
- Dope Image Classifier: An ML engineering project on image classification (CIFAR-10) using tools like PyTorch Lightning, Optuna, Weights & Biases, RedisAI, ONNX, and Streamlit.
- Deep Q Trading Agent: A deep reinforcement learning (Deep Q-Network) agent that learns to trade stocks from historical price data.
- LSTM Language Model: Built and trained a word-level LSTM language model on Wikitext-2 and NY Times Covid-19 articles, exploring the effects of choices such as dropout and embedding tying.
- Low Precision Machine Learning: Experimented with quantization error and stochastic rounding using low-precision floating-point representations.
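The hybrid retrieval idea behind CalendarBot, blending date proximity with vector similarity, can be sketched in a few lines. This is a generic illustration, not the project's actual API; all names, the toy embeddings, and the `date_weight` blend are my own assumptions:

```python
from datetime import date
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(events, query_vec, query_date, date_weight=0.5, top_k=2):
    """Rank events by a weighted blend of date proximity and vector similarity."""
    scored = []
    for ev in events:
        days_off = abs((ev["date"] - query_date).days)
        date_score = 1.0 / (1.0 + days_off)   # 1.0 on the exact day, decaying after
        vec_score = cosine(ev["vec"], query_vec)
        score = date_weight * date_score + (1 - date_weight) * vec_score
        scored.append((score, ev["title"]))
    return [title for _, title in sorted(scored, reverse=True)[:top_k]]

# Toy calendar: 2-d "embeddings" stand in for real sentence vectors.
events = [
    {"title": "Dentist", "date": date(2024, 5, 2), "vec": [1.0, 0.0]},
    {"title": "Standup", "date": date(2024, 5, 1), "vec": [0.0, 1.0]},
    {"title": "Review",  "date": date(2024, 4, 1), "vec": [0.0, 1.0]},
]
top = hybrid_search(events, query_vec=[0.0, 1.0], query_date=date(2024, 5, 1))
# "Standup" wins: it matches both the query date and the query vector.
```

Fusing the two signals lets a date-anchored query ("what's on May 1st?") and a semantic query ("when is my next dentist visit?") share one ranking function.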
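The information-gain metric used in LearningToAsk can be sketched as follows: for a yes/no question over a set of remaining candidates, the expected gain is the entropy of the candidate set minus the expected entropy after the answer. This is a minimal, self-contained illustration with hypothetical names, assuming uniform belief over candidates:

```python
import math

def entropy(n):
    """Entropy in bits of a uniform distribution over n candidates."""
    return math.log2(n) if n > 0 else 0.0

def information_gain(candidates, answers_yes):
    """Expected information gain (bits) of a yes/no question.

    candidates:  list of remaining possible targets
    answers_yes: predicate that is True for candidates whose
                 answer to the question would be "yes"
    """
    yes = [c for c in candidates if answers_yes(c)]
    no = [c for c in candidates if not answers_yes(c)]
    n = len(candidates)
    p_yes, p_no = len(yes) / n, len(no) / n
    expected_after = p_yes * entropy(len(yes)) + p_no * entropy(len(no))
    return entropy(n) - expected_after

# A perfectly balanced question over 8 candidates gains exactly 1 bit.
animals = ["cat", "dog", "cow", "horse", "ant", "bee", "fly", "moth"]
gain = information_gain(animals, lambda a: a in {"cat", "dog", "cow", "horse"})
# → 1.0
```

A question that splits the candidates evenly maximizes this quantity, which is why it is a natural per-question score for a 20 Questions player.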
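The stochastic rounding explored in the low-precision project can be sketched like this: instead of always rounding to the nearest representable value, round up with probability proportional to how close the value is to the upper grid point, so the rounding is unbiased in expectation. A minimal sketch (the fixed `step` grid is a stand-in for a real low-precision float format):

```python
import random

def stochastic_round(x, step=0.25):
    """Round x to a multiple of `step`, choosing the upper grid point
    with probability equal to the fractional distance toward it.
    Unlike round-to-nearest, the expected result equals x."""
    lower = (x // step) * step
    frac = (x - lower) / step          # distance toward the upper point, in [0, 1)
    return lower + step if random.random() < frac else lower

random.seed(0)
# Round-to-nearest would map 0.1 -> 0.0 every time, losing the signal;
# stochastic rounding preserves the mean across many samples.
avg = sum(stochastic_round(0.1) for _ in range(10_000)) / 10_000
```

This property is the reason stochastic rounding helps low-precision training: small gradient updates that deterministic rounding would flush to zero still accumulate correctly on average.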