Student at University of San Francisco
Stars
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…
Repository for Programming Assignment 2 for R Programming on Coursera
This repository contains information about the projects and assignments done as part of the Buck Institute Project