
Organizations

@thu-nics @FatnessClub @dgSPARSE

fuvty/README.md

Hi 👋

I’m Tianyu Fu (傅天予), a Ph.D. student supervised by Prof. Yu Wang. I’m a member of NICS-EFC Lab. My research interest lies in efficient large language models and graph neural networks.

New friends are always welcome, so please don’t hesitate to reach out 🤗

Want to know more about me? Check out my recent events and profile.

Pinned

  1. thu-nics/C2C

    The official code implementation for "Cache-to-Cache: Direct Semantic Communication Between Large Language Models"

    Python · 259 stars · 23 forks

  2. thu-nics/R2R

    [NeurIPS'25] The official code implementation for the paper "R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing"

    Python · 57 stars · 7 forks

  3. thu-nics/FrameFusion

    [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models"

    Python · 66 stars · 1 fork

  4. thu-nics/MoA

    [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression"

    Python · 150 stars · 8 forks

  5. DeSCo

    [WSDM'24 Oral] The official implementation of the paper "DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting"

    Python · 21 stars

  6. thu-nics/TaH

    The official implementation of the paper "Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models"

    Python · 4 stars · 1 fork