Stars
Pruning Filter in Filter (NeurIPS 2020)
Code for Neural Architecture Search without Training (ICML 2021)
This was originally a collection of papers on neural network accelerators; it has since grown into my selection of research on deep learning and computer architecture.
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Python interface for MLIR - the Multi-Level Intermediate Representation
An exploration project on photo-to-cartoon translation
A repository to curate and summarise research papers related to fashion and e-commerce
Code for the Tianchi 2018 FashionAI Clothes Keypoint Detection Challenge
PyTorch code for the CVPR 2019 paper "Fast Human Pose Estimation" (https://arxiv.org/abs/1811.05419)
ML model optimization product to accelerate inference.
Quantization of Convolutional Neural Networks.
Open Source Compiler Framework using ONNX as Frontend and IR
Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours
[MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration
Enhance your application with the ability to see and interact with humans using any RGB camera.
Unofficial PyTorch Implementation of Unsupervised Data Augmentation.
Google QUEST Q&A Labeling: improving automated understanding of complex question-answer content
State-of-the-Art Text Embeddings
[ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment
An Open Source Deep Learning Inference Engine Based on FPGA
PyTorch implementation of Data-Free Quantization Through Weight Equalization and Bias Correction.