
yunfei-teng/SAGD


SAGD: Sampling-Accelerated Gradient Descent

This repository contains implementations of the methods from several research papers, including [LSAM], [ESGD], [LSGD], [DP-SAM], and [DP-SGD].

These methods can all be interpreted from a sampling-based perspective and categorized by whether or not they modify the loss landscape. The repository is under active development.

Requirements

The project requires only plain PyTorch; install it from the official website.
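As a minimal sketch, PyTorch can be installed with pip; the exact command (CUDA build, index URL) should be taken from the selector on the official website, so the line below is only the generic default variant:

```shell
# install the default PyTorch build; consult pytorch.org for a
# CUDA-specific command matching your driver
pip3 install torch
```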

Instructions

  1. On the first GPU node, run ifconfig in a terminal to find its IP address;

  2. Open the bash file train.sh: set ip_addr to the IP address obtained in step 1 and num_groups to the total number of GPU nodes (i.e., num_groups=n);

  3. On the i-th GPU node, run bash train.sh [i-1] in a terminal (indices start from 0) to launch the code.
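The three steps above can be sketched as follows. The variable names `ip_addr` and `num_groups` come from `train.sh`; the example IP address and node count are illustrative only:

```shell
# Step 1: on the first GPU node, find its IP address
ifconfig                   # look for the inet entry of the active interface

# Step 2: edit train.sh to point at that node; for example, with 4 GPU nodes:
#   ip_addr=192.168.1.10   # IP address found in step 1 (example value)
#   num_groups=4           # total number of GPU nodes

# Step 3: launch one process per node, indexed from 0
bash train.sh 0            # on the 1st node
bash train.sh 1            # on the 2nd node
# ... and so on, up to num_groups-1 on the last node
```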

Possible Issues

  1. Run nvidia-smi in a terminal to check the NVIDIA driver and CUDA compatibility;

  2. Check that PyTorch versions are consistent across the machines.
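To compare PyTorch versions across machines (issue 2 above), a small diagnostic like the one below can be run on every node and the outputs compared; it is a sketch, not part of the repository:

```shell
# Print the locally installed PyTorch version, or a notice if it is missing;
# run on every node and confirm the outputs match.
pytorch_version() {
  python3 -c "import torch; print(torch.__version__)" 2>/dev/null \
    || echo "torch not installed"
}
pytorch_version
```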

About

An asynchronous distributed training library for Sampling-Accelerated Gradient Descent.
