Locality-aware Parallel Decoding for Efficient Image Generation

Table of Contents

  • Overview
  • Features
  • Installation
  • Usage
  • Architecture
  • Examples
  • Contributing
  • License
  • Contact

Overview

The Locality-aware Parallel Decoding (LPD) project improves the efficiency of autoregressive image generation. Instead of emitting one token per step, LPD decodes several tokens in parallel, using locality-aware techniques to choose which tokens can safely be generated together. This significantly speeds up decoding while maintaining high-quality output. The repository includes implementations and benchmarks that demonstrate the approach.

For the latest releases, visit Releases.
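As a rough illustration of the core idea (a toy sketch, not the project's actual algorithm or API), parallel decoding emits a whole group of tokens per step, where tokens grouped together are spatially far apart on the token grid; plain autoregressive decoding emits one token per step:

```python
# Toy sketch of locality-aware parallel decoding. The strided grouping
# below is an illustrative assumption; the real scheduling lives in
# this repository's src/ directory.

def sequential_steps(num_tokens: int) -> int:
    """Plain autoregressive decoding: one token per forward pass."""
    return num_tokens

def parallel_groups(grid: int, group: int):
    """Partition a grid x grid token map into groups of mutually
    distant positions; each group is decoded in one parallel step."""
    groups = {}
    for y in range(grid):
        for x in range(grid):
            # Positions sharing (y % group, x % group) are at least
            # `group` cells apart, so decoding them together is
            # lower-risk than decoding adjacent tokens in parallel.
            key = (y % group, x % group)
            groups.setdefault(key, []).append((y, x))
    return list(groups.values())

groups = parallel_groups(grid=16, group=4)
print(len(groups))                # 16 parallel steps
print(sequential_steps(16 * 16))  # 256 sequential steps
```

For a 16x16 token grid, the sketch covers all 256 positions in 16 parallel steps instead of 256 sequential ones, which is where the decoding speedup comes from.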

Features

  • Acceleration: Optimized for fast decoding.
  • Autoregressive: Implements state-of-the-art autoregressive models.
  • Efficient Algorithm: Utilizes locality-aware strategies for better performance.
  • Image Generation: Capable of generating high-quality images.
  • ImageNet Compatibility: Works seamlessly with ImageNet datasets.
  • Parallel Decoding: Supports parallel processing to enhance speed.

Installation

To get started with LPD, clone the repository and install the required dependencies.

git clone https://github.com/chunyu0208/lpd.git
cd lpd
pip install -r requirements.txt

Make sure you have Python 3.7 or higher installed on your machine.

Usage

After installation, you can start using LPD for your image generation tasks. The main script is located in the src directory.

To generate images, run the following command:

python src/generate.py --config config.yaml

Make sure to modify the config.yaml file according to your requirements. You can specify parameters such as the number of images to generate, output directory, and model checkpoints.
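A minimal config.yaml might look like the following. The key names here are hypothetical placeholders for the parameters mentioned above; the actual keys are defined by src/generate.py in this repository.

```yaml
# Hypothetical config.yaml sketch (key names are assumptions).
num_images: 8                           # number of images to generate
output_dir: outputs/                    # where generated images are written
checkpoint: checkpoints/lpd_model.pt    # model checkpoint to load
```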

For detailed examples, refer to the Examples section.

Architecture

The architecture of LPD is designed for efficiency and scalability. It consists of the following components:

  1. Data Loader: Handles loading and preprocessing of image datasets.
  2. Model: Implements the autoregressive model with locality-aware features.
  3. Decoder: Responsible for the parallel decoding process.
  4. Evaluator: Measures the quality of generated images.

Each component is modular, allowing for easy customization and extension.
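The four-stage pipeline described above can be sketched as follows. The class and method names are illustrative stand-ins, not the repository's actual API; the point is that each stage is a separate object that can be swapped out independently.

```python
# Minimal sketch of the modular pipeline (names are assumptions).

class DataLoader:
    def load(self):
        # Stand-in for loading and preprocessing an image dataset.
        return ["batch-0", "batch-1"]

class Model:
    def predict(self, batch):
        # Stand-in for the locality-aware autoregressive model.
        return f"logits({batch})"

class Decoder:
    def decode(self, logits):
        # Stand-in for the parallel decoding step.
        return f"image<{logits}>"

class Evaluator:
    def score(self, images):
        # Stand-in for an image-quality metric.
        return len(images)

def run_pipeline():
    loader, model, decoder, evaluator = DataLoader(), Model(), Decoder(), Evaluator()
    images = [decoder.decode(model.predict(b)) for b in loader.load()]
    return images, evaluator.score(images)

images, score = run_pipeline()
print(score)  # 2
```

Because each stage only depends on the previous stage's output, replacing, say, the Evaluator with a different metric requires no changes to the other components.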

Diagram

Architecture Diagram

Examples

Here are a few examples of how to use LPD for image generation.

Example 1: Generate a Single Image

To generate a single image, you can use the following command:

python src/generate.py --config config_single.yaml

Example 2: Generate Multiple Images

To generate multiple images at once, set the number of images in config_multiple.yaml, then run:

python src/generate.py --config config_multiple.yaml

Example 3: Customizing Output

You can customize the output size and format by adjusting parameters in the configuration file.

Refer to the documentation for more examples and detailed explanations.

Contributing

We welcome contributions to improve LPD. To contribute, follow these steps:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature-branch).
  3. Make your changes and commit them (git commit -m 'Add new feature').
  4. Push to the branch (git push origin feature-branch).
  5. Create a pull request.

Please ensure that your code adheres to our coding standards and includes appropriate tests.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contact

For questions or feedback, feel free to open an issue on the repository.
