
Common Representation Learning using Step-based Correlation Multi-Modal CNN

Implementation of the paper Common Representation Learning Using Step-based Correlation Multi-Modal CNN using both Keras and PyTorch.

Aim

To build a novel step-based correlation multi-modal CNN (CorrMCNN) that reconstructs one view of the data given the other, while increasing the interaction between the two representations at each hidden layer, i.e., at every intermediate step.

Dataset

  • MNIST handwritten digits dataset: 60,000 images for training and 10,000 for testing.
  • Each 28 x 28 image is split vertically into two halves, giving two views of 28 x 14 = 392 features each (see the split sketch below).
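As a quick illustration, the view split can be reproduced with a few lines of array slicing. The snippet below is a minimal sketch that uses Keras's built-in MNIST loader; the variable names are illustrative and not taken from this repository.

```python
from tensorflow.keras.datasets import mnist

# Load MNIST: 60,000 training and 10,000 test images of size 28 x 28.
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype("float32") / 255.0

# Split each image vertically: left and right halves of 28 x 14 pixels,
# flattened to 392 features per view.
left_view = x_train[:, :, :14].reshape(len(x_train), -1)   # shape: (60000, 392)
right_view = x_train[:, :, 14:].reshape(len(x_train), -1)  # shape: (60000, 392)
```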

Technique: Deep Autoencoder based Approach

A multi-modal autoencoder is used: a two-channel autoencoder that performs two types of reconstruction, which gives it the ability to adapt to transfer-learning tasks (see the sketch after this list):

  • Self-reconstruction of a view from itself.
  • Cross-reconstruction, where one view is reconstructed given the other.
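A minimal PyTorch sketch of such a two-channel autoencoder is shown below. The class name, layer sizes, and activation are illustrative assumptions rather than the exact architecture used in this repository, but the self- and cross-reconstruction paths match the description above.

```python
import torch
import torch.nn as nn

class TwoChannelAE(nn.Module):
    """Minimal two-channel autoencoder sketch: one encoder/decoder per view,
    with both self- and cross-reconstruction paths."""

    def __init__(self, view_dim=392, hidden_dim=50):
        super().__init__()
        self.enc_left = nn.Linear(view_dim, hidden_dim)
        self.enc_right = nn.Linear(view_dim, hidden_dim)
        self.dec_left = nn.Linear(hidden_dim, view_dim)
        self.dec_right = nn.Linear(hidden_dim, view_dim)

    def forward(self, left, right):
        h_left = torch.sigmoid(self.enc_left(left))
        h_right = torch.sigmoid(self.enc_right(right))
        # Self-reconstruction: each view from its own hidden code.
        self_left, self_right = self.dec_left(h_left), self.dec_right(h_right)
        # Cross-reconstruction: each view from the other view's hidden code.
        cross_left, cross_right = self.dec_left(h_right), self.dec_right(h_left)
        return h_left, h_right, self_left, self_right, cross_left, cross_right
```

In the CorrMCNN variant described under Implementation, the linear layers above are replaced by convolution and deconvolution layers, and a correlation term ties the two channels together at every intermediate step.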

Implementation

This research paper is an improvement over the Correlational Neural Networks paper with the following additions:

  • Introduced convolution layers in the encoding phase and deconvolution layers in the decoding stage of the correlation multi-modal CNN (CorrMCNN).
  • Added Batch Normalization in the intermediate layers.
  • Instead of using only the final hidden representations in the correlation loss, the correlation is computed at each intermediate layer (a sketch of such a correlation term follows this list).
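The per-layer correlation term can be sketched as follows. This is a simplified PyTorch version of a correlation objective between two batches of hidden activations; the exact formulation and per-layer weighting used in the paper and in this repository may differ.

```python
import torch

def correlation_loss(h1, h2, eps=1e-8):
    """Negative correlation between two batches of hidden activations
    (shape: batch x units). Minimizing this term maximizes correlation."""
    h1_c = h1 - h1.mean(dim=0, keepdim=True)
    h2_c = h2 - h2.mean(dim=0, keepdim=True)
    numerator = (h1_c * h2_c).sum()
    denominator = torch.sqrt((h1_c ** 2).sum() * (h2_c ** 2).sum()) + eps
    return -numerator / denominator

# Illustrative total loss: reconstruction terms plus a correlation term for
# every intermediate layer pair, instead of only the final representation.
# total = recon_loss + lambda_corr * sum(
#     correlation_loss(hl, hr)
#     for hl, hr in zip(hidden_left_layers, hidden_right_layers))
```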

Architecture

(Figure: CorrMCNN architecture diagram)

Results

(Figures: results of the CorrMCNN implementation)
