Why is the loss in wgan.py different from the original paper? #10

@guojting
I'm confused about which formulation is correct.
As implemented in wgan.py, we have
self.g_loss = tf.reduce_mean(self.d_)
self.d_loss = tf.reduce_mean(self.d) - tf.reduce_mean(self.d_)
However, according to the original WGAN paper, it seems we should minimize (-1)*self.g_loss rather than self.g_loss. Could you explain why the losses are implemented in the form above? In any case, I still get reasonable-looking results with either wgan.py or wgan_v2.py, which confuses me even more.
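To spell out what I mean by "the paper's signs", here is my reading of the critic and generator objectives (a sketch of Algorithm 1 from the paper in my own notation, with f_w the critic and g_\theta the generator):

\max_{w}\ \mathbb{E}_{x \sim \mathbb{P}_r}[f_w(x)] - \mathbb{E}_{z \sim p(z)}[f_w(g_\theta(z))]   (critic)
\min_{\theta}\ -\mathbb{E}_{z \sim p(z)}[f_w(g_\theta(z))]   (generator)

Written as losses to minimize, that would give d_loss = E[f(fake)] - E[f(real)] and g_loss = -E[f(fake)], i.e. the opposite signs from wgan.py.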

What about the following losses instead?
self.g_loss = tf.reduce_mean(tf.scalar_mul(-1, self.d_))
self.d_loss = tf.reduce_mean(self.d_) - tf.reduce_mean(self.d)
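For completeness, here is a minimal sketch of how I would wire these paper-sign losses into the training ops (purely illustrative, assuming import tensorflow as tf under TF 1.x; self.d_vars, self.g_vars, and the learning rate 5e-5 / clip value 0.01 are my assumptions taken from Algorithm 1 in the paper, not the repo's actual names):

# Sketch only: RMSProp minimizes the paper-sign losses defined above.
self.d_opt = tf.train.RMSPropOptimizer(5e-5).minimize(self.d_loss, var_list=self.d_vars)
self.g_opt = tf.train.RMSPropOptimizer(5e-5).minimize(self.g_loss, var_list=self.g_vars)

# Weight clipping keeps the critic approximately K-Lipschitz, as in Algorithm 1.
self.clip_ops = [v.assign(tf.clip_by_value(v, -0.01, 0.01)) for v in self.d_vars]

With that setup, each critic step would run self.d_opt followed by self.clip_ops, and each generator step would run self.g_opt.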

Thank you!
