I have spent a lot of time studying this paper and this code. However, similar to this thread: #4, I cannot replicate the results of the paper. I am training on the KITTI dataset. The pixel loss fluctuates between 2 and 4, while the edge loss quickly converged to 2.6e-3. Is this expected? The paper mentions that the lambda terms are chosen so that the losses are of similar scale, yet the pixel loss is of scale 1e0 while all the other losses are of scale 1e-3.
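To make the scale mismatch concrete, here is a toy calculation with the magnitudes I observe (the lambda names and values are placeholders of mine, not the repo's actual flags):

```python
# Toy numbers matching what I observe: pixel loss ~ O(1), other losses ~ O(1e-3).
pixel_loss = 3.0
smooth_loss = 2.6e-3
edge_loss = 2.6e-3

# Hypothetical lambda weights (names and values are mine, not the repo's).
lambda_smooth, lambda_edge = 1.0, 1.0

total = pixel_loss + lambda_smooth * smooth_loss + lambda_edge * edge_loss
print(total)  # the pixel term dominates unless the lambdas are on the order of 1e3
```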
When I visualize the results using the provided visualization code, I get the attached images, which look nothing like the examples shown in the paper. In fact, I get similar results without any training.
Finally, the paper states that equations 6 and 7 are used to compute the depth and normal smoothness losses. However, after reading the code, it seems that only equation 2 is used. Most of the functions added relative to the original SfMLearner appear to be different ways of computing the smoothness loss, and only one of them is actually called: compute_smooth_loss_wedge(), the implementation of equation 2 from the paper. Am I misreading the code? For reference, a sketch of my reading of that term is below.
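This is roughly how I understand the edge-weighted smoothness idea that compute_smooth_loss_wedge() seems to implement (a minimal sketch with my own function and tensor names, not the repo's exact code):

```python
import tensorflow as tf

def edge_weighted_smooth_loss(depth, edge_map):
    """Sketch of an edge-weighted first-order depth smoothness term.

    depth:    [B, H, W, 1] predicted depth
    edge_map: [B, H, W, 1] predicted edge probability in [0, 1]
    Names and weighting are my reading, not the repo's exact code.
    """
    # First-order depth gradients along x and y
    d_dx = tf.abs(depth[:, :, 1:, :] - depth[:, :, :-1, :])
    d_dy = tf.abs(depth[:, 1:, :, :] - depth[:, :-1, :, :])

    # Down-weight the penalty where the edge map fires, so depth is
    # allowed to be discontinuous across predicted edges
    w_x = 1.0 - edge_map[:, :, 1:, :]
    w_y = 1.0 - edge_map[:, 1:, :, :]

    return tf.reduce_mean(w_x * d_dx) + tf.reduce_mean(w_y * d_dy)
```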
Can @zhenheny please comment on this? I think the ideas presented in this paper are fascinating, and I would like to learn more.
Thank you.