Perceptual image quality using Dual generative adversarial network
Journal contribution posted on 27.06.2019, 15:18 by Masoumeh Zareapoor, Huiyu Zhou, Jie Yang
Generative adversarial networks have achieved remarkable success in many computer vision applications owing to their ability to learn complex data distributions. In particular, they can generate realistic images from a latent space with a simple and intuitive structure. Existing models have focused mainly on improving performance; little attention has been paid to building a robust model. In this paper, we investigate solutions to super-resolution problems, in particular perceptual quality, by proposing a robust GAN. Unlike the standard GAN, the proposed model employs two generators and two discriminators: one discriminator determines whether samples come from the real data or a generator, while the other acts as a classifier that returns wrong samples to their corresponding generators. The generators learn a mixture of many distributions, from the prior to the complex data distribution. The model is trained with a feature matching loss, which allows wrong samples to be returned to their corresponding generators so that realistic-looking samples can be regenerated. Experimental results on various datasets show the superiority of the proposed model over state-of-the-art methods.
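The dual-GAN structure described above can be sketched in code. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the module names (`G1`, `G2`, `D`, `C`), layer sizes, and the exact form of the feature matching loss are all assumptions made for clarity.

```python
# Hypothetical sketch of the dual-GAN idea from the abstract: two
# generators, a real/fake discriminator D, and a second discriminator C
# acting as a classifier that can route a "wrong" sample back to the
# generator that produced it. Sizes and names are illustrative.
import torch
import torch.nn as nn

LATENT, IMG = 16, 64  # toy latent and flattened-image dimensions


def make_generator() -> nn.Module:
    return nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, IMG))


G1, G2 = make_generator(), make_generator()


class Critic(nn.Module):
    """Real-vs-fake discriminator; its hidden layer supplies the
    intermediate features used by the feature matching loss."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(IMG, 32), nn.ReLU())
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f


D = Critic()
# C: classifier over generator identity; a sample C attributes to the
# "wrong" generator would be sent back to its own generator to redo.
C = nn.Sequential(nn.Linear(IMG, 32), nn.ReLU(), nn.Linear(32, 2))

z = torch.randn(8, LATENT)
real = torch.randn(8, IMG)       # stand-in for a real image batch
fake1, fake2 = G1(z), G2(z)

# Feature matching loss: match mean discriminator features of real and
# generated batches rather than directly maximizing D's confusion.
_, f_real = D(real)
_, f_fake = D(fake1)
fm_loss = ((f_real.mean(0) - f_fake.mean(0)) ** 2).mean()

# C assigns each generated sample to a generator index (0 or 1).
assignment = C(torch.cat([fake1, fake2])).argmax(dim=1)
```

In this sketch, training would alternate between updating `D` and `C` on real and generated batches, and updating `G1`/`G2` on `fm_loss` plus a routing term derived from `assignment`; those update loops are omitted for brevity.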