Please use this identifier to cite or link to this item:
Title: A Fast-Dehazing Technique using Generative Adversarial Network model for Illumination Adjustment in Hazy Videos
Authors: Naidu, T M Praneeth
Sekhar, P Chandra
Keywords: Depth estimation;Discriminator model;Generative adversarial networks;Generator model;ResNet
Issue Date: Mar-2023
Publisher: NIScPR-CSIR, India
Abstract: Haze significantly lowers the quality of captured photos and videos. Beyond degrading the footage itself, this can reduce the dependability of monitoring equipment, which is potentially dangerous. Problems caused by foggy conditions have increased in recent years, necessitating the development of real-time dehazing techniques. Intelligent vision systems, such as surveillance and monitoring systems, depend fundamentally on the quality of their input images, which strongly affects the accuracy of object detection. This paper presents a fast video dehazing technique using a Generative Adversarial Network (GAN) model. The haze in the input video is estimated from scene depth extracted using a pre-trained monocular depth ResNet model. Based on the amount of haze, an appropriate model is selected that has been trained for those specific haze conditions. The novelty of the proposed work is that the generator model is kept simple to produce faster results in real time, while the discriminator is kept complex to make the generator more effective. The traditional loss function is replaced with a Visual Geometry Group (VGG) feature loss for better dehazing. The proposed model produced better results than existing models: the Peak Signal-to-Noise Ratio (PSNR) obtained for most frames is above 32, and the execution time is under 60 milliseconds, which makes the proposed model well suited for video dehazing.
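To illustrate the PSNR figure quoted in the abstract, the metric for 8-bit frames can be computed as below. This is a minimal sketch of the standard PSNR definition, not code from the paper; the sample pixel values are invented for illustration.

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two equal-length pixel sequences."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Hypothetical dehazed patch vs. ground truth: per-pixel errors of +/-1
# give MSE = 1 and hence a high PSNR.
ref = [120, 130, 140, 150]
out = [121, 129, 141, 149]
print(round(psnr(ref, out), 2))  # ≈ 48.13 dB
```

A PSNR above 32 dB, as reported for most frames, corresponds to a mean squared error well under 41 on the 0-255 scale, i.e. a close reconstruction of the haze-free frame.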
Page(s): 328-337
ISSN: 0022-4456 (Print); 0975-1084 (Online)
Appears in Collections:JSIR Vol.82(03) [March 2023]

Files in This Item:
File: JSIR 82(03) 328-337.pdf (1.98 MB, Adobe PDF)

Items in NOPR are protected by copyright, with all rights reserved, unless otherwise indicated.