Enhancement of Underwater Images Using Neural Style Transfer
© 2025 by IJCTT Journal
Volume-73 Issue-4
Year of Publication : 2025
Authors : Apeksha Jain, D.A. Mehta
DOI : 10.14445/22312803/IJCTT-V73I4P104
How to Cite?
Apeksha Jain, D.A. Mehta, "Enhancement of Underwater Images Using Neural Style Transfer," International Journal of Computer Trends and Technology, vol. 73, no. 4, pp. 28-34, 2025. Crossref, https://doi.org/10.14445/22312803/IJCTT-V73I4P104
Abstract
The quality of underwater images is of considerable significance in computer vision for understanding sea life and assessing the geological environment and archaeology beneath the water. Owing to the physical properties of underwater environments, capturing sharp underwater images is a challenging task. These images commonly suffer color distortion and visibility degradation because of light absorption and scattering. Current approaches are time-consuming and require large datasets to achieve reasonable results in enhancing underwater images. This paper presents a technique for enhancing underwater images using neural style transfer. The resulting output image is less hazy, and the content loss is very low. A comparison has also been made between the output images obtained with and without segmentation. In addition, the content loss has been reduced, and the loss percentage and a histogram showing the haze difference between the input image and the generated output image are presented.
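For readers unfamiliar with the mechanics, the sketch below illustrates the core loss computation of Gatys-style neural style transfer, on which approaches like this one build: a frozen VGG-19 extracts features, the content loss penalizes feature-space deviation from the hazy input, and the style loss matches Gram-matrix statistics of a clear reference image. This is a minimal sketch assuming a PyTorch environment; the layer index, loss weights, and single-layer style term are illustrative simplifications, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen, pretrained VGG-19 feature extractor (inputs are assumed to be
# ImageNet-normalized (B, 3, H, W) tensors).
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layer_idx=21):
    # Run x through VGG-19 up to a chosen layer. Index 21 is conv4_2,
    # the content layer used by Gatys et al.; the choice is illustrative.
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == layer_idx:
            return x
    return x

def gram(f):
    # Gram matrix of a feature map: channel-wise correlations that
    # summarize "style" independent of spatial layout.
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(generated, content_img, style_img, alpha=1.0, beta=1e3):
    # Weighted sum of content loss (against the hazy underwater input)
    # and style loss (against a clear reference image). A single style
    # layer is used here for brevity; Gatys et al. average over several.
    # alpha and beta are illustrative weights.
    c_loss = F.mse_loss(features(generated), features(content_img))
    s_loss = F.mse_loss(gram(features(generated)), gram(features(style_img)))
    return alpha * c_loss + beta * s_loss
```

The haze comparison described above can be visualized with per-channel intensity histograms of the input and output images; underwater haze typically compresses the red channel and inflates blue/green. A minimal sketch, with hypothetical filenames:

```python
import cv2
import numpy as np

def channel_histograms(path, bins=32):
    # Per-channel intensity histograms (OpenCV loads in B, G, R order).
    img = cv2.imread(path)
    return [np.histogram(img[:, :, c], bins=bins, range=(0, 256))[0]
            for c in range(3)]

before = channel_histograms("input_underwater.jpg")  # hypothetical filenames
after = channel_histograms("enhanced_output.jpg")
for name, b, a in zip("BGR", before, after):
    shift = np.abs(b - a).sum() / b.sum()
    print(f"channel {name}: normalized histogram change = {shift:.2f}")
```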
Keywords
Underwater image enhancement, Neural style transfer, Color correction, Haze removal.