Citation: Yexin Liu, Ben Xu, Mengmeng Zhang, Wei Li, and Ran Tao. Sub-Regional Infrared-Visible Image Fusion Using Multi-Scale Transformation[J]. Journal of Beijing Institute of Technology, 2022, 31(6): 535-550. doi: 10.15918/j.jbit1004-0579.2021.096
[1] J. Dong, D. Zhuang, Y. Huang, and J. Fu, “Advances in multisensor data fusion: algorithms and applications,” Sensors, vol. 9, no. 10, pp. 7771-7784, 2009. doi: 10.3390/s91007771
[2] J. A. Sobrino, F. Del Frate, M. Drusch, J. C. Jimenez-Munoz, P. Manunta, and A. Regan, “Review of thermal infrared applications and requirements for future high-resolution sensors,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 2963-2972, 2016. doi: 10.1109/TGRS.2015.2509179
[3] M. Zhao, L. Li, W. Li, R. Tao, L. Li, and W. Zhang, “Infrared small-target detection based on multiple morphological profiles,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 7, pp. 6077-6091, 2020.
[4] M. Zhao, W. Li, L. Li, P. Ma, Z. Cai, and R. Tao, “Three-order tensor creation and Tucker decomposition for infrared small-target detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-16, 2022.
[5] J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: a survey,” Information Fusion, vol. 45, pp. 153-178, 2019. doi: 10.1016/j.inffus.2018.02.004
[6] D. P. Bavirisetti and R. Dhuli, “Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform,” IEEE Sensors Journal, vol. 16, no. 1, pp. 203-209, 2016. doi: 10.1109/JSEN.2015.2478655
[7] S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864-2875, 2013. doi: 10.1109/TIP.2013.2244222
[8] Z. Zhou, B. Wang, S. Li, and M. Dong, “Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters,” Information Fusion, vol. 30, pp. 15-26, 2016. doi: 10.1016/j.inffus.2015.11.003
[9] M. J. Tan, S. B. Gao, W. Z. Xu, and S. C. Han, “Visible-infrared image fusion based on early visual information processing mechanisms,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 11, pp. 4357-4369, 2020.
[10] H. Sun, Q. Liu, J. Wang, J. Ren, Y. Wu, H. Zhao, and H. Li, “Fusion of infrared and visible images for remote detection of low-altitude slow-speed small targets,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 2971-2983, 2021. doi: 10.1109/JSTARS.2021.3061496
[11] X. Huang, G. Qi, H. Wei, Y. Chai, and J. Sim, “A novel infrared and visible image information fusion method based on phase congruency and image entropy,” Entropy, vol. 21, no. 12, p. 1135, 2019. doi: 10.3390/e21121135
[12] X. Lu, B. Zhang, Y. Zhao, H. Liu, and H. Pei, “The infrared and visible image fusion algorithm based on target separation and sparse representation,” Infrared Physics & Technology, vol. 67, pp. 397-407, 2014.
[13] J. Wang, J. Peng, X. Feng, G. He, and J. Fan, “Fusion method for infrared and visible images by using non-negative sparse representation,” Infrared Physics & Technology, vol. 67, pp. 477-489, 2014.
[14] J. Cai, Q. Cheng, M. Peng, and Y. Song, “Fusion of infrared and visible images based on nonsubsampled contourlet transform and sparse K-SVD dictionary learning,” Infrared Physics & Technology, vol. 82, pp. 85-95, 2017.
[15] Y. Yang, Y. Zhang, S. Huang, Y. Zuo, and J. Sun, “Infrared and visible image fusion using visual saliency sparse representation and detail injection model,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-15, 2021.
[16] |
Z . Q. Zhu, H. P. Yin, Y. Chai, Y. X. Li, and G. Q. Qi,“A novel multi-modality image fusion method based on image decomposition and sparse representation,”
Information Sciences An International Journal, vol. 432, pp. 516-529, 2018.
doi:10.1016/j.ins.2017.09.010
|
[17] D. P. Bavirisetti, G. Xiao, and G. Liu, “Multi-sensor image fusion based on fourth order partial differential equations,” in 2017 20th International Conference on Information Fusion (Fusion), 2017, pp. 1-9.
[18] W. Kong, Y. Lei, and H. Zhao, “Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization,” Infrared Physics & Technology, vol. 67, pp. 161-172, 2014.
[19] D. P. Bavirisetti and R. Dhuli, “Two-scale image fusion of visible and infrared images using saliency detection,” Infrared Physics & Technology, vol. 76, pp. 52-64, 2016.
[20] J. Ma, Z. Zhou, B. Wang, and H. Zong, “Infrared and visible image fusion based on visual saliency map and weighted least square optimization,” Infrared Physics & Technology, vol. 82, pp. 8-17, 2017.
[21] Y. Chen and N. Sang, “Attention-based hierarchical fusion of visible and infrared images,” Optik, vol. 126, no. 23, pp. 4243-4248, 2015. doi: 10.1016/j.ijleo.2015.08.120
[22] J. Zhao, X. Gao, Y. Chen, H. Feng, and D. Wang, “Multi-window visual saliency extraction for fusion of visible and infrared images,” Infrared Physics & Technology, vol. 76, pp. 295-302, 2016.
[23] Y. Liu, X. Chen, J. Cheng, H. Peng, and Z. Wang, “Infrared and visible image fusion with convolutional neural networks,” International Journal of Wavelets, Multiresolution and Information Processing, vol. 16, no. 3, p. 1850018, 2018.
[24] H. Li and X. J. Wu, “DenseFuse: A fusion approach to infrared and visible images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614-2623, 2018.
[25] J. Ma, H. Zhang, Z. Shao, P. Liang, and H. Xu, “GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-14, 2021.
[26] L. Jian, X. Yang, Z. Liu, G. Jeon, M. Gao, and D. Chisholm, “SEDRFuse: A symmetric encoder-decoder with residual block network for infrared and visible image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-15, 2021.
[27] A. Raza, J. Liu, Y. Liu, J. Liu, Z. Li, X. Chen, H. Huo, and T. Fang, “IR-MSDNet: Infrared and visible image fusion based on infrared features and multiscale dense network,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 3426-3437, 2021. doi: 10.1109/JSTARS.2021.3065121
[28] Q. Li, L. Lu, Z. Li, W. Wu, Z. Liu, G. Jeon, and X. Yang, “Coupled GAN with relativistic discriminators for infrared and visible images fusion,” IEEE Sensors Journal, vol. 21, pp. 7458-7467, 2021.
[29] M. Haspelmath and A. D. Sims, Understanding Morphology, 2002. [Online]. Available: http://www.researchgate.net/publication/271018639_Understanding_Morphology
[30] S. G. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.
[31] B. Ashalatha and B. Reddy, “Enhanced pyramid image fusion on visible and infrared images at pixel and feature levels,” in 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), 2017, pp. 613-618.
[32] J. Chen, X. Li, L. Luo, X. Mei, and J. Ma, “Infrared and visible image fusion based on target-enhanced multiscale transform decomposition,” Information Sciences, vol. 508, pp. 64-78, 2020. doi: 10.1016/j.ins.2019.08.066
[33] V. Naidu, “Image fusion technique using multi-resolution singular value decomposition,” Defence Science Journal, vol. 61, no. 5, p. 479, 2011. doi: 10.14429/dsj.61.705
[34] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979. doi: 10.1109/TSMC.1979.4310076
[35] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990. doi: 10.1109/34.56205
[36] D. P. Bavirisetti and R. Dhuli, “Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform,” IEEE Sensors Journal, vol. 16, no. 1, pp. 203-209, 2016.
[37] R. Singh and A. Khare, “Multiscale medical image fusion in wavelet domain,” The Scientific World Journal, vol. 2013, p. 521034, 2013.
[38] M. K. Vairalkar and S. Nimbhorkar, “Edge detection of images using Sobel operator,” International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 1, pp. 291-293, 2012.
[39] H. Chen and P. K. Varshney, “A human perception inspired quality metric for image fusion based on regional information,” Information Fusion, vol. 8, no. 2, pp. 193-207, 2007. doi: 10.1016/j.inffus.2005.10.001
[40] A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Transactions on Communications, vol. 43, no. 12, pp. 2959-2965, 1995. doi: 10.1109/26.477498
[41] X. Guo, R. Nie, J. Cao, D. Zhou, L. Mei, and K. He, “FuseGAN: Learning to fuse multi-focus image via conditional generative adversarial network,” IEEE Transactions on Multimedia, vol. 21, no. 8, pp. 1982-1996, 2019. doi: 10.1109/TMM.2019.2895292
[42] J. Ma, C. Chen, C. Li, and J. Huang, “Infrared and visible image fusion via gradient transfer and total variation minimization,” Information Fusion, vol. 31, pp. 100-109, 2016. doi: 10.1016/j.inffus.2016.02.001
[43] Y. Zhang, L. Zhang, X. Bai, and L. Zhang, “Infrared and visual image fusion through infrared feature extraction and visual information preservation,” Infrared Physics & Technology, vol. 83, pp. 227-237, 2017.