Welcome to Journal of Beijing Institute of Technology
Volume 31, Issue 6
Dec. 2022
Yexin Liu, Ben Xu, Mengmeng Zhang, Wei Li, Ran Tao. Sub-Regional Infrared-Visible Image Fusion Using Multi-Scale Transformation[J]. JOURNAL OF BEIJING INSTITUTE OF TECHNOLOGY, 2022, 31(6): 535-550. doi: 10.15918/j.jbit1004-0579.2021.096

Sub-Regional Infrared-Visible Image Fusion Using Multi-Scale Transformation

doi: 10.15918/j.jbit1004-0579.2021.096
Funds: This work was supported by the China Postdoctoral Science Foundation Funded Project (No. 2021M690385) and the National Natural Science Foundation of China (No. 62101045).
More Information
  • Author Bios:

    Yexin Liu received the B.S. degree in electronics and information engineering from Beijing Institute of Technology, Beijing, China, in 2020. He is currently pursuing the M.S. degree with Beijing Institute of Technology, Beijing, under the supervision of Dr. W. Li. His research interests include hyperspectral image reconstruction and restoration.

    Ben Xu received the B.S. degree in communication engineering and the M.S. degree in information and communication engineering from Beijing Institute of Technology, Beijing, China, in 2019 and 2022, respectively. His research interests include image processing and pattern recognition.

    Mengmeng Zhang received the B.S. degree in computer science and technology from Qingdao University of Science and Technology, Qingdao, China, in 2014, and the Ph.D. degree in control science and engineering from Beijing University of Chemical Technology, Beijing, China, in 2019. She is currently an Assistant Professor with the School of Information and Electronics, Beijing Institute of Technology, Beijing. Her research interests include remote sensing image processing and pattern recognition.

    Wei Li (Senior Member, IEEE) received the B.E. degree in telecommunications engineering from Xidian University, Xi’an, China, in 2007, the M.S. degree in information science and technology from Sun Yat-Sen University, Guangzhou, China, in 2009, and the Ph.D. degree in electrical and computer engineering from Mississippi State University, Starkville, MS, USA, in 2012. Subsequently, he spent one year as a Postdoctoral Researcher with the University of California at Davis, Davis, CA, USA. He is a Professor with the School of Information and Electronics, Beijing Institute of Technology, Beijing, China. His research interests include hyperspectral image analysis, pattern recognition, and data reconstruction. Dr. Li is currently an Associate Editor of the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, and was an Associate Editor of the IEEE SIGNAL PROCESSING LETTERS and the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING.

    Ran Tao (Senior Member, IEEE) was born in 1964. He received the B.S. degree from the Electronic Engineering Institute of PLA, Hefei, China, in 1985, and the M.S. and Ph.D. degrees from the Harbin Institute of Technology, Harbin, China, in 1990 and 1993, respectively. In 2001, he was a Senior Visiting Scholar with the University of Michigan at Ann Arbor, Ann Arbor, MI, USA. He is a Professor with the School of Information and Electronics, Beijing Institute of Technology, Beijing, China. He has authored three books and over 100 peer-reviewed journal articles. His research interests include the fractional Fourier transform and its applications, and theory and technology for radar and communication systems. Dr. Tao was a recipient of the National Science Foundation of China award for Distinguished Young Scholars in 2006, the First Prize of Science and Technology Progress in 2006 and 2007, and the First Prize of Natural Science in 2013, all awarded by the Ministry of Education. He was a Distinguished Professor of the Changjiang Scholars Program in 2009, and a Chief Professor of the Program for Changjiang Scholars and Innovative Research Team in University from 2010 to 2012. He has been a Chief Professor of the Creative Research Groups of the National Natural Science Foundation of China since 2014. He is a member of the Wireless Communication and Signal Processing Commission of the International Union of Radio Science (URSI), and the Vice Chair of the IEEE China Council and the URSI China Council. He is an Associate Editor of the IEEE SIGNAL PROCESSING LETTERS.

  • Corresponding author: mengmengzhang@bit.edu.cn
  • Received Date: 2021-12-16
  • Revised Date: 2022-03-20
  • Accepted Date: 2022-03-27
  • Publish Date: 2022-12-25
  • Abstract: Infrared-visible image fusion plays an important role in multi-source data fusion, with the advantage of integrating useful information from multi-source sensors. However, challenges remain in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract targets of interest from the infrared image. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, the target and background regions are fused using a multi-scale transform. Experimental results on public data are used for comparison and evaluation, and demonstrate that the proposed SRF has potential benefits over other methods.
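The abstract's first step (threshold segmentation of bright infrared targets, cleaned up by morphology) can be sketched with a minimal NumPy implementation. This is an illustrative sketch, not the paper's actual method: the function names, the Otsu threshold choice, and the 3×3 structuring element are assumptions for demonstration.

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing between-class variance (Otsu, 1979)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum = np.cumsum(hist)                        # pixel count at or below each level
    cum_mean = np.cumsum(hist * np.arange(256))  # intensity mass at or below each level
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum[t], total - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0                    # mean of the dark class
        m1 = (cum_mean[-1] - cum_mean[t]) / w1   # mean of the bright class
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def _shift_combine(mask, k, op):
    """Combine all k x k neighborhood shifts of a zero-padded binary mask with op."""
    pad = k // 2
    p = np.pad(mask, pad, constant_values=False)
    h, w = mask.shape
    out = p[pad:pad + h, pad:pad + w].copy()
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out = op(out, p[pad + dy:pad + dy + h, pad + dx:pad + dx + w])
    return out

def opening(mask, k=3):
    """Erosion then dilation: removes isolated specks smaller than the k x k element."""
    eroded = _shift_combine(mask, k, np.logical_and)
    return _shift_combine(eroded, k, np.logical_or)

def extract_targets(ir):
    """Binary mask of bright infrared targets: Otsu threshold plus morphological opening."""
    mask = ir > otsu_threshold(ir)
    return opening(mask)
```

A crude region-wise composite could then take infrared values inside the mask and visible values elsewhere; the paper's actual fusion of the two regions via a multi-scale transform is considerably more involved.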
  • [1]
    J. Dong, D. Zhuang, Y. Huang, and J. Fu,“Advances in multisensor data fusion: algorithms and applications,” Sensors, vol. 9, no. 10, pp. 7771-7784, 2009. doi:10.3390/s91007771
    [2]
    J. A. Sobrino, F. Del Frate, M. Drusch, J. C. Jimenez-Munoz, P. Manunta, and A. Regan,“Review of thermal infrared applications and requirements for future high-resolution sensors,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 2963-2972, 2016. doi:10.1109/TGRS.2015.2509179
    [3]
    M. Zhao, L. Li, W. Li, R. Tao, L. Li, and W. Zhang,“Infrared small-target detection based on multiple morphological profiles,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 7, pp. 6077-6091, 2020.
    [4]
    M. Zhao, W. Li, L. Li, P. Ma, Z. Cai, and R. Tao,“Threeorder tensor creation and tucker decomposition for infrared small-target detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-16, 2022.
    [5]
    J. Ma, Y. Ma, and C. Li,“Infrared and visible image fusion methods and applications: a survey,” Information Fusion, vol. 45, pp. 153-178, 2019. doi:10.1016/j.inffus.2018.02.004
    [6]
    D. P. Bavirisetti and R. Dhuli,“Fusion of infrared and visible sensor images based on anisotropic diffusion and karhunen-loeve transform,” IEEE Sensors Journal, vol. 16, no. 1, pp. 203-209, 2016. doi:10.1109/JSEN.2015.2478655
    [7]
    S. Li, X. Kang, and J. Hu,“Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864-2875, 2013. doi:10.1109/TIP.2013.2244222
    [8]
    Z. Zhou, B. Wang, S. Li, and M. Dong,“Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with gaussian and bilateral filters,” Information Fusion, vol. 30, pp. 15-26, 2016. doi:10.1016/j.inffus.2015.11.003
    [9]
    M. J. Tan, S. B. Gao, W. Z. Xu, and S. C. Han,“Visibleinfrared image fusion based on early visual information processing mechanisms,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 11, pp. 4357-4369, 2020.
    [10]
    H. Sun, Q. Liu, J. Wang, J. Ren, Y. Wu, H. Zhao, and H. Li,“Fusion of infrared and visible images for remote detection of low-altitude slow-speed small targets,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 2971-2983, 2021. doi:10.1109/JSTARS.2021.3061496
    [11]
    X. Huang, G. Qi, H. Wei, Y. Chai, and J. Sim,“A novel infrared and visible image information fusion method based on phase congruency and image entropy,” Entropy, vol. 21, no. 12, pp. 1135, 2019. doi:10.3390/e21121135
    [12]
    X. Lu, B. Zhang, Y. Zhao, H. Liu, and H. Pei,“The infrared and visible image fusion algorithm based on target separation and sparse representation,” Infrared Physics & Technology, vol. 67, pp. 397-407, 2014.
    [13]
    J. Wang, J. Peng, X. Feng, G. He, and J. Fan,“Fusion method for infrared and visible images by using non-negative sparse representation,” Infrared Physics & Technology, vol. 67, pp. 477-489, 2014.
    [14]
    J. Cai, Q. Cheng, M. Peng, and Y. Song,“Fusion of infrared and visible images based on nonsubsampled contourlet transform and sparse k-SVD dictionary learning,” Infrared Physics & Technology, vol. 82, pp. 85-95, 2017.
    [15]
    Y. Yang, Y. Zhang, S. Huang, Y. Zuo, and J. Sun,“Infrared and visible image fusion using visual saliency sparse representation and detail injection model,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-15, 2021.
    [16]
    Z . Q. Zhu, H. P. Yin, Y. Chai, Y. X. Li, and G. Q. Qi,“A novel multi-modality image fusion method based on image decomposition and sparse representation,” Information Sciences An International Journal, vol. 432, pp. 516-529, 2018. doi:10.1016/j.ins.2017.09.010
    [17]
    D. P. Bavirisetti, G. Xiao, and G. Liu, “Multi-sensor image fusion based on fourth order partial differential equations, ” in 2017 20th International Conference on Information Fusion (Fusion), 2017, pp. 1–9.
    [18]
    W. Kong, Y. Lei, and H. Zhao,“Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization,” Infrared Physics & Technology, vol. 67, pp. 161-172, 2014.
    [19]
    D. P. Bavirisetti and R. Dhuli,“Two-scale image fusion of visible and infrared images using saliency detection,” Infrared Physics & Technology, vol. 76, pp. 52-64, 2016.
    [20]
    J. Ma, Z. Zhou, B. Wang, and H. Zong,“Infrared and visible image fusion based on visual saliency map and weighted least square optimization,” Infrared Physics & Technology, vol. 82, pp. 8-17, 2017.
    [21]
    Y. Chen and N. Sang,“Attention-based hierarchical fusion of visible and infrared images,” Optik, vol. 126, no. 23, pp. 4243-4248, 2015. doi:10.1016/j.ijleo.2015.08.120
    [22]
    J. Zhao, X. Gao, Y. Chen, H. Feng, and D. Wang,“Multiwindow visual saliency extraction for fusion of visible and infrared images,” Infrared Physics & Technology, vol. 76, pp. 295-302, 2016.
    [23]
    Y. Liu, X. Chen, J. Cheng, H. Peng, and Z. Wang,“Infrared and visible image fusion with convolutional neural networks,” International Journal of Wavelets, Multiresolution and Information Processing, vol. 16, no. 3, pp. 1 850 018–1-1 850 018–20, 2018.
    [24]
    H. Li and X. J. Wu,“Densefuse: A fusion approach to infrared and visible images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614-2623, 2018.
    [25]
    J. Ma, H. Zhang, Z. Shao, P. Liang, and H. Xu,“Ganmcc: A generative adversarial network with multiclassification constraints for infrared and visible image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-14, 2021.
    [26]
    L. Jian, X. Yang, Z. Liu, G. Jeon, M. Gao, and D. Chisholm,“Sedrfuse: A symmetric encoder¨cdecoder with residual block network for infrared and visible image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-15, 2021.
    [27]
    A. Raza, J. Liu, Y. Liu, J. Liu, Z. Li, X. Chen, H. Huo, and T. Fang,“Ir-msdnet: Infrared and visible image fusion based on infrared features and multiscale dense network,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 3426-3437, 2021. doi:10.1109/JSTARS.2021.3065121
    [28]
    Q. Li, L. Lu, Z. Li, W. Wu, Z. Liu, G. Jeon, and X. Yang,“Coupled gan with relativistic discriminators for infrared and visible images fusion,” IEEE Sensors Journal, vol. 21, pp. 7458-7467, 2021.
    [29]
    M. Haspelmath and A. D. Sims, Understanding morphology. 2002. http://www.researchgate.net/publication/271018639_Understanding _Morphology.
    [30]
    S. G. Mallat,“A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.
    [31]
    B. Ashalatha and B. Reddy, “Enhanced pyramid image fusion on visible and infrared images at pixel and feature levels, ” in 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), 2017, pp. 613–618.
    [32]
    J. Chen, X. Li, L. Luo, X. Mei, and J. Ma,“Infrared and visible image fusion based on target-enhanced multiscale transform decomposition,” Information Sciences, vol. 508, pp. 64-78, 2020. doi:10.1016/j.ins.2019.08.066
    [33]
    V. Naidu,“Image fusion technique using multi-resolution singular value decomposition,” Defence Science Journal, vol. 61, no. 5, pp. 479, 2011. doi:10.14429/dsj.61.705
    [34]
    N. Otsu,“A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979. doi:10.1109/TSMC.1979.4310076
    [35]
    P. Perona and J. Malik,“Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990. doi:10.1109/34.56205
    [36]
    D. P. Bavirisetti and R. Dhuli,“Fusion of infrared and visible sensor images based on anisotropic diffusion and karhunen-loeve transform,” IEEE Sensors Journal, vol. 16, no. 1, pp. 203, 2015.
    [37]
    S. Rajiv and K. Ashish,“Multiscale medical image fusion in wavelet domain,” The Scientific World Journal, vol. 2013, pp. 521034, 2013.
    [38]
    M. K. Vairalkar and S. Nimbhorkar,“Edge detection of images using sobel operator,” International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 1, pp. 291-293, 2012.
    [39]
    C. Hao and P. K. Varshney,“A human perception inspired quality metric for image fusion based on regional information,” Information Fusion, vol. 8, no. 2, pp. 193-207, 2007. doi:10.1016/j.inffus.2005.10.001
    [40]
    A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” in IEEE Transactions on Communications, vol. 43, no. 12, pp. 2959-2965, 1995. doi: 10.1109/26.477498.
    [41]
    X. Guo, R. Nie, J. Cao, D. Zhou, L. Mei, and K. He,“Fusegan: Learning to fuse multi-focus image via conditional generative adversarial network,” IEEE Transactions on Multimedia, vol. 21, no. 8, pp. 1982-1996, 2019. doi:10.1109/TMM.2019.2895292
    [42]
    J. Ma, C. Chen, C. Li, and J. Huang,“Infrared and visible image fusion via gradient transfer and total variation minimization,” Information Fusion, vol. 31, pp. 100-109, 2016. doi:10.1016/j.inffus.2016.02.001
    [43]
    Y. Zhang, L. Zhang, X. Bai, and L. Zhang,“Infrared and visual image fusion through infrared feature extraction and visual information preservation,” Infrared Physics & Technology, vol. 83, pp. 227-237, 2017.

    Figures (11) / Tables (6)