Feature Point Matching Method Based on Consistent Edge Structures for Infrared and Visible Images

2020 ◽  
Vol 10 (7) ◽  
pp. 2302
Author(s):  
Qi Wang ◽  
Xiang Gao ◽  
Fan Wang ◽  
Zhihang Ji ◽  
Xiaopeng Hu

Infrared and visible image matching is an important research topic in the field of multi-modality image processing. Because disparate spectra produce differences in image content such as pixel intensities and gradients, matching infrared and visible images is challenging in terms of detection repeatability and matching accuracy. To improve matching performance, a feature detection and description method based on consistent edge structures of images (DDCE) is proposed in this paper. First, consistent edge structures are detected to obtain similar content in the infrared and visible images. Second, common feature points of the two images are extracted based on the consistent edge structures. Third, feature descriptions are established according to edge structure attributes, including edge length and edge orientation. Lastly, feature correspondences are calculated according to the distance between feature descriptions. By exploiting the consistent edge structures of infrared and visible images, the proposed DDCE method improves both detection repeatability and matching accuracy. DDCE is evaluated on two public datasets and compared with several state-of-the-art methods. Experimental results demonstrate that DDCE achieves superior performance over other methods for infrared and visible image matching.
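The four steps above (detect consistent edges, extract common points, describe by edge length and orientation, match by descriptor distance) can be sketched in Python. The descriptor layout here (an orientation one-hot plus a log-length term) is a hypothetical simplification for illustration, not the paper's exact DDCE formulation:

```python
import numpy as np

def edge_descriptor(length, orientation, n_bins=8):
    """Toy edge-structure descriptor: an orientation histogram bin
    (one-hot over n_bins) plus a log-scaled edge-length term."""
    desc = np.zeros(n_bins + 1)
    b = int((orientation % np.pi) / (np.pi / n_bins)) % n_bins
    desc[b] = 1.0
    desc[-1] = np.log1p(length)
    return desc

def match_descriptors(descs_a, descs_b):
    """Nearest-neighbour matching by Euclidean descriptor distance."""
    matches = []
    for i, da in enumerate(descs_a):
        dists = [np.linalg.norm(da - db) for db in descs_b]
        matches.append((i, int(np.argmin(dists))))
    return matches
```

Edges with similar length and orientation yield nearby descriptors, so points lying on the consistent edge structures of the two modalities end up matched.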


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yuqing Zhao ◽  
Guangyuan Fu ◽  
Hongqiao Wang ◽  
Shaolei Zhang

Visible images contain clear texture information and high spatial resolution but are unreliable at night or under ambient occlusion. Infrared images can display target thermal radiation information by day and night, in varied weather, and under ambient occlusion, but they often lack good contour and texture information. Consequently, an increasing number of researchers fuse visible and infrared images to obtain more information from them, which requires two completely matched images; in practice, however, perfectly matched visible and infrared image pairs are difficult to obtain. In view of these issues, we propose a new network model based on generative adversarial networks (GANs) to fuse unmatched infrared and visible images. Our method generates the corresponding infrared image from a visible image and fuses the two images to obtain more information. The effectiveness of the proposed method is verified qualitatively and quantitatively through experiments on public datasets. In addition, the fused images generated by the proposed method contain more abundant texture and thermal radiation information than those of other methods.



Author(s):  
Michael K. Kundmann ◽  
Ondrej L. Krivanek

Parallel detection has greatly improved the elemental detection sensitivities attainable with EELS. An important element of this advance has been the development of differencing techniques which circumvent limitations imposed by the channel-to-channel gain variation of parallel detectors. The gain variation problem is particularly severe for detection of the subtle post-threshold structure comprising the EXELFS signal. Although correction techniques such as gain averaging or normalization can yield useful EXELFS signals, these are not ideal solutions. The former is a partial throwback to serial detection and the latter can only achieve partial correction because of detector cell inhomogeneities. We consider here the feasibility of using the difference method to efficiently and accurately measure the EXELFS signal.

An important distinction between the edge-detection and EXELFS cases lies in the energy-space periodicities which comprise the two signals. Edge detection involves the near-edge structure and its well-defined, short-period (5-10 eV) oscillations. On the other hand, EXELFS has continuously changing long-period oscillations (∼10-100 eV).



1988 ◽  
Vol 55 (4) ◽  
pp. 579-583 ◽  
Author(s):  
Lucas Dominguez ◽  
José Francisco Fernández ◽  
Victor Briones ◽  
José Luis Blanco ◽  
Guillermo Suárez

Summary

Different selective agar media were compared for the recovery and isolation of five species of Listeria from raw milk and cheese. The selective media examined were Beerens medium, MacBride medium and that described by Dominguez et al. (1984) with 6 mg/l acriflavine, listeria selective agar medium (LSAM), and LSAM with 12 mg/l acriflavine (LSAM × 2A); a non-selective yeast glucose Lemco agar was included for comparison. When the difference between listeria and the natural microflora of raw milk and cheese was 10² cfu/ml, listeria could be isolated by direct plating on all media tested. When it was lower than 10³–10⁴ cfu/ml, listeria were isolated by direct plating only on LSAM and LSAM × 2A. When the difference was greater than 10⁴ cfu/ml, a previous enrichment was necessary to isolate them. LSAM and LSAM × 2A media performed better than the other media tested for isolating listeria by direct plating and improved their isolation from dairy products. This superior performance was evaluated by the ability of these media to support colony formation of the different species of Listeria tested, the easy recognition of these colonies among those formed by other microorganisms, and their capacity to inhibit the natural microflora of these foods.



2012 ◽  
Vol 500 ◽  
pp. 383-389 ◽  
Author(s):  
Kai Wei Yang ◽  
Tian Hua Chen ◽  
Su Xia Xing ◽  
Jing Xian Li

In target tracking and recognition systems, infrared sensors and visible-light sensors are two of the most commonly used sensor types; effective fusion of these two image modalities can greatly enhance the accuracy and reliability of identification. We improve the registration accuracy of infrared and visible-light images by modifying the SIFT algorithm, allowing infrared and visible images to be registered more quickly and accurately. The method produces good registration results through infrared image histogram equalization, a reasonable reduction of the Gaussian blur levels in the pyramid-construction stage of the SIFT algorithm, appropriate threshold adjustments, and limiting the range of gradient directions used in the descriptor. The resulting features are invariant to rotation, image scale, and changes in illumination.
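The histogram-equalization preprocessing applied to the infrared image can be sketched in NumPy; this is a generic equalization routine, not the authors' exact implementation:

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image: build the
    cumulative distribution of pixel values and remap it to [0, 255]."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # lookup table stretching the occupied intensity range to full scale
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

Equalizing the infrared image spreads its typically narrow intensity range, which makes its local gradients more comparable to those of the visible image before SIFT keypoint detection.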



Author(s):  
Han Xu ◽  
Pengwei Liang ◽  
Wei Yu ◽  
Junjun Jiang ◽  
Jiayi Ma

In this paper, we propose a new end-to-end model, called dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike the pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through the adversarial process between a generator and two discriminators, in addition to the specially designed content loss. The generator is trained to generate real-like fused images to fool discriminators. The two discriminators are trained to calculate the JS divergence between the probability distribution of downsampled fused images and infrared images, and the JS divergence between the probability distribution of gradients of fused images and gradients of visible images, respectively. Thus, the fused images can compensate for the features that are not constrained by the single content loss. Consequently, the prominence of thermal targets in the infrared image and the texture details in the visible image can be preserved or even enhanced in the fused image simultaneously. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN can be preferably applied to the fusion of different resolution images. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state-of-the-art.
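The two constraints the discriminators enforce (downsampled fused image vs. low-resolution infrared, gradients of fused vs. visible) can be sketched as a content loss in NumPy. Average-pooling for downsampling and forward differences for gradients are plausible assumptions here, not confirmed details of DDcGAN:

```python
import numpy as np

def avg_pool(img, k):
    """Downsample by factor k with average pooling (crop to a multiple of k)."""
    h, w = img.shape
    h, w = h // k * k, w // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def gradients(img):
    """Forward-difference gradients, padded to keep the input shape."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def content_loss(fused, infrared_lr, visible, lam=1.0, k=4):
    """Frobenius distance of the downsampled fused image to the low-res
    infrared image, plus gradient distance of fused to visible."""
    term_ir = np.linalg.norm(avg_pool(fused, k) - infrared_lr)
    gfx, gfy = gradients(fused)
    gvx, gvy = gradients(visible)
    term_vis = np.linalg.norm(gfx - gvx) + np.linalg.norm(gfy - gvy)
    return term_ir + lam * term_vis
```

A fused image that reproduces the visible image's texture and whose downsampled intensities match the infrared image drives both terms toward zero, which is the behavior the two discriminators reward.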



2006 ◽  
Vol 22 (4) ◽  
pp. 1081-1101 ◽  
Author(s):  
Bruce F. Maison ◽  
Kazuhiko Kasai ◽  
Yoji Ooki

Seismic behaviors of a five-story welded steel moment-frame (WSMF) office building in Kobe, Japan, and a six-story WSMF office building in Northridge, California, are compared. Both experienced earthquake damage (1995 Kobe and 1994 Northridge earthquakes, respectively). Computer models of the buildings are formulated, having the ability to simulate damage in terms of fractured moment connections. Analyses are conducted to assess building response during the earthquakes. The calibrated models are then analyzed using a suite of earthquake records to compare building performance under consistent demands. The Kobe building is found to be more rugged than the Northridge building. Analysis suggests it would experience much less damage than the Northridge building from shaking equivalent to a 2,500-year earthquake for a generic Los Angeles site. The superior performance of the Kobe building is attributed to its relatively greater stiffness and strength. The results provide insight into the difference in seismic fragility expected for this class of mid-rise WSMF buildings in Japan and the United States.



2020 ◽  
Vol 39 (3) ◽  
pp. 4617-4629
Author(s):  
Chengrui Gao ◽  
Feiqiang Liu ◽  
Hua Yan

Infrared and visible image fusion refers to the technology that merges the visual details of visible images and thermal feature information of infrared images; it has been extensively adopted in numerous image processing fields. In this study, a dual-tree complex wavelet transform (DTCWT) and convolutional sparse representation (CSR)-based image fusion method was proposed. In the proposed method, the infrared images and visible images were first decomposed by dual-tree complex wavelet transform to characterize their high-frequency bands and low-frequency band. Subsequently, the high-frequency bands were enhanced by guided filtering (GF), while the low-frequency band was merged through convolutional sparse representation and choose-max strategy. Lastly, the fused images were reconstructed by inverse DTCWT. In the experiment, the objective and subjective comparisons with other typical methods proved the advantage of the proposed method. To be specific, the results achieved using the proposed method were more consistent with the human vision system and contained more texture detail information.
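The choose-max rule used when merging sub-band coefficients is simple to express in NumPy; this sketch covers only that rule, not the DTCWT decomposition, guided filtering, or CSR machinery:

```python
import numpy as np

def choose_max(band_a, band_b):
    """At each location keep the coefficient with the larger absolute
    value, i.e. the one carrying the stronger local activity."""
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)
```

Applied per sub-band, this keeps the sharper of the two source responses at every position, which is why choose-max strategies preserve edges and texture from whichever modality expresses them more strongly.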



2018 ◽  
Vol 25 (4) ◽  
pp. 1129-1134 ◽  
Author(s):  
Tsubasa Tobase ◽  
Akira Yoshiasa ◽  
Tatsuya Hiratoko ◽  
Akihiko Nakatsuka

Pre-edge peaks in 3d transition-metal element (Sc, Ti, V, Cr and Mn) K-edge XANES (X-ray absorption near-edge structure) spectra in AO2 (A = Ti and V), A2O3 (A = Sc, Cr and Mn) and AO (A = Mn) are measured at various temperatures. Quantitative comparisons for the XANES spectra were investigated by using absorption intensity invariant point normalization. The energy position of the difference peak (D peak) is obtained from the difference between the low- and high-temperature XANES spectra. There are two kinds of temperature dependence for pre-edge peak intensity: rutile- and anatase-type. The true temperature dependence of a transition to each orbital is obtained from the difference spectrum. In both anatase and rutile, the pre-edge peak positions of A2 and A3 are clearly different from the D1- and D2-peak positions. The A1 peak-top energies in both phases of VO2 differ from the D1 peak-top energies. The D-peak energy position determined by the difference spectrum should represent one of the true energies for the transition to an independent orbital. The peak-top positions for pre-edge peaks in XANES do not always represent the true energy for independent transitions to orbitals because several orbital transitions overlap with similar energies. This work suggests that deformation vibration (bending mode) is effective in determining the temperature dependence for the D-peak intensity.
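Extracting a D-peak position from two temperature spectra is a small numerical step that can be sketched as follows; the invariant-point normalization described above is assumed to have been applied to both spectra already:

```python
import numpy as np

def d_peak(energy, spec_low_t, spec_high_t):
    """Difference spectrum between low- and high-temperature XANES;
    returns the energy of the largest-magnitude difference (D peak)
    together with the full difference spectrum."""
    diff = spec_low_t - spec_high_t
    return energy[int(np.argmax(np.abs(diff)))], diff
```

Because overlapping orbital transitions shift the apparent pre-edge peak tops, the difference spectrum isolates the temperature-dependent component and its maximum marks a transition energy more reliably than the raw peak top.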



2012 ◽  
Vol 04 (02) ◽  
pp. 1250024 ◽  
Author(s):  
MIRELA DAMIAN ◽  
KRISTIN RAUDONIS

Yao and Theta graphs are defined for a given point set and a fixed integer k > 0. The space around each point is divided into k cones of equal angle, and each point is connected to a nearest neighbor in each cone. The difference between Yao and Theta graphs is in the way the nearest neighbor is defined: Yao graphs minimize the Euclidean distance between a point and its neighbor, and Theta graphs minimize the Euclidean distance between a point and the orthogonal projection of its neighbor on the bisector of the hosting cone. We prove that, corresponding to each edge of the Theta graph Θ6, there is a path in the Yao graph Y6 whose length is at most 8.82 times the edge length. Combined with the result of Bonichon et al., who prove an upper bound of 2 on the stretch factor of Θ6, we obtain an upper bound of 17.64 on the stretch factor of Y6.
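The two neighbor-selection rules can be sketched directly from the definitions; `nearest_in_cones` below is an illustrative helper, not code from the paper:

```python
import numpy as np

def _cone(p, q, k):
    """Index of the cone around p (k cones of angle 2*pi/k) containing q."""
    ang = np.arctan2(q[1] - p[1], q[0] - p[0]) % (2 * np.pi)
    return int(ang // (2 * np.pi / k))

def nearest_in_cones(p, others, k=6, theta=False):
    """Per cone: Yao (theta=False) keeps the Euclidean-nearest point;
    Theta (theta=True) keeps the point whose orthogonal projection onto
    the cone bisector is nearest to p."""
    best = {}
    for q in others:
        c = _cone(p, q, k)
        if theta:
            bis = (c + 0.5) * 2 * np.pi / k  # bisector direction of cone c
            d = (q[0] - p[0]) * np.cos(bis) + (q[1] - p[1]) * np.sin(bis)
        else:
            d = np.hypot(q[0] - p[0], q[1] - p[1])
        if c not in best or d < best[c][0]:
            best[c] = (d, q)
    return {c: q for c, (d, q) in best.items()}
```

A point near the cone boundary has a shorter projection than its Euclidean length (proj = r·cos Δ with Δ up to half the cone angle), which is exactly where the Yao and Theta choices can diverge.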



2020 ◽  
Author(s):  
Xiaoxue XING ◽  
Cheng LIU ◽  
Cong LUO ◽  
Tingfa XU

Abstract

In Multi-scale Geometric Analysis (MGA)-based fusion methods for infrared and visible images, adopting the same representation for both image types leaves the thermal radiation target inconspicuous in the fused image, where it can hardly be distinguished from the background. To solve this problem, a novel fusion algorithm based on nonlinear enhancement and Non-Subsampled Shearlet Transform (NSST) decomposition is proposed. First, NSST is used to decompose the two source images into low- and high-frequency sub-bands. Then, the Wavelet Transform (WT) is used to decompose the high-frequency sub-bands into approximate sub-bands and directional detail sub-bands. The "average" fusion rule is applied to the approximate sub-bands, and the "max-absolute" fusion rule is applied to the directional detail sub-bands; the inverse WT then reconstructs the fused high-frequency sub-bands. To highlight the thermal radiation target, we construct a non-linear transform function that determines the fusion weight of the low-frequency sub-bands, and whose parameters can be adjusted to meet different fusion requirements. Finally, the inverse NSST reconstructs the fused image. Experimental results show that the proposed method simultaneously enhances the thermal target from the infrared image and preserves the texture details of the visible image, and that it is competitive with or even superior to state-of-the-art fusion methods in both visual and quantitative evaluations.
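The low-frequency fusion weight driven by a non-linear transform, together with the "max-absolute" rule for detail sub-bands, can be sketched as follows; the sigmoid-shaped transform and its parameters are assumptions standing in for the paper's unspecified function:

```python
import numpy as np

def nonlinear_weight(lf_ir, alpha=10.0, beta=0.5):
    """Sigmoid-shaped weight on the normalized infrared low-frequency
    band: bright (hot) regions get a weight near 1. alpha controls the
    steepness and beta the midpoint; both are tunable, as in the text."""
    x = (lf_ir - lf_ir.min()) / (np.ptp(lf_ir) + 1e-12)
    return 1.0 / (1.0 + np.exp(-alpha * (x - beta)))

def fuse_low(lf_ir, lf_vis, **kw):
    """Weighted blend of the low-frequency sub-bands."""
    w = nonlinear_weight(lf_ir, **kw)
    return w * lf_ir + (1 - w) * lf_vis

def fuse_high(hf_a, hf_b):
    """'Max-absolute' rule for directional detail sub-bands."""
    return np.where(np.abs(hf_a) >= np.abs(hf_b), hf_a, hf_b)
```

The steep sigmoid pushes thermal-target pixels toward the infrared band while background pixels inherit the visible band, which is how the fused image keeps the target prominent without washing out texture.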


