image pairs
Recently Published Documents


TOTAL DOCUMENTS: 587 (five years: 226)
H-INDEX: 25 (five years: 6)

2022 ◽  
Author(s):  
Lisa Sophie Kölln ◽  
Omar Salem ◽  
Jessica Valli ◽  
Carsten Gram Hansen ◽  
Gail McConnell

Immunofluorescence (IF) microscopy is routinely used to visualise the spatial distribution of proteins that dictates their cellular function. However, unspecific antibody binding often results in high cytosolic background signals, decreasing the image contrast of a target structure. Recently, convolutional neural networks (CNNs) have been successfully employed for image restoration in IF microscopy, but current methods cannot correct for these background signals. We report a new method that trains a CNN to reduce unspecific signals in IF images; we name this method label2label (L2L). In L2L, a CNN is trained with image pairs of two non-identical labels that target the same cellular structure. We show that, after L2L training, a network predicts images with significantly increased contrast of the target structure, which improves further when a multi-scale structural similarity loss function is used. Our results suggest that the sample differences in the training data reduce the hallucination effects observed with other methods. We further assess the performance of a cycle generative adversarial network and show that a CNN can be trained to separate structures in superposed IF images of two targets.
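A minimal sketch of this kind of training loop is shown below: a small CNN maps an image of one label to the paired image of the second label, and the loss combines a multi-scale structural similarity (MS-SSIM) term with L1. The tiny architecture, the L1 weighting, and the use of the third-party pytorch_msssim package are illustrative assumptions, not the authors' implementation.

```python
# Sketch of L2L-style training on image pairs of two non-identical labels,
# with an MS-SSIM term in the loss (architecture and weights are placeholders).
import torch
import torch.nn as nn
from pytorch_msssim import ms_ssim  # assumed third-party MS-SSIM implementation

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def l2l_loss(pred, target, alpha=0.84):
    """Blend of MS-SSIM and L1, a common pairing for image restoration."""
    return alpha * (1.0 - ms_ssim(pred, target, data_range=1.0)) + \
           (1.0 - alpha) * torch.mean(torch.abs(pred - target))

model = SmallCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
# label_a, label_b: co-registered images of the two labels, scaled to [0, 1]
label_a = torch.rand(4, 1, 256, 256)
label_b = torch.rand(4, 1, 256, 256)
loss = l2l_loss(model(label_a), label_b)
loss.backward()
optim.step()
```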


2022 ◽  
Vol 88 (1) ◽  
pp. 65-72
Author(s):  
Wanxuan Geng ◽  
Weixun Zhou ◽  
Shuanggen Jin

Traditional urban scene-classification approaches focus on images taken from either a satellite or an aerial view. Although single-view images achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. We therefore present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, CILM takes aerial and ground-level image pairs as input, learns view-specific features, and fuses them to integrate the complementary information. To train CILM, a unified loss combining cross-entropy and contrastive losses is used to make the network more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and fused to train a support vector machine classifier. Experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
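The sketch below illustrates one plausible form of such a unified loss: two view-specific encoders, a classifier on the fused features trained with cross-entropy, and a contrastive term that pulls matching aerial/ground pairs together. The encoder shapes, margin, and weighting are assumptions for illustration only.

```python
# Two-view unified loss (cross-entropy + contrastive), schematic only.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes, margin = 128, 10, 1.0
enc_aerial = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim))
enc_ground = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim))
classifier = nn.Linear(2 * feat_dim, num_classes)

def unified_loss(x_a, x_g, labels, same_scene):
    f_a, f_g = enc_aerial(x_a), enc_ground(x_g)
    logits = classifier(torch.cat([f_a, f_g], dim=1))      # fused features -> classes
    ce = F.cross_entropy(logits, labels)
    d = F.pairwise_distance(f_a, f_g)                      # distance between the two views
    contrastive = torch.mean(same_scene * d.pow(2) +
                             (1 - same_scene) * F.relu(margin - d).pow(2))
    return ce + contrastive

x_a = torch.rand(8, 3, 64, 64)          # aerial views
x_g = torch.rand(8, 3, 64, 64)          # ground-level views
labels = torch.randint(0, num_classes, (8,))
same_scene = torch.ones(8)              # 1 if the pair shows the same scene, else 0
loss = unified_loss(x_a, x_g, labels, same_scene)
loss.backward()
```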


2022 ◽  
Vol 88 (1) ◽  
pp. 39-46
Author(s):  
Xinyu Ding ◽  
Qunming Wang

Recently, the method of spatiotemporal spectral unmixing (STSU) was developed to fully exploit multi-scale temporal information (e.g., MODIS–Landsat image pairs) for spectral unmixing of coarse time series (e.g., MODIS data). To further support timely monitoring, the real-time STSU (RSTSU) method was developed for real-time data. In RSTSU, a spatially complete MODIS–Landsat image pair is usually chosen as auxiliary data. Due to cloud contamination, the temporal distance between the required effective auxiliary data and the real-time data to be unmixed can be large, causing large land-cover changes and uncertainty in the extracted unchanged pixels (i.e., training samples). In this article, to extract more reliable training samples, we propose choosing the auxiliary MODIS–Landsat data temporally closest to the prediction time. To deal with cloud contamination in the auxiliary data, we propose an augmented-sample-based RSTSU (ARSTSU) method. ARSTSU selects and augments the training samples extracted from the valid (i.e., cloud-free) area to synthesize more training samples, and then trains an effective learning model to predict the proportions. ARSTSU was validated on two MODIS data sets. It expands the applicability of RSTSU by solving the problem of cloud contamination in temporal neighbors in practical situations.
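A schematic of the sample-selection idea is sketched below: pick the auxiliary image pair closest in time to the prediction date, keep only pixels outside the cloud mask whose coarse reflectance is stable over time, and augment those samples. The variable names, thresholds, and noise-based augmentation are illustrative assumptions, not the authors' exact procedure.

```python
# Schematic ARSTSU-style sample extraction and augmentation (assumed details).
import numpy as np

def pick_auxiliary(aux_dates, prediction_date):
    """Index of the auxiliary image pair temporally closest to the prediction."""
    return int(np.argmin(np.abs(np.asarray(aux_dates) - prediction_date)))

def extract_samples(modis_aux, modis_real, cloud_mask, change_tol=0.05):
    """Keep cloud-free pixels whose coarse reflectance barely changed over time."""
    stable = np.abs(modis_aux - modis_real).mean(axis=-1) < change_tol
    valid = stable & ~cloud_mask
    return modis_aux[valid], modis_real[valid]

def augment(samples, n_extra, noise_std=0.01, rng=np.random.default_rng(0)):
    """Synthesise extra training samples by jittering existing ones with small noise."""
    idx = rng.integers(0, len(samples), size=n_extra)
    return np.concatenate([samples, samples[idx] + rng.normal(0, noise_std, samples[idx].shape)])

# toy data: 100x100 coarse pixels, 6 bands, with a cloudy corner
modis_aux = np.random.rand(100, 100, 6)
modis_real = modis_aux + np.random.normal(0, 0.02, modis_aux.shape)
cloud_mask = np.zeros((100, 100), dtype=bool)
cloud_mask[:20, :20] = True
x, y = extract_samples(modis_aux, modis_real, cloud_mask)
x_aug = augment(x, n_extra=1000)
```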


2021 ◽  
Vol 11 (1) ◽  
pp. 169
Author(s):  
Franziska Schollemann ◽  
Janosch Kunczik ◽  
Henriette Dohmeier ◽  
Carina Barbosa Pereira ◽  
Andreas Follmann ◽  
...  

The number of people suffering from chronic wounds is increasing due to demographic changes and the global epidemics of obesity and diabetes. Innovative imaging techniques are required within chronic wound diagnostics to improve wound care by predicting and detecting wound infections, so that treatments can be applied sooner. For this reason, the infection probability index (IPI) is introduced as a novel infection marker based on thermal wound imaging. To improve usability, IPI scoring was implemented as an automated pipeline. Visual and thermal image pairs of 60 wounds were acquired to test the implemented algorithms on clinical data. The proposed process consists of (1) determining the parameters of the IPI based on medical hypotheses, (2) acquiring data, (3) extracting camera distortions using camera calibration, (4) preprocessing, and (5) automated segmentation of the wound to calculate (6) the IPI. Wound segmentation is reviewed by the user, and the segmented area can be refined manually. In addition to the proof of concept, the correlation of the IPI with C-reactive protein (CRP) levels, a clinical infection marker, was evaluated. The patients were clustered into two groups by average CRP level, using an averaged CRP level of 100 as the separation value. We calculated the IPIs of the 60 wound images based on automated wound segmentation; the average runtime was less than a minute. In the group with lower average CRP, a correlation between IPI and CRP was evident.
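As a rough illustration of steps (3) to (5), the sketch below undistorts a thermal frame with a pre-computed calibration and segments the wound with a simple Otsu threshold as a stand-in for the automated segmentation. The calibration values and thresholding are placeholders, and the IPI itself is not defined in the abstract, so it is left as a stub.

```python
# Placeholder pipeline: undistortion + toy segmentation (OpenCV), IPI left as a stub.
import cv2
import numpy as np

def undistort(image, camera_matrix, dist_coeffs):
    """Step (3)/(4): remove lens distortion estimated by camera calibration."""
    return cv2.undistort(image, camera_matrix, dist_coeffs)

def segment_wound(thermal):
    """Step (5): stand-in automated segmentation (Otsu threshold on the thermal image)."""
    norm = cv2.normalize(thermal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask > 0

# toy inputs: a synthetic thermal frame and a made-up calibration
thermal = np.random.rand(240, 320).astype(np.float32)
K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
dist = np.zeros(5)
mask = segment_wound(undistort(thermal, K, dist))
# Step (6): the IPI would be computed from temperature statistics inside `mask`.
```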


Author(s):  
J. P. M. Hoefnagels ◽  
K. van Dam ◽  
N. Vonk ◽  
L. Jacobs

Abstract. Background: 95% of all metals and alloys are processed using strip rolling, which explains the great number of existing strip-rolling optimization models. Yet an accurate in-situ, full-field experimental method for measuring the deformation, velocity, and strain fields of the strip in the deformation zone is lacking. Objective: Here, a novel time-Integrated Digital Image Correlation (t-IDIC) framework is proposed and validated that fully exploits the notion of continuous, recurring material motion during strip rolling. Methods: High strain accuracy and robustness against unavoidable light reflections and missing speckles are achieved by simultaneously correlating many (e.g., 200) image pairs in a single optimization step; each image pair is correlated with the same average global displacement field, multiplied by a unique velocity corrector to account for differences in material velocity between image pairs. Results: Demonstration on two different strip-rolling experiments revealed previously inaccessible subtle changes in the deformation and strain fields due to minor variations in pre-deformation, elastic recovery, and geometrical irregularities. The influence of the work-roll force and entry/exit strip tension was investigated on an industrial pilot mill, which revealed an unexpected non-horizontal material feed. This asymmetry was reduced by increasing the entry strip tension and rolling force, resulting in a more symmetric strain distribution, while a larger rolling force increased the distance between the neutral and entry points. Conclusions: The proposed t-IDIC method allows robust and accurate characterization of the strip's full-field behavior in the deformation zone during rolling, revealing new insights into the material behavior.
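One way to write this correlation scheme schematically is the objective below: K image pairs share a single average displacement field, and each pair carries its own scalar velocity corrector. The notation is assumed for illustration and is not taken from the paper.

```latex
% Schematic t-IDIC objective: K image pairs (f_k, g_k) share one average
% displacement field U(x); each pair has a scalar velocity corrector v_k.
\min_{\mathbf{U},\,\{v_k\}} \;
\sum_{k=1}^{K} \int_{\Omega}
\Big[\, f_k(\mathbf{x}) - g_k\big(\mathbf{x} + v_k\,\mathbf{U}(\mathbf{x})\big) \,\Big]^{2}
\,\mathrm{d}\mathbf{x}
```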


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Di Wang ◽  
Hongying Zhang ◽  
Yanhua Shao

The precise estimation of camera position and orientation is a crucial step in most machine vision tasks, especially visual localization. To address the weakness of local features under changing scenes and the difficulty of building a robust end-to-end network that works from feature detection through matching, an invariant local feature matching method for changing-scene image pairs is proposed; it is a network that integrates feature detection, descriptor construction, and feature matching. In the feature point detection and descriptor construction stage, joint training is carried out with a neural network. To obtain local features with strong robustness to viewpoint and illumination changes, the Vector of Locally Aggregated Descriptors based on Neural Network (NetVLAD) module is introduced to compute the degree of correlation between the description vectors of one image and those of its counterpart. Then, to strengthen the relationship between relevant local features of image pairs, the attentional graph neural network (AGNN) is introduced, and the Sinkhorn algorithm is used to match them; finally, the local feature matching results between image pairs are output. Experimental results show that, compared with existing algorithms, the proposed method enhances the robustness of local features to varying scenes, performs better in terms of homography estimation, matching precision, and recall, and realizes end-to-end matching when the environmental requirements of a visual localization system are met.
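The Sinkhorn matching step mentioned above can be sketched as follows: given a score matrix between descriptors of the two images, alternating row and column normalisation in log-space yields a soft, approximately doubly-stochastic assignment. The random score matrix and iteration count here are arbitrary choices for illustration.

```python
# Minimal Sinkhorn normalisation of a descriptor similarity matrix.
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Log-domain Sinkhorn normalisation of a similarity matrix."""
    log_p = scores.copy()
    for _ in range(n_iters):
        log_p -= np.log(np.sum(np.exp(log_p), axis=1, keepdims=True))  # normalise rows
        log_p -= np.log(np.sum(np.exp(log_p), axis=0, keepdims=True))  # normalise columns
    return np.exp(log_p)

scores = np.random.randn(128, 128)           # similarities between keypoint descriptors
assignment = sinkhorn(scores)
matches = np.argmax(assignment, axis=1)      # most likely counterpart for each keypoint
```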


2021 ◽  
Author(s):  
Ohsung Oh ◽  
Youngju Kim ◽  
Daeseung Kim ◽  
Daniel S. Hussey ◽  
Seung Wook Lee

Abstract. Grating interferometry is a promising technique to obtain differential phase contrast images with illumination sources of low intrinsic transverse coherence. However, retrieving the phase contrast image from the differential phase contrast image (DPCI) is difficult due to the noise and artifacts accumulated during DPCI reconstruction. In this paper, we implement a deep learning-based phase retrieval method to suppress these artifacts. Conventional deep learning-based denoising requires noisy-clean image pairs, but it is not feasible to obtain a sufficient number of clean images in grating interferometry. We therefore apply a recently developed network, Noise2Noise (N2N), that is trained with noise-noise image pairs. We obtained many differential phase contrast images through combinations of phase-stepping images and used them as noisy input/target pairs for N2N training. Applying the N2N network to simulated and measured DPCIs showed that the phase contrast images were retrieved with strongly suppressed phase retrieval artifacts. These results can be used in grating interferometer applications that use the phase-stepping method.
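The core of N2N training is that the network's target is another noisy realisation of the same scene rather than a clean image, as in the sketch below. The small CNN and MSE loss are generic placeholders, not the authors' architecture.

```python
# Noise2Noise-style training step on noise-noise DPCI pairs (toy tensors).
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 48, 3, padding=1), nn.ReLU(),
    nn.Conv2d(48, 48, 3, padding=1), nn.ReLU(),
    nn.Conv2d(48, 1, 3, padding=1),
)
optim = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
mse = nn.MSELoss()

# noisy_a, noisy_b: two independent noisy DPCIs of the same object,
# e.g. built from different combinations of phase-stepping images.
noisy_a = torch.rand(8, 1, 128, 128)
noisy_b = torch.rand(8, 1, 128, 128)

loss = mse(denoiser(noisy_a), noisy_b)   # noisy target, no clean image needed
loss.backward()
optim.step()
```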


2021 ◽  
Vol 14 (1) ◽  
pp. 87
Author(s):  
Yeping Peng ◽  
Zhen Tang ◽  
Genping Zhao ◽  
Guangzhong Cao ◽  
Chao Wu

Unmanned aerial vehicle (UAV) based imaging has become an attractive technology for monitoring wind turbine blades (WTBs). In such applications, image motion blur is a challenging problem, which makes motion deblurring of great significance for monitoring running WTBs. However, these applications lack sufficient WTB images, in particular pairs of sharp and blurred images captured under the same conditions, for network training. To overcome the challenge of image pair acquisition, a training sample synthesis method is proposed. Sharp images of static WTBs were first captured, and video sequences were then recorded while running the WTBs at different speeds. Blurred images were identified from the video sequences and matched to the sharp images using image differencing. To expand the sample dataset, rotational motion blurs were simulated on different WTBs, and synthetic image pairs were produced by fusing sharp images with the simulated blurs. In total, 4000 image pairs were obtained. For motion deblurring, a hybrid deblurring network integrating DeblurGAN and DeblurGANv2 was deployed. The results show that the integration of DeblurGANv2 and Inception-ResNet-v2 provides better deblurred images, in terms of both signal-to-noise ratio (80.138) and structural similarity (0.950), than the comparable networks DeblurGAN and MobileNet-DeblurGANv2.
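In the spirit of the blur-simulation step, one simple way to synthesise a rotationally blurred counterpart of a sharp image is to average copies rotated about the hub over a small angle range, as sketched below. The angle range, step count, and rotation centre are arbitrary illustrative choices.

```python
# Illustrative rotational-motion-blur synthesis from a sharp image.
import numpy as np
import cv2

def rotational_blur(sharp, center, max_angle_deg=2.0, n_steps=15):
    """Average copies of `sharp` rotated about `center` to mimic blade motion blur."""
    h, w = sharp.shape[:2]
    acc = np.zeros_like(sharp, dtype=np.float64)
    for angle in np.linspace(-max_angle_deg, max_angle_deg, n_steps):
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        acc += cv2.warpAffine(sharp, rot, (w, h)).astype(np.float64)
    return (acc / n_steps).astype(sharp.dtype)

sharp = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in for a sharp WTB image
blurred = rotational_blur(sharp, center=(320.0, 240.0))     # synthetic blurred counterpart
```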


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 40
Author(s):  
Chaowei Duan ◽  
Changda Xing ◽  
Yiliu Liu ◽  
Zhisheng Wang

As a powerful technique for merging the complementary information of source images, infrared (IR) and visible image fusion is widely used in surveillance, target detection, tracking, biological recognition, and other applications. In this paper, an efficient IR and visible image fusion method is proposed that simultaneously enhances the significant targets/regions in all source images and preserves rich background details from the visible images. A multi-scale representation based on the fast global smoother is first used to decompose the source images into base and detail layers, aiming to extract the salient structure information and suppress halos around edges. Then, a target-enhanced parallel Gaussian fuzzy-logic-based fusion rule is proposed to merge the base layers, which avoids brightness loss and highlights significant targets/regions. In addition, a visual-saliency-map-based fusion rule is designed to merge the detail layers in order to retain rich details. Finally, the fused image is reconstructed. Extensive experiments were conducted on 21 image pairs and a Nato-camp sequence (32 image pairs) to verify the effectiveness and superiority of the proposed method. Compared with several state-of-the-art methods, the experimental results demonstrate that the proposed method achieves more competitive or superior performance in terms of both visual results and objective evaluation.
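A bare-bones version of this decomposition/fusion structure is sketched below: each source is split into a base layer (edge-preserving smoothing; a Gaussian blur stands in for the fast global smoother here) and a detail layer, base layers are fused by a brightness-preserving maximum, and detail layers by a crude saliency weight. The stand-in filter and fusion weights are assumptions, not the authors' exact rules.

```python
# Toy base/detail decomposition and fusion for an IR-visible image pair.
import numpy as np
import cv2

def decompose(img, ksize=31, sigma=8):
    base = cv2.GaussianBlur(img, (ksize, ksize), sigma)   # stand-in smoother
    return base, img - base                               # base layer, detail layer

def fuse(ir, vis):
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    base_fused = np.maximum(b_ir, b_vis)                  # keep bright targets/regions
    sal_ir = np.abs(cv2.Laplacian(d_ir, cv2.CV_32F))      # crude visual-saliency weight
    sal_vis = np.abs(cv2.Laplacian(d_vis, cv2.CV_32F))
    w = sal_ir / (sal_ir + sal_vis + 1e-6)
    detail_fused = w * d_ir + (1 - w) * d_vis
    return np.clip(base_fused + detail_fused, 0.0, 1.0)

ir = np.random.rand(256, 256).astype(np.float32)          # infrared source (toy data)
vis = np.random.rand(256, 256).astype(np.float32)         # visible source (toy data)
fused = fuse(ir, vis)
```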

