composite image
Recently Published Documents

TOTAL DOCUMENTS: 169 (FIVE YEARS: 48)
H-INDEX: 12 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Chandler Dean Gatenbee ◽  
Ann-Marie Baker ◽  
Sandhya Prabhakaran ◽  
Robbert J.C. Slebos ◽  
Gunjan Mandal ◽  
...  

Spatial analyses can reveal important interactions between and among cells and their microenvironment. However, most existing staining methods are limited to a handful of markers per slice, thereby limiting the number of interactions that can be studied. This limitation is frequently overcome by registering multiple images to create a single composite image containing many markers. While there are several existing image registration methods for whole slide images (WSI), most are built around specific use cases. Here, we present the Virtual Alignment of pathoLogy Image Series (VALIS), a fully automated pipeline that opens, registers (rigid and/or non-rigid), and saves aligned slides in the ome.tiff format. VALIS has been tested with 273 immunohistochemistry (IHC) samples and 340 immunofluorescence (IF) samples, each of which contained between 2 and 69 images per sample. The registered WSI tend to have low error, and registration completes within a matter of minutes. In addition to registering slides, VALIS can also use the registration parameters to warp point data, such as cell centroids previously determined via cell segmentation and phenotyping. VALIS is written in Python and requires only a few lines of code to execute. VALIS therefore provides a free, open-source, flexible, and simple pipeline for rigid and non-rigid registration of IF and/or IHC images that can facilitate spatial analyses of WSI from novel and existing datasets.
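
VALIS is distributed as a Python package, and a typical run needs only a handful of calls. The sketch below follows the package's documented quick-start pattern as a minimal illustration; the directory paths are placeholders, and the exact class and method names (registration.Valis, register, warp_and_save_slides, kill_jvm) should be verified against the installed version.

```python
from valis import registration

slide_dir = "path/to/slides"      # placeholder: directory of slides to align
results_dir = "path/to/results"   # placeholder: where registration output goes

# Build the registrar and run rigid followed by non-rigid registration.
registrar = registration.Valis(slide_dir, results_dir)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

# Warp the full-resolution slides and save them as ome.tiff files.
registrar.warp_and_save_slides("path/to/registered_slides")

# VALIS reads slides through Bio-Formats, so shut down the JVM when done.
registration.kill_jvm()
```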


2021 ◽  
Author(s):  
Eliana Lima da Fonseca ◽  
Edvan Casagrande dos Santos ◽  
Anderson Ribeiro de Figueiredo ◽  
Jefferson Cardia Simoes

Antarctic vegetation maps are usually made using very high-resolution images collected by orbital sensors or unmanned aerial vehicles, generating isolated maps whose information is valid only for the time of image acquisition. In the context of global environmental change, mapping the current Antarctic vegetation distribution on a regular basis is necessary for a better understanding of changes in this fragile environment. This work aimed to generate validated vegetation maps for the northern Antarctic Peninsula and the South Shetland Islands based on Sentinel-2 images using cloud processing. Sentinel-2 Level-1C imagery acquired between 2016 and 2021 (January to April) was used. Land pixels were masked with the minimum-value composite image of the "water vapor" band. The NDVI maximum-value composite image was sliced, and its classes were associated with the occurrence of algae (0.15-0.20), lichens (0.20-0.50), and mosses (0.50-0.80). The vegetation map was validated by comparing it with maps from the literature. The present study showed that Sentinel-2 images allow a validated vegetation-type distribution map to be built for the Antarctic Peninsula and the South Shetland Islands.
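
The NDVI slicing step can be expressed compactly; the sketch below applies the thresholds reported in the abstract to NDVI computed from near-infrared and red reflectance. It is illustrative only: the band names follow the Sentinel-2 convention (B8 for NIR, B4 for red), the input arrays are random placeholders, and the authors' actual cloud-processing workflow is not reproduced.

```python
import numpy as np

def classify_vegetation(nir, red):
    """Slice a maximum-value NDVI composite into vegetation classes using
    the thresholds quoted in the abstract (algae, lichens, mosses)."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero
    classes = np.zeros(ndvi.shape, dtype=np.uint8)       # 0 = non-vegetated
    classes[(ndvi >= 0.15) & (ndvi < 0.20)] = 1          # algae
    classes[(ndvi >= 0.20) & (ndvi < 0.50)] = 2          # lichens
    classes[(ndvi >= 0.50) & (ndvi <= 0.80)] = 3         # mosses
    return ndvi, classes

# Placeholder reflectance arrays standing in for Sentinel-2 bands B8 and B4
nir = np.random.rand(100, 100).astype(np.float32)
red = np.random.rand(100, 100).astype(np.float32)
ndvi, veg_map = classify_vegetation(nir, red)
```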


2021 ◽  
Author(s):  
Hamidullah Binol ◽  
M. Khalid Khan Niazi ◽  
Charles Elmaraghy ◽  
Aaron C Moberly ◽  
Metin N Gurcan

Background: The lack of an objective method to evaluate the eardrum is a critical barrier to accurate diagnosis. Eardrum images are classified into normal or abnormal categories with machine learning techniques, but if the input is an otoscopy video, a traditional approach requires great effort and expertise to manually determine the representative frame(s). Methods: In this paper, we propose a novel deep learning-based method, called OtoXNet, which automatically learns features for eardrum classification from otoscope video clips. We utilized multiple composite image generation methods to construct a highly representative version of otoscopy videos to diagnose three major eardrum diseases, i.e., otitis media with effusion, eardrum perforation, and tympanosclerosis, versus normal (healthy). We compared the performance of OtoXNet against methods that use either a single composite image or a keyframe selected by an experienced human. Our dataset consists of 394 otoscopy videos from 312 patients and 765 composite images before augmentation. Results: OtoXNet with multiple composite images achieved 84.8% class-weighted accuracy with a 3.8% standard deviation, whereas with human-selected keyframes and single composite images the accuracies were 81.8% ± 5.0% and 80.1% ± 4.8%, respectively, on the multi-class eardrum video classification task using an 8-fold cross-validation scheme. A paired t-test shows a statistically significant difference (p-value of 1.3 × 10⁻²) between the performance of OtoXNet with multiple composite images and that of the human-selected keyframes. In contrast, the difference in means between the keyframe and single-composite approaches was not significant (p = 5.49 × 10⁻¹). OtoXNet also surpasses the baseline approaches in qualitative results. Conclusion: Using multiple composite images to analyze eardrum abnormalities is advantageous compared to using single composite images or manual keyframe selection.
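
As a rough illustration of how a single composite image can be distilled from an otoscopy clip (this is a generic stand-in, not OtoXNet's composite-generation methods), the sketch below takes a per-pixel median over video frames with OpenCV; the video file name is hypothetical.

```python
import cv2
import numpy as np

def median_composite(video_path, max_frames=120):
    """Build a simple per-pixel median composite from a video clip.
    Illustrative stand-in only, not the paper's composite method."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    if not frames:
        raise ValueError("no frames read from " + video_path)
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

composite = median_composite("otoscopy_clip.mp4")  # hypothetical file name
cv2.imwrite("composite.png", composite)
```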


2021 ◽  
Vol 11 (8) ◽  
pp. 2071-2079
Author(s):  
Kiranmai Bellam ◽  
N. Krishnaraj ◽  
T. Jayasankar ◽  
N. B. Prakash ◽  
G. R. Hemalakshmi

Multimodal medical imaging is an indispensable requirement in the treatment of various pathologies to accelerate care. Rather than discrete images, a composite image combining complementary features from multimodal images is highly informative for clinical examinations, surgical planning, and progress monitoring. In this paper, a deep learning fusion model is proposed for the fusion of multimodal medical images. Based on pyramidal and residual learning units, the proposed model, strengthened with adaptive fusion rules, is tested on image pairs from a standard dataset. The potential of the proposed model for enhanced image examinations is shown by fusion studies, with the deep network's fused images and quantitative output metrics, for magnetic resonance imaging and positron emission tomography (MRI/PET) and magnetic resonance imaging and single-photon emission computed tomography (MRI/SPECT). Testing is performed on 20 pairs of MRI/SPECT and MRI/PET images. The proposed fusion model achieves Structural Similarity Index Measure (SSIM) values of 0.9502 and 0.8103 for the MRI/SPECT and MRI/PET image sets, respectively, signifying the perceptual visual consistency of the fused images. Similarly, Mutual Information (MI) values of 2.7455 and 2.7776 were obtained for the MRI/SPECT and MRI/PET image sets, indicating the model's ability to carry the information content of the source images into the composite image. Further, the proposed model allows its variants to be deployed, introducing refinements to the basic model that suit the fusion of low- and high-resolution medical images of diverse modalities.
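
The two reported metrics can be computed as follows. This is a generic sketch using scikit-image for SSIM and a NumPy joint histogram for mutual information, not the authors' evaluation code; the image arrays are placeholders.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two grayscale images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# `fused` and `reference` are 2-D arrays scaled to [0, 1] (placeholders here)
fused = np.random.rand(256, 256)
reference = np.random.rand(256, 256)
ssim = structural_similarity(fused, reference, data_range=1.0)
mi = mutual_information((fused * 255).astype(np.uint8),
                        (reference * 255).astype(np.uint8))
```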


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Pouyan Vakili-Gilani ◽  
Saeid Tavanafar ◽  
Abdul Rahman Mohammad Saleh ◽  
Hamideh Karimpour

Abstract Background The shaping ability of a file plays an important role during instrumentation in endodontic treatment. This study aimed to compare the shaping ability of OneShape (OS), Hero Shaper (HS), and Revo-S (RS) instruments in simulated L-shaped canals. Methods Forty-eight simulated L-shaped canals were prepared to an apical size of 25 using the OS, HS, and RS systems (all from Micro-Mega SA, Besançon, France) (n = 16 canals/group). The amount of resin removed after each canal's preparation was measured and compared after producing a composite image made from the superimposition of the pre- and post-instrumentation canals. Canal aberrations and the preparation time were also recorded. The data were statistically analysed using ANOVA, Tukey, and chi-square tests. Results One file fractured during instrumentation in the RS group. A significant difference was found at the apical end of the prepared simulated canals between the groups, with RS showing the least amount of resin removal from the inner side of the canals and HS showing the highest amount of resin removal from the outer side (P < 0.05). Regarding the total width of the canals after preparation, a significant difference was found between the groups at the apical end and the straight portion of the canals, and RS removed the least amount of resin at the straight portion of the canals (P < 0.05). No statistically significant differences were found between the instruments regarding the incidence of canal aberrations (P > 0.05). Conclusions All of the files showed a tendency to straighten the canals, whereas OS files maintained the original canal curvatures well.
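
A minimal sketch of the superimposition idea from the Methods is given below: the pre- and post-instrumentation photographs of the same simulated canal (assumed already aligned) are binarised and overlaid so that newly removed resin appears as the region present only after preparation. File names and the threshold value are placeholders, and the measurement protocol here is illustrative rather than the authors' procedure.

```python
import cv2
import numpy as np

# Pre- and post-instrumentation photographs of one simulated canal,
# assumed already aligned (file names are placeholders).
pre = cv2.imread("canal_pre.png", cv2.IMREAD_GRAYSCALE)
post = cv2.imread("canal_post.png", cv2.IMREAD_GRAYSCALE)
if pre is None or post is None:
    raise FileNotFoundError("replace the placeholder file names with real images")

# Binarise: dark pixels are the canal lumen (threshold is an illustrative value).
_, pre_mask = cv2.threshold(pre, 128, 255, cv2.THRESH_BINARY_INV)
_, post_mask = cv2.threshold(post, 128, 255, cv2.THRESH_BINARY_INV)

# Composite overlay: removed resin = canal area present only after preparation.
removed = cv2.bitwise_and(post_mask, cv2.bitwise_not(pre_mask))
overlay = cv2.merge([pre_mask, removed, np.zeros_like(pre_mask)])
cv2.imwrite("superimposed_canals.png", overlay)

print("removed resin area (pixels):", int(np.count_nonzero(removed)))
```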


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Wen Liu ◽  
Feng Qiu ◽  
Xi Zeng

Before the advent of digital image processing technology, image composition relied primarily on human vision to identify colors and convert them manually. Manually synthesizing and recoloring images, however, consumes considerable manpower, time, and energy, and the limitations of manual work mean that the resulting pictures often cannot meet people's needs. To improve the speed and quality of image synthesis, and to produce the required pictures more quickly and accurately, this article synthesizes images based on a movement calculation across the selected area of the image and analyzes the photographic darkroom special effects of the synthesized images to simulate artistic effects. Using methods such as case analysis and literature analysis, a database was collected and a model for recognizing simulated photographic darkroom special effects was built. The study found that composite images produced by the movement calculation across the selected image area are better than composite images from other algorithms, with hue and saturation quality more than 30% higher than other synthesis methods, and experimental verification shows the differences are significant. This indicates that compositing based on the movement calculation across the selected image area can achieve good results in simulating photographic darkroom special effects.
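
The abstract does not specify the compositing operation in detail. As a generic illustration of region-based compositing only (not the paper's "movement calculation"), the sketch below alpha-blends a selected area of one image onto another; all arrays and the blending weight are placeholders.

```python
import numpy as np

def blend_region(background, foreground, mask, alpha=0.7):
    """Alpha-blend the masked region of `foreground` onto `background`.
    `background` and `foreground` are float32 HxWx3 arrays in [0, 1];
    `mask` is an HxW array in [0, 1] marking the selected area."""
    m = mask[..., None] * alpha
    return background * (1.0 - m) + foreground * m

bg = np.random.rand(240, 320, 3).astype(np.float32)  # placeholder images
fg = np.random.rand(240, 320, 3).astype(np.float32)
sel = np.zeros((240, 320), dtype=np.float32)
sel[60:180, 80:240] = 1.0                             # the selected area
composite = blend_region(bg, fg, sel)
```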


Author(s):  
Adam H Sprott ◽  
Joseph M Piwowar

In order to understand how a forest may respond to environmental changes or develop over time, it is necessary to examine broad, landscape-level factors. With the arrival of unmanned aerial vehicles (UAVs), combining spaceborne data with high-resolution UAV data can provide foresters and biologists with powerful tools to classify canopies to the species level, as we illustrate here. We combine imagery from the Operational Land Imager (OLI) of the Landsat 8 satellite with aerial imagery from a Phantom 4 UAV to map the canopy composition of three tree species. We manually delineated dense stands of each tree species in the UAV imagery to extract training samples from an OLI true-colour composite image, performed a fuzzy membership analysis, and calculated the maximum likelihood that an individual pixel represented a particular species. We verified the accuracy of our analysis, finding an overall accuracy of 0.796 and a kappa statistic of 0.728. We consider these results a strong demonstration of the value of using UAV and satellite imagery in tandem to investigate forest-wide effects at the individual-tree level.
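
The classification and validation steps can be sketched as follows. This is a generic stand-in for the maximum-likelihood step using scikit-learn's Gaussian class-conditional classifier and Cohen's kappa, not the authors' workflow; the training pixels are random placeholders, and a proper assessment would use held-out reference pixels rather than the training set.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score, cohen_kappa_score

# X: per-pixel OLI band values from the delineated training stands (placeholders);
# y: species label for each training pixel (0, 1, 2 for the three tree species).
rng = np.random.default_rng(0)
X = rng.random((600, 3))
y = rng.integers(0, 3, 600)

# A Gaussian class-conditional model is a common stand-in for
# maximum-likelihood classification of multispectral pixels.
clf = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)
pred = clf.predict(X)

print("overall accuracy:", accuracy_score(y, pred))
print("kappa:", cohen_kappa_score(y, pred))
```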


2021 ◽  
Author(s):  
Pooyan Vakili-Gilani ◽  
Saeid Tavanafar ◽  
Abdulrahman Mohammad Saleh

Abstract Background This study aimed to compare the shaping ability of OneShape, Hero Shaper, and Revo-S instruments in simulated L-shaped canals. Methods Forty-eight simulated L-shaped canals were prepared to an apical size of 25 using the OneShape, Hero Shaper, and Revo-S systems (all from Micro-Mega SA, Besançon, France) (n = 16 canals/group). The amount of resin removed after each canal's preparation was measured and compared after producing a composite image made from the superimposition of the pre- and post-instrumentation canals. Canal aberrations and the preparation time were also recorded. The data were statistically analyzed using ANOVA, Tukey, and Chi-square tests. Results One file fractured during instrumentation in the Revo-S group. A significant difference was found at the apical end of the prepared simulated canals between the groups, with Revo-S showing the least amount of resin removal from the inner side of the canals and Hero Shaper showing the highest amount of resin removal from the outer side (P < 0.05). Regarding the total width of the canals after preparation, a significant difference was found between the groups at the apical end and the straight portion of the canals, and Revo-S removed the least amount of resin at the straight portion of the canals (P < 0.05). No statistically significant differences were found between the instruments regarding the incidence of canal aberrations (P > 0.05). Conclusions All of the files showed a tendency to straighten the canals, whereas OneShape files maintained the original canal curvatures well.


2021 ◽  
pp. 1-14
Author(s):  
Feiqiang Liu ◽  
Lihui Chen ◽  
Lu Lu ◽  
Gwanggil Jeon ◽  
Xiaomin Yang

Infrared (IR) and visible (VIS) image fusion technology combines the complementary information of the same scene from IR and VIS imaging sensors to generate a composite image, which is beneficial to subsequent image-processing tasks. In order to achieve good fusion performance, a method combining a rolling guidance filter (RGF) and convolutional sparse representation (CSR) is proposed. In the proposed method, RGF is applied to each pre-registered IR and VIS source image to obtain its detail layers and base layer. The detail layers are then fused with a series of weighting coefficients produced by a joint bilateral filter (JBF). The base layer is decomposed into a sub-detail layer and a sub-base layer; CSR is applied to fuse the sub-detail layer, and an averaging strategy is used to fuse the sub-base layer. Finally, the fused image is reconstructed by adding the fused detail and base layers. Experimental results demonstrate the superiority of the proposed method in both subjective and objective assessments.
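
A minimal sketch of the base/detail fusion structure described above is given below. It uses repeated bilateral filtering as a stand-in for the rolling guidance filter, a max-absolute rule in place of the JBF-weighted detail fusion, and omits the CSR step entirely; inputs and filter parameters are illustrative only.

```python
import cv2
import numpy as np

def base_detail(img, iters=4):
    """Approximate rolling-guidance-style edge-preserving smoothing by
    iterating a bilateral filter; returns (base, detail) layers."""
    base = img.copy()
    for _ in range(iters):
        base = cv2.bilateralFilter(base, d=9, sigmaColor=25, sigmaSpace=7)
    return base, img - base

# `ir` and `vis` are pre-registered grayscale images (placeholders here)
ir = np.random.randint(0, 256, (256, 256), dtype=np.uint8).astype(np.float32)
vis = np.random.randint(0, 256, (256, 256), dtype=np.uint8).astype(np.float32)

ir_base, ir_detail = base_detail(ir)
vis_base, vis_detail = base_detail(vis)

# Max-absolute rule for detail layers, averaging strategy for the base layer
fused_detail = np.where(np.abs(ir_detail) > np.abs(vis_detail), ir_detail, vis_detail)
fused_base = 0.5 * (ir_base + vis_base)
fused = np.clip(fused_base + fused_detail, 0, 255).astype(np.uint8)
```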


2021 ◽  
Vol 5 ◽  
pp. 44-51
Author(s):  
Georgi Jelev

This study presents the possibilities of using different colour models for the visual interpretation of satellite imagery. Using the RGB model to visualise different spectral bands as a false-colour composite image makes it possible for different types of objects and features on the Earth's surface to be highlighted and easily discerned based on their specific colour. Examples are shown based on satellite imagery from several free sources, e.g. the USGS's Earth Explorer and the ESA's Copernicus Open Access Hub.
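
For illustration (not taken from the paper), a false-colour composite can be built by mapping, for example, the near-infrared, red, and green bands to the R, G, and B display channels; the band arrays below are placeholders for data downloaded from any of the sources above.

```python
import numpy as np
import matplotlib.pyplot as plt

def false_colour_composite(nir, red, green):
    """Stack NIR/red/green bands into an RGB display array, stretching each
    band to its 2nd-98th percentile range for contrast."""
    def stretch(band):
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)
    return np.dstack([stretch(nir), stretch(red), stretch(green)])

# Placeholder reflectance arrays standing in for real satellite bands
nir = np.random.rand(200, 200)
red = np.random.rand(200, 200)
green = np.random.rand(200, 200)

plt.imshow(false_colour_composite(nir, red, green))
plt.axis("off")
plt.savefig("false_colour_composite.png", dpi=150)
```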

