thermal images
Recently Published Documents


TOTAL DOCUMENTS

878
(FIVE YEARS 356)

H-INDEX

27
(FIVE YEARS 8)

Animals ◽  
2022 ◽  
Vol 12 (2) ◽  
pp. 195
Author(s):  
Małgorzata Domino ◽  
Marta Borowska ◽  
Anna Trojakowska ◽  
Natalia Kozłowska ◽  
Łukasz Zdrojkowski ◽  
...  

Appropriate matching of rider and horse sizes is becoming an increasingly important issue in riding horse care as the human population becomes heavier. Recently, infrared thermography (IRT) was found effective in differentiating the effects of rider:horse bodyweight ratios of 10.6% and 21.3%, but not of 10.1% and 15.3%. As IRT images contain many pixels reflecting the complexity of the body’s surface, pixel relations were assessed by image texture analysis using histogram statistics (HS), gray-level run-length matrix (GLRLM), and gray-level co-occurrence matrix (GLCM) approaches. The study aimed to determine differences in the texture features of thermal images under the impact of rider:horse bodyweight ratios of 10–12%, >12–≤15%, and >15–<18%. Twelve horses were ridden by each of six riders assigned to light (L), moderate (M), and heavy (H) groups. Thermal images were taken pre- and post-standard exercise and underwent conventional and texture analysis. Texture analysis required image decomposition into red, green, and blue components. Among the 372 returned features, 95 HS features, 48 GLRLM features, and 96 GLCM features differed depending on exercise, whereas 29 HS features, 16 GLRLM features, and 30 GLCM features differed depending on bodyweight ratio. Contrary to conventional thermal features, the texture heterogeneity measures InvDefMom, SumEntrp, Entropy, DifVarnc, and DifEntrp expressed consistent, measurable differences when the red component was considered.
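The GLCM-based heterogeneity measures named in this abstract (e.g. InvDefMom, Entropy) can be illustrated with a minimal NumPy sketch. The offset, gray-level count, and toy image below are invented for the example, and the feature names merely mirror the abstract's labels; this is not the authors' implementation:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    img = np.asarray(img)
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Two of the texture measures named in the abstract (naming assumed)."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        # inverse difference moment: high for homogeneous co-occurrences
        "InvDefMom": float((p / (1.0 + (i - j) ** 2)).sum()),
        # Shannon entropy of the co-occurrence distribution (bits)
        "Entropy": float(-(nz * np.log2(nz)).sum()),
    }

# toy 4x4 "thermal" image quantized to 4 gray levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = texture_features(glcm(img, levels=4))
```

In a real pipeline the thermal image would first be split into red, green, and blue components (as the study does) and quantized, and many offsets/angles would be averaged.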


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 119
Author(s):  
Gang Mao ◽  
Zhongzheng Zhang ◽  
Bin Qiao ◽  
Yongbo Li

The vibration signal of a gearbox contains abundant fault information that can be used for condition monitoring. However, vibration signals are ineffective for some non-structural failures. To resolve this dilemma, infrared thermal images are combined with vibration signals via a fusion domain-adaptation convolutional neural network (FDACNN), which can diagnose both structural and non-structural failures under various working conditions. First, the measured raw signals are converted into frequency spectra and squared envelope spectra to characterize the health states of the gearbox. Second, the frequency and squared envelope spectrum sequences are arranged into a two-dimensional format and combined with the infrared thermal images to form the fusion data. Finally, an adversarial network is introduced to recognize the states of structural and non-structural faults in the unlabeled target domain. An experiment on gearbox test rigs, measuring both vibration signals and infrared thermal images, was used to validate effectiveness. The results suggest that the proposed FDACNN method performs best in cross-domain fault diagnosis of gearboxes from multi-source heterogeneous data, compared with four other methods.
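The first two preprocessing steps, computing a squared envelope spectrum and arranging a 1-D spectrum into a 2-D input, can be sketched as below. The FFT-based Hilbert transform, the signal parameters, and the square image layout are assumptions for illustration, not details from the paper:

```python
import numpy as np

def squared_envelope_spectrum(x):
    """Squared envelope spectrum of a vibration signal (sketch).

    Builds the analytic signal via an FFT-based Hilbert transform,
    squares its magnitude (the envelope), and returns the magnitude
    spectrum of the mean-removed squared envelope.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    analytic = np.fft.ifft(X * h)
    env2 = np.abs(analytic) ** 2
    return np.abs(np.fft.rfft(env2 - env2.mean()))

def to_image(spec, side):
    """Arrange a 1-D spectrum into a 2-D array for a CNN input (layout assumed)."""
    return spec[:side * side].reshape(side, side)

# toy fault-like signal: 100 Hz carrier amplitude-modulated at 10 Hz, fs = 1000 Hz
fs = 1000
t = np.arange(fs) / fs
x = (1 + 0.5 * np.cos(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 100 * t)
spec = squared_envelope_spectrum(x)   # peak at the 10 Hz modulation frequency
img2d = to_image(spec, 22)
```

The envelope spectrum exposes the modulation (fault) frequency rather than the carrier, which is why it is a common gearbox health indicator.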


Author(s):  
M. Z. Dahiru ◽  
M. Hashim ◽  
N. Hassan

Abstract. Measuring industrial heat emission (IHE) at high spatial and temporal resolution is an important step in industrial climate studies. The availability of MODIS data products opens up endless possibilities for both large-area and long-term study; nevertheless, their resolution is inadequate for monitoring industrial areas. Thermal sharpening is therefore a common method for regularly obtaining thermal images with higher spatial resolution. In this study, the efficiency of the TsHARP technique for improving the low resolution of the MODIS data product was investigated using Landsat-8 TIR images over the Klang industrial area in Peninsular Malaysia (PM). When compared to fine UAV TIR thermal images, sharpening resulted in mean absolute differences of about 25 °C, with discrepancies increasing as the difference between the ambient and target resolutions increased. To estimate IHE, the related normalized indices of the industrial area, NDBI, NDSI, and NDVI, were examined. The results indicate that IHE has a substantial positive correlation with NDBI and NDSI (R2 = 0.88 and 0.95, respectively), whereas IHE and NDVI have a strong negative correlation (R2 = 0.87). The results also showed that MODIS LST at 1000 m resolution can be sharpened to 100 m with a significant correlation (R2 = 0.84) and an RMSE of 2.38 °C using Landsat-8 TIR images at 30 m, and to 100 m with a significant correlation (R2 = 0.89) and an RMSE of 2.06 °C using Landsat-8 TIR aggregated to 100 m resolution. Similarly, Landsat-8 TIR at 100 m resolution was sharpened to 30 m using UAV TIR aggregated to 5 m resolution, with a significant correlation (R2 = 0.92) and an RMSE of 1.38 °C. Variation was shown to have a significant impact on the accuracy of the model used. This result is consistent with earlier studies that used NDBI as a downscaling factor in addition to NDVI and other spectral indices and achieved lower RMSE than techniques that used NDVI alone.
As a result, the derived IHE map is considered suitable for analyzing industrial thermal environments at scales of 1:10,000 to 1:50,000, and may therefore be used to assess environmental effects.
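The core TsHARP idea, regressing coarse LST against a spectral index and adding the coarse residual back at the fine scale, can be sketched in a few lines. This sketch uses a plain linear LST-NDVI fit on synthetic data; the published method regresses on fractional vegetation cover, and all real MODIS/Landsat handling (reprojection, quality masking) is omitted:

```python
import numpy as np

def tsharp(lst_coarse, ndvi_coarse, ndvi_fine, scale):
    """Minimal TsHARP-style sharpening sketch (linear index regression)."""
    # 1. regress coarse LST on the coarse index
    b, a = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    # 2. coarse residuals preserve variation the index does not explain
    resid = lst_coarse - (a + b * ndvi_coarse)
    # 3. predict at fine index values, adding each parent pixel's residual
    resid_fine = np.kron(resid, np.ones((scale, scale)))
    return a + b * ndvi_fine + resid_fine

# synthetic scene where LST is exactly linear in NDVI
ndvi_fine = np.linspace(0.1, 0.9, 16).reshape(4, 4)
lst_fine_true = 40.0 - 10.0 * ndvi_fine
ndvi_coarse = ndvi_fine.reshape(2, 2, 2, 2).mean(axis=(1, 3))  # 2x2 block means
lst_coarse = 40.0 - 10.0 * ndvi_coarse                         # the "MODIS" view
lst_sharp = tsharp(lst_coarse, ndvi_coarse, ndvi_fine, scale=2)
```

Swapping NDVI for NDBI or NDSI in this regression is exactly the kind of factor substitution the study evaluates.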


2022 ◽  
Vol 12 (1) ◽  
pp. 497
Author(s):  
Vicente Pavez ◽  
Gabriel Hermosilla ◽  
Francisco Pizarro ◽  
Sebastián Fingerhuth ◽  
Daniel Yunge

This article shows how to create a robust thermal face recognition system based on the FaceNet architecture. We propose a method for generating thermal images to create a thermal face database with six different attributes (frown, glasses, rotation, normal, vocal, and smile), based on several deep learning models. First, we use StyleCLIP, which manipulates the latent space of the input visible image to add the desired attributes to the visible face. Second, we use the GANs N’ Roses (GNR) model, a multimodal image-to-image framework that uses style and content maps to generate thermal images from visible images through generative adversarial approaches. Using the proposed generator system, we create a database of synthetic thermal faces comprising more than 100k images of 3227 individuals. When trained and tested on the synthetic database, the Thermal-FaceNet model obtained 99.98% accuracy. Furthermore, when tested on a real database, the accuracy exceeded 98%, validating the proposed thermal image generator system.
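At inference time, FaceNet-style recognition reduces to comparing embedding vectors produced by the trained network. A minimal sketch of that matching step follows; the toy embeddings and the 0.7 threshold are placeholders, not values from this article:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_identity(emb_a, emb_b, threshold=0.7):
    """FaceNet-style verification: two thermal faces are declared the same
    person if their embeddings are similar enough (threshold assumed)."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# stand-ins for embeddings a trained Thermal-FaceNet would produce
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
match = same_identity(e1, e1)      # same face
mismatch = same_identity(e1, e2)   # different faces
```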


2022 ◽  
pp. 129-143
Author(s):  
Catalina Luca ◽  
Doru Andritoi ◽  
Calin Corciova

2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Hidetsugu Asano ◽  
Eiji Hirakawa ◽  
Hayato Hayashi ◽  
Keisuke Hamada ◽  
Yuto Asayama ◽  
...  

Abstract
Background: Regulation of temperature is clinically important in the care of neonates because it has a significant impact on prognosis. Although probes that make contact with the skin are widely used to monitor temperature and provide spot central and peripheral temperature information, they do not provide details of the temperature distribution around the body. Although it is possible to obtain detailed temperature distributions using multiple probes, this is not clinically practical. Thermographic techniques have been reported for measurement of temperature distribution in infants. However, as these methods require manual selection of the regions of interest (ROIs), they are not suitable for introduction into clinical settings in hospitals. Here, we describe a method for segmentation of thermal images that enables continuous quantitative contactless monitoring of the temperature distribution over the whole body of neonates.
Methods: The semantic segmentation method U-Net was applied to thermal images of infants. The optimal combination of Weight Normalization, Group Normalization, and Flexible Rectified Linear Unit (FReLU) was evaluated. A U-Net Generative Adversarial Network (U-Net GAN) was applied to the thermal images, and a Self-Attention (SA) module was finally added to U-Net GAN (U-Net GAN + SA) to improve precision. The semantic segmentation performance of these methods was evaluated.
Results: The optimal semantic segmentation performance was obtained by applying FReLU and Group Normalization to U-Net, with an accuracy of 92.9% and a Mean Intersection over Union (mIoU) of 64.5%. U-Net GAN improved the performance, yielding an accuracy of 93.3% and an mIoU of 66.9%, and U-Net GAN + SA showed further improvement, with an accuracy of 93.5% and an mIoU of 70.4%.
Conclusions: FReLU and Group Normalization are appropriate semantic segmentation methods for application to neonatal thermal images. U-Net GAN and U-Net GAN + SA significantly improved the mIoU of segmentation.
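The mIoU metric used throughout these results is the per-class intersection over union, averaged over classes. A minimal NumPy sketch on a toy 2×2 label map (the averaging convention, over classes present in prediction or ground truth, is an assumption):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:  # skip classes absent from both prediction and target
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0],
                 [1, 1]])
target = np.array([[0, 1],
                   [1, 1]])
score = mean_iou(pred, target, num_classes=2)  # class 0: 1/2, class 1: 2/3
```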


2021 ◽  
Vol 38 (6) ◽  
pp. 1699-1711
Author(s):  
Devanshu Tiwari ◽  
Manish Dixit ◽  
Kamlesh Gupta

This paper presents a fully automated breast cancer detection system, “Deep Multi-view Breast cancer Detection”, based on deep transfer learning. The deep transfer learning model Visual Geometry Group 16 (VGG 16) is used in this approach to classify breast thermal images as either normal or abnormal. The VGG 16 model is trained on static as well as dynamic breast thermal image datasets consisting of multi-view and single-view breast thermal images. The multi-view breast thermal images are generated in this approach, for the first time, by concatenating the conventional left, frontal, and right view breast thermal images from the Database for Mastology Research with Infrared Image, producing a more informative and complete thermal temperature map of the breast and enhancing the accuracy of the overall system. For a genuine comparison, three other popular deep transfer learning models, Residual Network 50 (ResNet50V2), InceptionV3, and Visual Geometry Group 19 (VGG 19), were trained on the same augmented dataset of multi-view and single-view breast thermal images. The VGG 16 based Deep Multi-view Breast cancer Detection system delivers the best training, validation, and testing accuracies of the compared models, achieving an encouraging testing accuracy of 99% on the dynamic breast thermal image testing dataset with multi-view breast thermal images as input, whereas VGG 19, ResNet50V2, and InceptionV3 achieve testing accuracies of 95%, 94%, and 89%, respectively, on the same dataset and input.
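The multi-view construction described above, concatenating the left, frontal, and right views into one input, can be sketched as a horizontal stack. The concatenation axis and toy data are assumptions, since the abstract does not specify the exact layout:

```python
import numpy as np

def multi_view(left, frontal, right):
    """Concatenate left, frontal, and right thermal views side by side
    into a single multi-view image (layout assumed for illustration)."""
    return np.hstack([left, frontal, right])

# toy 2x2 "temperature maps" (degrees C) standing in for the three views
left = np.full((2, 2), 30.0)
frontal = np.full((2, 2), 32.0)
right = np.full((2, 2), 31.0)
panorama = multi_view(left, frontal, right)  # shape (2, 6)
```

In practice the concatenated image would then be resized to the network's expected input size (e.g. 224×224 for VGG 16) before training.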

