regions of interest
Recently Published Documents


TOTAL DOCUMENTS

1149
(FIVE YEARS 310)

H-INDEX

45
(FIVE YEARS 7)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 627
Author(s):  
Fan Yang ◽  
Shan He ◽  
Siddharth Sadanand ◽  
Aroon Yusuf ◽  
Miodrag Bolic

In this study, a contactless vital signs monitoring system was proposed that can measure body temperature (BT), heart rate (HR) and respiration rate (RR) for people with and without face masks using a thermal and an RGB camera. A convolutional neural network (CNN)-based face detector was applied, and three regions of interest (ROIs) were located based on facial landmarks for vital sign estimation. Ten healthy subjects from a variety of ethnic backgrounds, with skin colors ranging from pale white to darker brown, participated in several different experiments. The absolute error (AE) between the estimated HR using the proposed method and the reference HR across all experiments is 2.70 ± 2.28 beats/min (mean ± std), and the AE between the estimated RR and the reference RR across all experiments is 1.47 ± 1.33 breaths/min (mean ± std) at a distance of 0.6–1.2 m.
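The abstract does not give the estimation pipeline in detail; as a generic, hedged illustration, camera-based HR estimation typically reduces each video frame's ROI to a mean intensity and recovers the pulse rate from the dominant spectral peak of that trace. The function name, band limits, and the synthetic 72 beats/min signal below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def estimate_hr(roi_means, fps):
    """Estimate heart rate (beats/min) from the dominant frequency of a
    mean-subtracted ROI intensity trace, searching a plausible HR band."""
    x = np.asarray(roi_means, dtype=float)
    x -= x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)   # ~42-180 beats/min
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0

# Synthetic trace: a 1.2 Hz (72 beats/min) pulsation sampled at 30 fps for 10 s.
fps = 30
t = np.arange(300) / fps
trace = 100.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
hr = estimate_hr(trace, fps)   # ~72 beats/min
```

Restricting the search to a physiologically plausible band is what keeps slow illumination drift or camera noise from being mistaken for the pulse.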


2022 ◽  
pp. 1-22
Author(s):  
Xiaoqi Shen ◽  
Wenzhong Shi ◽  
Pengfei Chen ◽  
Zhewei Liu ◽  
Lukang Wang

BME Frontiers ◽  
2022 ◽  
Vol 2022 ◽  
pp. 1-13
Author(s):  
Angela Zhang ◽  
Amil Khan ◽  
Saisidharth Majeti ◽  
Judy Pham ◽  
Christopher Nguyen ◽  
...  

Objective and Impact Statement. We propose an automated method of predicting Normal Pressure Hydrocephalus (NPH) from CT scans. A deep convolutional network segments regions of interest from the scans; these regions are then combined with MRI information to predict NPH. To our knowledge, this is the first method that automatically predicts NPH from CT scans and incorporates diffusion tractography information in the prediction. Introduction. Owing to their low cost and high versatility, CT scans are often used in NPH diagnosis, yet no well-defined and effective protocol currently exists for analyzing CT scans for NPH. Evans' index, an approximation of the ventricle-to-brain volume ratio from a single 2D image slice, has been proposed but is not robust. The proposed approach is an effective way to quantify regions of interest and offers a computational method for predicting NPH. Methods. We propose a novel method to predict NPH by combining regions of interest segmented from CT scans with connectome data to compute features that capture the impact of enlarged ventricles by excluding fiber tracts passing through these regions. The segmentation and network features are used to train a model for NPH prediction. Results. Our method outperforms the current state of the art by 9 precision points and 29 recall points. Our segmentation model outperforms the current state of the art in segmenting the ventricle, gray-white matter, and subarachnoid space in CT scans. Conclusion. Our experimental results demonstrate that fast and accurate volumetric segmentation of CT brain scans can help improve the NPH diagnosis process, and that network properties can increase NPH prediction accuracy.
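The core feature-computation step described above, excluding fiber tracts that pass through the segmented (enlarged) ventricles, can be sketched as a mask-intersection filter. The data layout below (streamlines as arrays of voxel coordinates, a boolean ventricle mask) and the function name are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def tracts_excluding_region(streamlines, mask):
    """Keep only streamlines whose points never enter the (hypothetical)
    segmented ventricle mask; excluded tracts model connections disrupted
    by ventricular enlargement."""
    kept = []
    for sl in streamlines:
        idx = np.round(np.asarray(sl)).astype(int)  # nearest-voxel lookup
        if not mask[idx[:, 0], idx[:, 1], idx[:, 2]].any():
            kept.append(sl)
    return kept

# Toy 4x4x4 volume with an "enlarged ventricle" occupying one voxel.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1, 1, 1] = True
through = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
around = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 2.0], [0.0, 2.0, 3.0]])
kept = tracts_excluding_region([through, around], mask)  # only `around` survives
```

Network features (e.g., counts of surviving connections between region pairs) would then be computed on the filtered tract set.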


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 452
Author(s):  
Qun Yang ◽  
Dejian Shen

Natural hazards have caused damage to structures and economic losses worldwide. Post-hazard response requires accurate and fast damage detection and assessment. Driven by advances in deep learning models, data-driven damage detection has emerged within the structural health monitoring research community. Most data-driven models for damage detection focus on classifying different damage states and hence cannot effectively quantify damage. To address this deficiency, we propose a sequence-to-sequence (Seq2Seq) model that quantifies the probability of damage. The model is trained to learn damage representations from undamaged signals only, and then quantifies the probability of damage when damaged signals are fed into it. We tested the validity of the proposed Seq2Seq model on a signal dataset collected from a two-story timber building subjected to shake-table tests. Our results show that the Seq2Seq model is strongly capable of distinguishing damage representations and quantifying the probability of damage by highlighting the regions of interest.
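The train-on-undamaged-only scheme is a form of anomaly detection: a model that has only seen healthy responses reconstructs them well, so large reconstruction error flags damage. The sketch below illustrates that principle with a squared-error score squashed to (0, 1); the logistic mapping, the scale `tau`, and the idealized reconstruction are assumptions standing in for a trained Seq2Seq model:

```python
import numpy as np

def damage_probability(signal, reconstruction, tau=1.0):
    """Map per-timestep reconstruction error to a (0, 1) damage score via a
    logistic squashing; tau is an assumed error scale estimated from
    undamaged training data."""
    err = (np.asarray(signal) - np.asarray(reconstruction)) ** 2
    return 1.0 / (1.0 + np.exp(-(err - tau) / tau))

# An undamaged segment reconstructs well; a "damaged" burst does not.
t = np.linspace(0, 1, 100)
clean = np.sin(2 * np.pi * 5 * t)
recon = clean.copy()                 # ideal reconstruction of learned behavior
damaged = clean.copy()
damaged[40:60] += 3.0                # novel response the model never saw
p_clean = damage_probability(clean, recon)      # uniformly low
p_damaged = damage_probability(damaged, recon)  # high only in samples 40-59
```

Because the score is per timestep, plotting it over the signal directly highlights the regions of interest, as the abstract describes.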


2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Evelyn Rute Carneiro Maciel ◽  
Eduarda Helena Leandro Nascimento ◽  
Hugo Gaêta-Araujo ◽  
Maria Luiza dos Anjos Pontual ◽  
Andrea dos Anjos Pontual ◽  
...  

Abstract Background This study aimed to investigate the effect of automatic exposure compensation (AEC) in intraoral radiographic systems on the gray values of dental tissues in images acquired with or without high-density material in the exposed region, using different exposure times and kilovoltages. The influence of the distance of the high-density material was also investigated. Methods Radiographs of the molar region of two mandibles were obtained using the RVG 6100 and Express systems, operating at 60 and 70 kV and 0.06, 0.10, and 0.16 s. Subsequently, a titanium implant was inserted in the premolar socket and further images were acquired. Using the ImageJ software, two regions of interest were placed on the enamel, coronal dentine, root dentine, and pulp of the first and second molars to obtain their gray values. Results In the RVG 6100, the implant did not affect the gray values (p > 0.05); the increase in kV decreased gray values in all tissues (p < 0.05), and the exposure time affected only root dentine and pulp. In the Express, only enamel and coronal dentine values changed (p < 0.05), decreasing with the presence of the implant and/or with the increase in exposure factors. The distance of the implant did not affect the results (p > 0.05). Conclusions The performance of AEC varies between radiographic systems. Its effect on gray values depends not only on the presence or absence of high-density material but also on the kV and exposure time used.
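The measurement step, mean gray value inside rectangular ROIs placed on each tissue, is straightforward to reproduce outside ImageJ. The function name and ROI layout below are illustrative assumptions; only the measurement itself (mean pixel value per rectangular selection) mirrors what the study describes:

```python
import numpy as np

def roi_mean_gray(image, rois):
    """Mean gray value for each rectangular ROI, given as
    (row0, row1, col0, col1) half-open slices -- the equivalent of an
    ImageJ rectangular-selection Measure command."""
    return {name: float(image[r0:r1, c0:c1].mean())
            for name, (r0, r1, c0, c1) in rois.items()}

# Toy 8-bit radiograph: an enamel-like bright band over a pulp-like dark band.
img = np.zeros((10, 10), dtype=np.uint8)
img[0:5, :] = 200
img[5:10, :] = 60
values = roi_mean_gray(img, {"enamel": (0, 5, 2, 8), "pulp": (5, 10, 2, 8)})
```

Comparing such per-tissue means across exposure settings, with and without the implant, is exactly the kind of analysis on which the study's p-values are based.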


Animals ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 108
Author(s):  
Kirsten D. Gillette ◽  
Erin M. Phillips ◽  
Daniel D. Dilks ◽  
Gregory S. Berns

Previous research to localize face areas in dogs’ brains has generally relied on static images or videos. However, most dogs do not naturally engage with two-dimensional images, raising the question of whether dogs perceive such images as representations of real faces and objects. To measure the equivalency of live and two-dimensional stimuli in the dog’s brain, during functional magnetic resonance imaging (fMRI) we presented dogs and humans with live-action stimuli (actors and objects) as well as videos of the same actors and objects. The dogs (n = 7) and humans (n = 5) were presented with 20 s blocks of faces and objects in random order. In dogs, we found significant areas of increased activation in the putative dog face area, and in humans, we found significant areas of increased activation in the fusiform face area to both live and video stimuli. In both dogs and humans, we found areas of significant activation in the posterior superior temporal sulcus (ectosylvian fissure in dogs) and the lateral occipital complex (entolateral gyrus in dogs) to both live and video stimuli. Of these regions of interest, only the area along the ectosylvian fissure in dogs showed significantly more activation to live faces than to video faces, whereas, in humans, both the fusiform face area and posterior superior temporal sulcus responded significantly more to live conditions than video conditions. However, using the video conditions alone, we were able to localize all regions of interest in both dogs and humans. Therefore, videos can be used to localize these regions of interest, though live conditions may be more salient.


2021 ◽  
Vol 7 (1) ◽  
pp. 4
Author(s):  
Francesco Addevico ◽  
Alberto Simoncini ◽  
Giovanni Solitro ◽  
Massimo Max Morandi

Performing MR investigations on patients instrumented with external fixators is still controversial. The aim of this study was to evaluate the quality of MR imaging of the knee structures in the presence of bridging external fixators. Cadaveric lower limbs were instrumented with the MR-conditional external fixators Hoffmann III (Stryker, Kalamazoo, MI, USA), Large External Fixator (DePuy Synthes, Raynham, MA, USA), XtraFix (Zimmer, Warsaw, IN, USA), and a newer implant with Ketron PEEK CA30 and ERGAL 7075 pins, Dolphix® (Citieffe, Bologna, Italy). The specimens were MR-scanned before and after instrumentation. The images were subjectively judged by a pool of blinded radiologists and then quantitatively evaluated by calculating signal intensity, signal-to-noise ratio and contrast-to-noise ratio in five regions of interest. The area of distortion due to the presence of metallic pins was calculated. All the images were considered equally useful for diagnosis, with no differences between devices (p > 0.05). Only a few differences in the image quantification were detected between groups, while the presence of metallic components was the main limitation of the procedure. The mean radius of the pin distortion area was 53.17 ± 8.19 mm, 45.07 ± 4.33 mm, 17 ± 5.4 mm and 37.12 ± 10.17 mm for the pins provided by Zimmer, Synthes, Citieffe and Stryker, respectively (p = 0.041). The implant with Ketron PEEK CA30 and ERGAL 7075 pins showed the smallest distortion area.
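The quantitative evaluation the abstract mentions rests on standard ROI statistics. The definitions below are one common convention (mean signal over background-noise standard deviation for SNR; absolute mean difference over the same noise estimate for CNR); the study may use a variant, and the synthetic tissue values are assumptions for illustration:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal in a tissue ROI over the standard
    deviation of a background (air/noise) ROI."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs, normalized by the
    same background-noise estimate."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))

# Synthetic pixel samples: two tissues and a background ROI (noise sigma ~2).
rng = np.random.default_rng(0)
tissue_a = 100.0 + rng.normal(0, 2, 1000)
tissue_b = 60.0 + rng.normal(0, 2, 1000)
background = rng.normal(0, 2, 1000)
snr_a = snr(tissue_a, background)           # ~50
cnr_ab = cnr(tissue_a, tissue_b, background)  # ~20
```

Comparing these ratios in the same five ROIs before and after instrumentation isolates the degradation attributable to each fixator.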


2021 ◽  
Vol 33 (6) ◽  
pp. 1359-1372
Author(s):  
Miho Akiyama ◽  
Takuya Saito

In this study, we propose a method for a CanSat to recognize its goal and guide itself toward it using deep-learning image classification, even from 10 m away, and describe the results of a demonstrative evaluation confirming the method's effectiveness. We first applied deep-learning image classification to goal recognition in a CanSat at ARLISS 2019, succeeding in guiding it almost all the way to the goal in all three races and winning first place as overall winner. However, that conventional method has a drawback: the goal recognition rate drops significantly when the CanSat is more than 6–7 m from the goal, making it difficult to guide the CanSat back when various factors move it away from the goal. To enable goal recognition from a distance of 10 m, we investigated the number of horizontal divisions of the region of interest and the method of vertical shifting during image recognition, and clarified the effective number of divisions and the resulting recognition rate through experiments. Although object detection is commonly used to locate an object in an image by deep learning, we confirmed that the proposed method achieves a higher recognition rate at long distances and a shorter computation time than SSD MobileNet V1. In addition, we participated in the CanSat contest ACTS 2020 to evaluate the effectiveness of the proposed method and achieved a zero-distance goal in all three competitions, demonstrating its effectiveness by winning first place in the comeback category.
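The division-and-shift preprocessing can be sketched as plain array slicing: cut the frame into equal-width strips (so a distant goal fills a larger fraction of one strip, which a classifier can then label) after an optional vertical crop. The exact division geometry is not specified in the abstract, so the interpretation below, vertical strips from horizontal division, and the function name are assumptions:

```python
import numpy as np

def horizontal_divisions(image, n_divisions, v_shift=0):
    """Split an image into n_divisions equal-width strips after cropping
    v_shift rows from the top -- one plausible realization of the
    horizontal-division / vertical-shift scheme described in the abstract."""
    cropped = image[v_shift:, :]
    width = cropped.shape[1] // n_divisions
    return [cropped[:, i * width:(i + 1) * width] for i in range(n_divisions)]

# Toy 120x160 frame split into 4 strips of 40 columns, shifted down 20 rows.
img = np.arange(120 * 160).reshape(120, 160)
strips = horizontal_divisions(img, 4, v_shift=20)
```

Each strip would then be passed to the image classifier independently, and the strip reporting the goal indicates its bearing relative to the camera axis.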


Diagnostics ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 2377
Author(s):  
Julius Henning Niehoff ◽  
Matthias Michael Woeltjen ◽  
Kai Roman Laukamp ◽  
Jan Borggrefe ◽  
Jan Robert Kroeger

The present study evaluates the diagnostic reliability of virtual non-contrast (VNC) images acquired with the first photon-counting CT scanner approved for clinical use by comparing quantitative image properties of VNC and true non-contrast (TNC) images. Seventy-two patients were retrospectively enrolled in this study. VNC images reconstructed from the arterial (VNCa) and the portal venous (VNCv) phase were compared to TNC images. In addition, consistency between VNCa and VNCv images was evaluated. Regions of interest (ROIs) were drawn in the following areas: liver, spleen, kidney, aorta, muscle, fat and bone. Comparison of VNCa and VNCv images revealed a mean offset of less than 4 HU in all tissues. The greatest difference between TNC and VNC images was found in spongious bone (VNCv: 86.13 ± 28.44 HU, p < 0.001). Excluding measurements in spongious bone, differences between TNC and VNCv of 10 HU or less were found in 40% (VNCa: 36%) of all measurements, and differences of 15 HU or less in 72% (VNCa: 68%). The underlying algorithm for the subtraction of iodine works in principle but requires adjustments. Until then, special caution should be exercised when using VNC images in routine clinical practice.
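The agreement statistics reported above (the fraction of ROI measurements within 10 HU and 15 HU of the TNC reference) reduce to a simple paired comparison. The sketch below shows that computation; the function name and the example HU values are illustrative assumptions, not the study's data:

```python
import numpy as np

def hu_agreement(tnc, vnc, thresholds=(10, 15)):
    """Fraction of paired ROI measurements whose absolute TNC-VNC
    difference falls within each HU threshold."""
    diff = np.abs(np.asarray(tnc, dtype=float) - np.asarray(vnc, dtype=float))
    return {t: float(np.mean(diff <= t)) for t in thresholds}

# Hypothetical paired ROI means (HU) for five tissues.
tnc = [55.0, 48.0, 30.0, -100.0, 140.0]
vnc = [52.0, 60.0, 33.0, -95.0, 170.0]
agree = hu_agreement(tnc, vnc)   # {10: 0.6, 15: 0.8}
```

Reporting agreement at clinically motivated thresholds, rather than a single mean offset, exposes the tissue-dependent outliers (such as bone) that a global average would hide.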

