2D Images of Astrocytes in the Human Principal Olivary Nucleus: Monofractal Analysis of the Morphology

2021 ◽  
Vol 09 (09) ◽  
pp. 38-48
Author(s):  
Damjan Stojić ◽  
Dragana Radošević ◽  
Nemanja Rajković ◽  
Nebojša T. Milošević

2018 ◽  
Vol 23 (6) ◽  
pp. 99-113
Author(s):  
Sha LIU ◽  
Feng YANG ◽  
Shunxi WANG ◽  
Yu CHEN

2021 ◽  
Vol 7 (3) ◽  
pp. 209-219
Author(s):  
Iris J Holzleitner ◽  
Alex L Jones ◽  
Kieran J O’Shea ◽  
Rachel Cassar ◽  
Vanessa Fasolt ◽  
...  

Abstract

Objectives: A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D images. As the equipment required for 3D face imaging is considerably more expensive than that required for 2D imaging, we investigated here how perceptual ratings of physical characteristics from 2D and 3D face images compare.

Methods: We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height, and weight were rated by 66, 59, and 52 raters, respectively, who viewed both 2D and 3D images.

Results: In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength. We found no evidence that physical characteristics could be judged more accurately from 3D than from 2D images.

Conclusion: Our results suggest that physical characteristics are perceived with similar accuracy from 2D and 3D face images, and that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
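In studies of this kind, rating "accuracy" is commonly operationalized as the correlation between perceptual ratings and the measured characteristic. The abstract does not state the exact analysis pipeline, so the following is only an illustrative sketch with hypothetical data and a plain-Python Pearson correlation:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: measured heights (cm) and mean height ratings
# of the same five faces shown as 2D and as 3D images.
heights    = [162.0, 175.5, 181.2, 169.3, 178.8]
ratings_2d = [3.1, 4.0, 4.6, 3.5, 4.2]
ratings_3d = [3.0, 4.1, 4.5, 3.6, 4.3]

r2d = pearson_r(heights, ratings_2d)
r3d = pearson_r(heights, ratings_3d)
# Similar r values for 2D and 3D would indicate comparable rating accuracy.
```

Under this operationalization, "no evidence of a 3D advantage" corresponds to the 2D and 3D correlations being statistically indistinguishable.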


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1581
Author(s):  
Xiaolong Chen ◽  
Jian Li ◽  
Shuowen Huang ◽  
Hao Cui ◽  
Peirong Liu ◽  
...  

Cracks are one of the main distresses that occur on concrete surfaces. Traditional crack-detection methods based on two-dimensional (2D) images can be hampered by stains, shadows, and other artifacts, while three-dimensional (3D) techniques using point clouds are less affected in this regard but are limited by the measurement accuracy of the 3D laser scanner. In this study, we propose an automatic crack-detection method that fuses 3D point clouds and 2D images based on an improved Otsu algorithm, consisting of four major procedures. First, a high-precision registration of the 2D images with a depth image projected from the 3D point cloud is performed. Second, pixel-level fusion of the depth and gray information is carried out. Third, a rough crack image is obtained from the fused image using the improved Otsu method. Finally, connected-domain labeling and morphological methods are used to finely extract the cracks. Experimentally, the proposed method was tested at multiple scales and on various types of concrete cracks. The results demonstrate that the proposed method achieves an average precision of 89.0%, recall of 84.8%, and F1 score of 86.7%, performing significantly better than the single-image (average F1 score of 67.6%) and single-point-cloud (average F1 score of 76.0%) methods. Accordingly, the proposed method has high detection accuracy and generality, indicating wide potential application as an automatic method for concrete-crack detection.
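The abstract does not detail the authors' "improved" Otsu variant, but the classical Otsu method it builds on selects the gray level that maximizes the between-class variance of a histogram. A minimal sketch on a flattened grayscale image (pure Python, illustrative only):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance (classical Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0            # cumulative intensity sum of the background class
    w_b = 0                # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b  # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                # background mean
        m_f = (sum_all - sum_b) / w_f    # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal "image": dark crack pixels (10) against a bright surface (200).
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]  # rough crack/background mask
```

In the paper's pipeline this thresholding step runs on the depth-gray fusion image, and the resulting rough mask is then cleaned up by connected-domain labeling and morphological operations.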


2021 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
G Italiano ◽  
G Tamborini ◽  
V Mantegazza ◽  
V Volpato ◽  
L Fusini ◽  
...  

Abstract

Funding Acknowledgements: None.

Objective: Preliminary studies showed the accuracy of machine-learning-based automated dynamic quantification of left ventricular (LV) and left atrial (LA) volumes. We aimed to evaluate the feasibility and accuracy of this approach in an unselected population.

Methods: We enrolled 600 unselected patients (12% in atrial fibrillation) clinically referred for transthoracic echocardiography (2DTTE), who also underwent three-dimensional echocardiography (3DE). LV ejection fraction (EF) and LV and LA volumes were obtained from 2D images; 3D images were analysed with the Dynamic Heart Model (DHM) software (Philips), yielding LV and LA volume-time curves. A subgroup of 140 patients also underwent cardiac magnetic resonance (CMR) imaging. Average analysis time, feasibility, and image quality were recorded, and results were compared between 2DTTE, DHM, and CMR.

Results: DHM analysis was feasible in 522/600 cases (87%). When feasible, the boundary position was considered accurate in 335/522 patients (64%), while minor (n = 149) or major (n = 38) border corrections were needed in the remainder. The overall time required for DHM datasets was approximately 40 seconds, resulting in physiologically appearing LV and LA volume-time curves in all cases. As expected, DHM LV volumes were larger than 2D ones (end-diastolic volume: 173 ± 64 vs 142 ± 58 mL, respectively), while no differences were found for LV EF and LA volumes (EF: 55 ± 12% vs 56 ± 14%; LA volume: 89 ± 36 vs 89 ± 38 mL, respectively). The comparison between DHM and CMR showed a high correlation for LV volumes (r = 0.70 and r = 0.82, p < 0.001, for end-diastolic and end-systolic volume, respectively) and an excellent correlation for EF (r = 0.82, p < 0.001) and LA volumes.

Conclusions: The DHM software is feasible, accurate, and quick in a large series of unselected patients, including those with suboptimal 2D images or in atrial fibrillation.

Table 1. Feasibility of DHM, image quality, and need for adjustments in the global population and in each subgroup.

                                  Feasibility      Good quality     Suboptimal       Minor adjust.   Major adjust.
Total of patients (n, %)          522/600 (87%)    327/522 (62%)    195/522 (28%)    149/522 (29%)   38/522 (6%)
Normal subjects (n, %)            39/40 (97%)      23/39 (57%)      16/39 (40%)      9/39 (21%)      1/39 (3%)
Atrial fibrillation (n, %)        59/73 (81%)*     28/59 (47%)      31/59 (53%)      15/59 (25%)     6/59 (10%)
Valvular disease (n, %)           271/312 (87%)    120/271 (%)      151/271 (%)      65/271 (24%)    16/271 (6%)
Coronary artery disease (n, %)    47/58 (81%)*     26/47 (46%)      21/47 (37%)      16/47 (34%)     5/47 (11%)
Miscellaneous (n, %)              24/25 (96%)      18/24 (75%)      6/24 (25%)       5/24 (21%)      3/24 (12%)

Abstract Figure 1


1985 ◽  
Vol 24 (7) ◽  
pp. 645-654 ◽  
Author(s):  
L.A. Barragan ◽  
N. Delhaye-Bouchaud ◽  
P. Laget

2012 ◽  
Vol 726 ◽  
pp. 226-232 ◽  
Author(s):  
Tomasz Giesko

The article presents a dual-camera vision system for fatigue monitoring, composed of a vision unit, a camera-positioning set, and a computer unit. The vision modules are mounted on 4-DOF positioning sets, which allows the position of each camera relative to the sample to be determined easily. Motorized measurement lenses with a changeable configuration allow the observation distance and viewing angle to be altered, adapting the system to different scales of observation of fatigue processes on the specimen surface. Automatic focusing is realised by an implemented algorithm. The software developed allows fatigue fractures to be analysed from two 2D images or from a 3D stereovision image.
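The abstract does not specify the autofocus algorithm. A common approach in such vision systems is to maximize a contrast-based focus measure over candidate lens positions; the metric and the enumeration strategy below are illustrative assumptions, not the article's method:

```python
def focus_measure(img):
    """Contrast-based sharpness score: mean squared horizontal and vertical
    intensity differences over a 2D grayscale image (list of rows).
    Higher score = sharper (better-focused) image."""
    h, w = len(img), len(img[0])
    score, n = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal gradient
                score += (img[y][x + 1] - img[y][x]) ** 2
                n += 1
            if y + 1 < h:  # vertical gradient
                score += (img[y + 1][x] - img[y][x]) ** 2
                n += 1
    return score / n

def autofocus(capture_at, positions):
    """Hypothetical search: capture an image at each candidate lens
    position and return the position giving the sharpest image."""
    return max(positions, key=lambda p: focus_measure(capture_at(p)))
```

In practice the search would be a coarse-to-fine sweep over the motorized lens range rather than an exhaustive enumeration, but the principle — pick the lens setting that maximizes image contrast — is the same.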

