Universal architecture of corneal segmental tomography biomarkers for artificial intelligence-driven diagnosis of early keratoconus

2021 ◽  
pp. bjophthalmol-2021-319309
Author(s):  
Gairik Kundu ◽  
Rohit Shetty ◽  
Pooja Khamar ◽  
Ritika Mullick ◽  
Sneha Gupta ◽  
...  

Aims: To develop a comprehensive three-dimensional analysis of segmental tomography (Placido and optical coherence tomography) using artificial intelligence (AI).

Methods: Preoperative imaging data (MS-39, CSO, Italy) of refractive surgery patients with stable outcomes and of patients diagnosed with asymmetric or bilateral keratoconus (KC) were used. The curvature, wavefront aberration and thickness distributions were analysed with Zernike polynomials (ZP) and a random forest (RF) AI model. For training and cross-validation, there were groups of healthy eyes (n=527), very asymmetric ectasia (VAE; n=144) and KC (n=454). The VAE eyes were the fellow eyes of KC patients, but no further manual segregation of these eyes into subclinical or forme fruste was performed.

Results: The AI achieved an excellent area under the curve (0.994), accuracy (95.6%), recall (98.5%) and precision (92.7%) for the healthy eyes. For the KC eyes, the corresponding values were 0.997, 99.1%, 98.7% and 99.1%, respectively; for the VAE eyes, 0.976, 95.5%, 71.5% and 91.2%, respectively. Interestingly, the AI reclassified 36 of the VAE eyes (subclinical) as healthy, although these eyes were distinct from healthy eyes. Most of the remaining VAE eyes (n=104; forme fruste) retained their classification and were distinct from both KC and healthy eyes. Further, the posterior surface features were not among the variables ranked highest by the AI model.

Conclusions: A universal architecture combining segmental tomography with ZP and AI was developed. It achieved an excellent classification of healthy and KC eyes, and efficiently classified the VAE eyes as 'subclinical' or 'forme fruste'.
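A minimal sketch of the kind of pipeline the abstract describes: Zernike-coefficient features fed to a random forest with cross-validation. The feature matrix, class sizes and hyperparameters below are illustrative stand-ins, not the authors' data or settings (the real study derives features from MS-39 curvature, wavefront and thickness maps).

```python
# Hedged sketch: random-forest classification of eyes from Zernike-style
# features, with the same three-group structure as the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in features (random noise here); the study uses Zernike polynomial
# coefficients of segmental tomography maps.
X = rng.normal(size=(1125, 36))            # 527 healthy + 144 VAE + 454 KC
y = np.repeat([0, 1, 2], [527, 144, 454])  # 0=healthy, 1=VAE, 2=KC

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
print(scores.mean())
```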

2019 ◽  
Vol 8 (7) ◽  
pp. 986 ◽  
Author(s):  
Owais ◽  
Arsalan ◽  
Choi ◽  
Mahmood ◽  
Park

Various techniques using artificial intelligence (AI) have contributed significantly to the field of medical image- and video-based diagnosis, such as radiology, pathology and endoscopy, including the classification of gastrointestinal (GI) diseases. Most previous studies on the classification of GI diseases use only spatial features, which yields low performance in the classification of multiple GI diseases. Although a few previous studies have used temporal features based on a three-dimensional convolutional neural network, they covered only a specific part of the GI tract and a limited number of classes. To overcome these problems, we propose a comprehensive AI-based framework for the classification of multiple GI diseases from endoscopic videos that can extract both spatial and temporal features simultaneously to achieve better classification performance. Two different residual networks and a long short-term memory model are integrated in a cascaded mode to extract spatial and temporal features, respectively. Experiments were conducted on a combined dataset that is one of the largest collections of endoscopic videos, comprising 52,471 frames. The results demonstrate the effectiveness of the proposed classification framework for multiple GI diseases. The experimental results of the proposed model (97.057% area under the curve) demonstrate superior performance over state-of-the-art methods and indicate its potential for clinical applications.
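The cascaded spatial-temporal design described above can be sketched as a (residual) CNN that encodes each frame, followed by an LSTM that aggregates the per-frame embeddings over time. Layer sizes, the single toy residual block and the class count are assumptions for illustration; the paper uses two full residual networks.

```python
# Toy cascade: per-frame residual CNN -> LSTM over time -> class logits.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU()
    def forward(self, x):
        # Identity skip connection, the defining feature of a residual block.
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class VideoClassifier(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1),
                                  ResBlock(16), nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)
    def forward(self, video):                      # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.stem(video.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(feats)                  # temporal aggregation
        return self.head(out[:, -1])               # logits from last step

logits = VideoClassifier()(torch.randn(2, 5, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 8])
```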


1999 ◽  
Vol 6 (3) ◽  
pp. E7 ◽  
Author(s):  
Alexander Hartov ◽  
Symma D. Eisner ◽  
W. Roberts ◽  
Keith D. Paulsen ◽  
Leah A. Platenik ◽  
...  

Image-guided neurosurgery that is directed by a preoperative imaging study, such as magnetic resonance (MR) imaging or computerized tomography (CT) scanning, can be very accurate provided no significant changes occur during surgery. A variety of factors known to affect brain tissue movement are not reflected in the preoperative images used for guidance. To update the information on which neuronavigation is based, the authors propose the use of three-dimensional (3-D) ultrasound images in conjunction with a finite-element computational model of the deformation of the brain. The 3-D ultrasound system will provide real-time information on the displacement of deep structures to guide the mathematical model. This paper has two goals: first, to present an outline of steps necessary to compute the location of a feature appearing in an ultrasound image in an arbitrary coordinate system; and second, to present an extensive evaluation of this system's accuracy. The authors have found that by using a stylus rigidly coupled to the 3-D tracker's sensor, they were able to locate a point with an overall error of 1.36 ± 1.67 mm (based on 39 points). When coupling the tracker to an ultrasound scanhead, they found that they could locate features appearing on ultrasound images with an error of 2.96 ± 1.85 mm (total 58 features). They also found that when registering a skull phantom to coordinates that were defined by MR imaging or CT scanning, they could do so with an error of 0.86 ± 0.61 mm (based on 20 coordinates). Based on their previous finding of brain shifts on the order of 1 cm during surgery, the accuracy of their system warrants its use in updating neuronavigation imaging data.
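The coordinate chain outlined above (ultrasound image → scanhead calibration → tracker sensor pose → reference frame) amounts to composing rigid homogeneous transforms. The matrices below are illustrative, not calibration values from the paper.

```python
# Sketch: map a feature from ultrasound-image coordinates into the
# (MR/CT-registered) reference frame via two rigid 4x4 transforms.
import numpy as np

def rigid(rz_deg, t):
    """Homogeneous transform: rotation about z by rz_deg, then translation t."""
    a = np.radians(rz_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

image_to_sensor = rigid(90, [0.0, 10.0, 0.0])    # scanhead calibration (assumed)
sensor_to_world = rigid(0, [100.0, 0.0, 50.0])   # tracker reading (assumed)

p_image = np.array([5.0, 0.0, 0.0, 1.0])         # feature in image coords (mm)
p_world = sensor_to_world @ image_to_sensor @ p_image
print(p_world[:3])                               # location in reference frame
```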


2021 ◽  
Vol 271 ◽  
pp. 03045
Author(s):  
Yinyu Song ◽  
Lihua Fang ◽  
Ruirui Du ◽  
Luchao Lin ◽  
Xingming Tao

A three-dimensional (3D) finite element model of the human eye was established, and intraocular pressure (IOP) was loaded to simulate refractive surgery. The biomechanical properties of the human cornea after SMILE and LASIK surgery were studied in terms of stress, strain and induced wavefront aberration. Our results showed that SMILE had less impact on corneal biomechanics, producing smaller stress and strain changes than LASIK. However, the stress and strain of the cornea increased with increasing diopter and were concentrated in the central region. We also investigated the changes in wavefront aberrations of the cornea after surgery; the results indicated that defocus and vertical coma were significantly affected by SMILE and LASIK surgery, while the remaining aberrations were approximately unchanged. In conclusion, both SMILE and LASIK procedures changed the postoperative corneal biomechanics, but SMILE had less impact on corneal biomechanics.
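As a back-of-the-envelope illustration (not the paper's finite element model), Laplace's law for a thin spherical shell, sigma = P*r/(2*t), shows why removing stromal tissue raises corneal wall stress, and hence why a procedure that alters less load-bearing tissue perturbs the mechanics less. The thickness values are purely illustrative.

```python
# Thin-shell estimate of corneal wall stress under IOP: sigma = P*r / (2*t).
IOP_kPa = 2.0          # ~15 mmHg intraocular pressure
radius_mm = 7.8        # typical anterior corneal radius of curvature

def wall_stress(thickness_um):
    """Hoop stress (kPa) of a thin spherical shell of given thickness."""
    t_mm = thickness_um / 1000.0
    return IOP_kPa * radius_mm / (2.0 * t_mm)

# Illustrative central thicknesses only; thinner cornea -> higher stress.
for name, t in [("thicker cornea", 540), ("thinner cornea", 430)]:
    print(f"{name}: {wall_stress(t):.1f} kPa")
```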


2021 ◽  
Vol 118 (13) ◽  
pp. e2100697118
Author(s):  
Shengze Cai ◽  
He Li ◽  
Fuyin Zheng ◽  
Fang Kong ◽  
Ming Dao ◽  
...  

Understanding the mechanics of blood flow is necessary for developing insights into mechanisms of physiology and vascular diseases in microcirculation. Given the limitations of technologies available for assessing in vivo flow fields, in vitro methods based on traditional microfluidic platforms have been developed to mimic physiological conditions. However, existing methods lack the capability to provide accurate assessment of these flow fields, particularly in vessels with complex geometries. Conventional approaches to quantify flow fields rely either on analyzing only visual images or on enforcing underlying physics without considering visualization data, which could compromise accuracy of predictions. Here, we present artificial-intelligence velocimetry (AIV) to quantify velocity and stress fields of blood flow by integrating the imaging data with underlying physics using physics-informed neural networks. We demonstrate the capability of AIV by quantifying hemodynamics in microchannels designed to mimic saccular-shaped microaneurysms (microaneurysm-on-a-chip, or MAOAC), which signify common manifestations of diabetic retinopathy, a leading cause of vision loss from blood-vessel damage in the retina in diabetic patients. We show that AIV can, without any a priori knowledge of the inlet and outlet boundary conditions, not only infer the two-dimensional (2D) flow fields from a sequence of 2D images of blood flow in MAOAC, but also infer three-dimensional (3D) flow fields using only 2D images, thanks to the encoded physics laws. AIV provides a unique paradigm that seamlessly integrates images, experimental data, and underlying physics using neural networks to automatically analyze experimental data and infer key hemodynamic indicators that assess vascular injury.
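Conceptually, the physics-informed approach fits a network to image-derived velocity samples while penalizing a physics residual computed by automatic differentiation. The toy below enforces only 2D incompressibility (du/dx + dv/dy = 0) on placeholder data; the real method enforces the full Navier-Stokes equations.

```python
# Toy physics-informed loss: data term + divergence-free penalty via autograd.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))  # (x,y)->(u,v)

xy = torch.rand(64, 2, requires_grad=True)   # sample points in the channel
uv_data = torch.zeros(64, 2)                 # stand-in for image-derived velocities

uv = net(xy)
data_loss = ((uv - uv_data) ** 2).mean()     # match the "measured" velocities

# Physics residual: divergence of the predicted velocity field.
du = torch.autograd.grad(uv[:, 0].sum(), xy, create_graph=True)[0][:, 0]
dv = torch.autograd.grad(uv[:, 1].sum(), xy, create_graph=True)[0][:, 1]
physics_loss = ((du + dv) ** 2).mean()

loss = data_loss + physics_loss
loss.backward()                              # both terms train the same network
print(float(loss))
```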


Author(s):  
Rohit Ghosh ◽  
Omar Smadi

Pavement distresses lead to pavement deterioration and failure. Accurate identification and classification of distresses helps agencies evaluate the condition of their pavement infrastructure and assists in decision-making processes on pavement maintenance and rehabilitation. The state of the art is automated pavement distress detection using vision-based methods. This study implements two deep learning techniques, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLO) v3, for automated distress detection and classification of high resolution (1,800 × 1,200) three-dimensional (3D) asphalt and concrete pavement images. The training and validation dataset contained 625 images that included distresses manually annotated with bounding boxes representing the location and types of distresses and 798 no-distress images. Data augmentation was performed to enable more balanced representation of class labels and prevent overfitting. YOLO and Faster R-CNN achieved 89.8% and 89.6% accuracy, respectively. Precision-recall curves were used to determine the average precision (AP), which is the area under the precision-recall curve. The AP values for YOLO and Faster R-CNN were 90.2% and 89.2%, respectively, indicating strong performance for both models. Receiver operating characteristic (ROC) curves were also developed, and the resulting area under the curve values of 0.96 for YOLO and 0.95 for Faster R-CNN also indicate robust performance. Finally, the models were evaluated by developing confusion matrices comparing our proposed model with manual quality assurance and quality control (QA/QC) results performed on automated pavement data. A very high level of match to manual QA/QC, namely 97.6% for YOLO and 96.9% for Faster R-CNN, suggests that the proposed methodology has potential as a replacement for manual QA/QC.
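The two evaluation metrics above, average precision (area under the precision-recall curve) and ROC-AUC, can be computed from ranked detection scores. The labels and scores below are made up for illustration; the study derives them from matched bounding-box detections.

```python
# Hedged sketch: AP and ROC-AUC from detection confidence scores.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

y_true  = np.array([1, 1, 0, 1, 0, 0, 1, 0])               # distress present?
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1])  # model confidence

ap = average_precision_score(y_true, y_score)   # area under the PR curve
auc = roc_auc_score(y_true, y_score)            # area under the ROC curve
print(round(ap, 3), round(auc, 3))
```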


2021 ◽  
Vol 12 ◽  
Author(s):  
Lan Yu ◽  
Xiaoli Shi ◽  
Xiaoling Liu ◽  
Wen Jin ◽  
Xiaoqing Jia ◽  
...  

Objectives: COVID-19 is highly infectious and has spread widely worldwide, with more than 159 million confirmed cases and more than 3 million deaths as of May 11, 2021. It has become a serious public health event threatening people's lives and safety. Owing to rapid transmission and a long incubation period, shortages of medical resources can easily occur soon after disease cases are discovered. We therefore aimed to construct an artificial intelligence framework to rapidly distinguish patients with COVID-19 from common pneumonia and non-pneumonia populations based on computed tomography (CT) images. Furthermore, we explored artificial intelligence (AI) algorithms to integrate CT features and laboratory findings on admission to predict the clinical classification of COVID-19. This will ease the burden on doctors in this emergency period and help them provide timely and appropriate treatment.

Methods: We collected all CT images and clinical data of novel coronavirus pneumonia cases in Inner Mongolia, including domestic cases and those imported from abroad. Three models based on transfer learning were then developed to distinguish COVID-19 from other pneumonia and non-pneumonia populations. In addition, CT features and laboratory findings on admission were combined to predict clinical types of COVID-19 using AI algorithms. Lastly, Spearman's correlation test was applied to study correlations between CT characteristics and laboratory findings.

Results: Among the three models for distinguishing COVID-19 based on CT, vgg19 showed excellent diagnostic performance, with an area under the receiver operating characteristic (ROC) curve (AUC) of 95%. Together with laboratory findings, we were able to predict clinical types of COVID-19 with an AUC of 90%. Furthermore, biochemical markers such as C-reactive protein (CRP), LYM and lactic dehydrogenase (LDH) were identified and correlated with CT features.

Conclusion: We developed an AI model to identify patients positive for COVID-19 from the results of the first CT examination after admission and to predict disease progression in combination with laboratory findings. In addition, we obtained important clinical characteristics that correlated with the CT image features. Together, our AI system can rapidly diagnose COVID-19 and predict clinical types to assist clinicians in appropriate clinical management.
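The second stage described above, combining CT-derived features with admission laboratory findings to predict clinical type, can be sketched as a classifier over the concatenated feature vector. The synthetic data, feature names and the random-forest choice are assumptions; the paper evaluates its own AI algorithms on real cases.

```python
# Hedged sketch: predict clinical type from CT features + lab findings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
ct_features = rng.normal(size=(200, 8))   # stand-ins, e.g. lesion extent per lobe
labs = rng.normal(size=(200, 3))          # stand-ins, e.g. CRP, LYM, LDH
X = np.hstack([ct_features, labs])        # concatenated feature vector
# Synthetic "severe" label driven by one CT feature and one lab value.
y = (X[:, 0] + X[:, 8] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 2))
```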


2010 ◽  
Vol 3 (2) ◽  
pp. 156-180 ◽  
Author(s):  
Renáta Gregová ◽  
Lívia Körtvélyessy ◽  
Július Zimmermann

The Universals Archive (Universal #1926) indicates a universal tendency for sound symbolism in the expression of diminutives and augmentatives. Research carried out on European languages (Štekauer et al. 2009) did not confirm this tendency at all. Therefore, our research was extended to cover three language families: Indo-European, Niger-Congo and Austronesian. A three-step analysis examining different aspects of phonetic symbolism was carried out on a core vocabulary of 35 lexical items in a research sample of 60 languages. The evaluative markers were analyzed according to both the phonetic classification of vowels and consonants and Ultan's and Niewenhuis' conclusions on the dominance of palatal and post-alveolar consonants in diminutive markers. Finally, the data obtained from our sample languages were evaluated by means of a three-dimensional model illustrating the place of articulation of the individual segments.

