A Study on 3D Deep Learning-Based Automatic Diagnosis of Nasal Fractures

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 506
Author(s):  
Yu-Jin Seol ◽  
Young-Jae Kim ◽  
Yoon-Sang Kim ◽  
Young-Woo Cheon ◽  
Kwang-Gi Kim

This paper reports a study on the 3-dimensional deep-learning-based automatic diagnosis of nasal fractures. (1) Background: The nasal bone is the most protuberant feature of the face; it is therefore highly vulnerable to facial trauma, and nasal fractures are the most common facial fractures worldwide. In addition, the fractured bone adheres and deforms rapidly, so a clear diagnosis is needed early after fracture onset. (2) Methods: The collected computed tomography images were reconstructed into isotropic voxel data covering the whole region of the nasal bone, represented in a fixed cubic volume. The configured 3-dimensional input data were then classified automatically by deep residual neural networks (3D-ResNet34 and 3D-ResNet50) that exploit spatial context information within a single network, and performance was evaluated by 5-fold cross-validation. (3) Results: Binary classification of nasal fractures with the simple 3D-ResNet34 and 3D-ResNet50 networks achieved areas under the receiver operating characteristic curve of 94.5% and 93.4%, respectively, indicating unprecedentedly high performance on this task. (4) Conclusions: This paper presents the possibility of automatic nasal bone fracture diagnosis using a single 3-dimensional ResNet-based classification network, which is expected to improve the diagnostic environment with future research.
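
As a rough illustration of this kind of pipeline, the sketch below (not the authors' code) trains a small 3-dimensional CNN on fixed-size CT volumes and evaluates it with 5-fold cross-validation and ROC AUC. The network architecture, volume size, and full-batch training loop are simplified assumptions standing in for the 3D ResNets used in the study.

```python
# Minimal sketch, assuming volumes are preprocessed to a fixed cubic shape
# (e.g., 64^3 voxels) with a single channel, and labels are 0/1 fracture flags.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

class Tiny3DNet(nn.Module):
    """Stand-in for a 3D ResNet: a few 3D convolutions plus a linear head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):                      # x: (N, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1)).squeeze(1)

def crossval_auc(volumes, labels, epochs=5, lr=1e-3):
    """volumes: (N, 1, D, H, W) float32 array; labels: (N,) array of 0/1."""
    aucs = []
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
        model, loss_fn = Tiny3DNet(), nn.BCEWithLogitsLoss()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        x_tr = torch.from_numpy(volumes[train_idx])
        y_tr = torch.from_numpy(labels[train_idx]).float()
        for _ in range(epochs):                # full-batch training for brevity
            opt.zero_grad()
            loss_fn(model(x_tr), y_tr).backward()
            opt.step()
        with torch.no_grad():
            scores = torch.sigmoid(model(torch.from_numpy(volumes[test_idx]))).numpy()
        aucs.append(roc_auc_score(labels[test_idx], scores))
    return float(np.mean(aucs))
```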

2021 ◽  
Vol 11 ◽  
Author(s):  
Yubizhuo Wang ◽  
Jiayuan Shao ◽  
Pan Wang ◽  
Lintao Chen ◽  
Mingliang Ying ◽  
...  

Background: Our aim was to establish a deep learning radiomics method to preoperatively evaluate regional lymph node (LN) staging for hilar cholangiocarcinoma (HC) patients. Methods and Materials: Of the 179 enrolled HC patients, 90 were pathologically diagnosed with lymph node metastasis. Quantitative radiomic features and deep learning features were extracted. An LN metastasis status classifier was developed by integrating a support vector machine, a high-performance deep learning radiomics signature, and three clinical characteristics. An LN metastasis stratification classifier (N1 vs. N2) was also proposed with subgroup analysis. Results: The average areas under the receiver operating characteristic curve (AUCs) of the LN metastasis status classifier reached 0.866 in the training cohort and 0.870 in the external test cohorts. Meanwhile, the LN metastasis stratification classifier performed well in predicting the risk of LN metastasis, with an average AUC of 0.946. Conclusions: Two classifiers derived from computed tomography images performed well in predicting LN staging in HC and will be reliable evaluation tools to improve decision-making.
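
The feature-fusion idea described here can be made concrete with a short sketch. The following is not the authors' pipeline; the feature dimensions, the synthetic data, and the unnamed clinical covariates are illustrative assumptions, showing only how radiomic, deep, and clinical features might be concatenated and fed to an SVM classifier.

```python
# Minimal sketch: feature-level fusion followed by an SVM LN-metastasis classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 179
radiomic = rng.normal(size=(n, 100))   # handcrafted radiomic features (placeholder)
deep = rng.normal(size=(n, 256))       # deep learning feature vectors (placeholder)
clinical = rng.normal(size=(n, 3))     # three clinical covariates (placeholder)
y = rng.integers(0, 2, size=n)         # LN metastasis status (0/1), synthetic

X = np.hstack([radiomic, deep, clinical])        # concatenate the three feature sets
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", probability=True, random_state=0))
clf.fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```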


2019 ◽  
Vol 7 (12) ◽  
Author(s):  
Júlio Leite de Araújo-Júnior ◽  
Elma Mariana Verçosa de Melo-Silva ◽  
Anderson Maikon de Souza-Santos ◽  
Tiburtino José de Lima-Neto ◽  
Murilo Quintão dos Santos ◽  
...  

Introduction: The nasal bones are the most prominent bones of the facial skeleton, which makes them the most frequently involved in facial fractures; the nasal bone is the third most commonly fractured bone in the human skeleton. Objective: To present a case report of a nasal fracture in a pediatric patient treated with closed reduction. Method: Descriptive study of a patient with a clinical and imaging diagnosis of nasal fracture. Conclusion: Treatment by closed reduction proved adequate for pediatric patients. The occurrence of trauma and injuries associated with nasal fractures reinforces the importance of a multidisciplinary approach. Keywords: Bone Fractures; Nasal Bone; Facial Injuries.


2021 ◽  
pp. 002203452110404
Author(s):  
J. Hao ◽  
W. Liao ◽  
Y.L. Zhang ◽  
J. Peng ◽  
Z. Zhao ◽  
...  

Digital dentistry plays a pivotal role in dental health care. A critical step in many digital dental systems is to accurately delineate individual teeth and the gingiva in 3-dimensional intraoral scanned mesh data. However, previous state-of-the-art methods are either time-consuming or error-prone, hindering their clinical applicability. This article presents an accurate, efficient, and fully automated deep learning model trained on a data set of 4,000 intraoral scans annotated by experienced human experts. On a holdout data set of 200 scans, our model achieves a per-face accuracy, average-area accuracy, and area under the receiver operating characteristic curve of 96.94%, 98.26%, and 0.9991, respectively, significantly outperforming the state-of-the-art baselines. In addition, our model takes only about 24 s to generate segmentation outputs, as opposed to >5 min for the baseline and 15 min for human experts. A clinical performance test on 500 patients with malocclusion and/or abnormal teeth shows that 96.9% of the segmentations are satisfactory for clinical applications, 2.9% automatically trigger alarms for human refinement, and only 0.2% need rework. Our research demonstrates the potential for deep learning to improve the efficacy and efficiency of dental treatment and digital dentistry.
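
The two headline segmentation metrics can be illustrated with a short sketch. The helpers below are illustrative, not the paper's code; they assume per-face integer labels aligned with the faces of a triangular mesh and compute per-face accuracy and an area-weighted accuracy in which each face contributes in proportion to its triangle area.

```python
# Minimal sketch of per-face and area-weighted accuracy for a labeled mesh.
import numpy as np

def triangle_areas(vertices, faces):
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    # Area of each triangle = half the norm of the cross product of two edge vectors.
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def per_face_accuracy(pred_labels, true_labels):
    return float(np.mean(np.asarray(pred_labels) == np.asarray(true_labels)))

def area_weighted_accuracy(pred_labels, true_labels, vertices, faces):
    areas = triangle_areas(vertices, faces)
    correct = (np.asarray(pred_labels) == np.asarray(true_labels)).astype(float)
    return float(np.sum(correct * areas) / np.sum(areas))
```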


2021 ◽  
Vol 11 (12) ◽  
pp. 1248
Author(s):  
Te-Chun Hsieh ◽  
Chiung-Wei Liao ◽  
Yung-Chi Lai ◽  
Kin-Man Law ◽  
Pak-Ki Chan ◽  
...  

Patients with bone metastases have poor prognoses. A bone scan is a commonly applied diagnostic tool for this condition. However, its accuracy is limited by the nonspecific character of radiopharmaceutical accumulation, which indicates all-cause bone remodeling. The current study evaluated deep learning techniques to improve the efficacy of bone metastasis detection on bone scans, retrospectively examining 19,041 patients aged 22 to 92 years who underwent bone scans between May 2011 and December 2019. We developed several functional-imaging binary classification deep learning algorithms suitable for bone scans. The presence or absence of bone metastases, used as the reference standard, was determined through a review of image reports by nuclear medicine physicians. Classification was conducted with convolutional neural network (CNN)-based, residual neural network (ResNet), and densely connected convolutional network (DenseNet) models, with and without contrastive learning. Each set of bone scans contained anterior and posterior images with resolutions of 1024 × 256 pixels. A total of 37,427 image sets were analyzed. The overall performance of all models improved with contrastive learning. The accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve, and negative predictive value (NPV) of the optimal model were 0.961, 0.878, 0.599, 0.712, 0.92, and 0.965, respectively. In particular, the high NPV may help physicians safely exclude bone metastases, decreasing physician workload and improving patient care.
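
Because NPV is the metric emphasized for safely excluding metastases, a small helper (not the study's code) shows how all of the reported metrics, including NPV, can be derived from binary ground-truth labels and model scores; the 0.5 threshold is an illustrative assumption.

```python
# Minimal sketch: confusion-matrix-derived metrics for a binary bone-scan classifier.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def bone_scan_metrics(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0        # sensitivity
    npv = tn / (tn + fn) if (tn + fn) else 0.0           # negative predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "auc": roc_auc_score(y_true, y_score), "npv": npv}
```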


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Yuta Nakamura ◽  
Shouhei Hanaoka ◽  
Yukihiro Nomura ◽  
Takahiro Nakao ◽  
Soichiro Miki ◽  
...  

Abstract Background It is essential for radiologists to communicate actionable findings to referring clinicians reliably. Natural language processing (NLP) has been shown to help identify free-text radiology reports containing actionable findings. However, the application of recent deep learning techniques to radiology reports, which could improve detection performance, has not been thoroughly examined. Moreover, the free text that clinicians enter in the ordering form (order information) has seldom been used to identify actionable reports. This study aims to evaluate the benefits of two new approaches: (1) bidirectional encoder representations from transformers (BERT), a recent deep learning architecture in NLP, and (2) using order information in addition to radiology reports. Methods We performed a binary classification to distinguish actionable reports (i.e., radiology reports tagged as actionable in actual radiological practice) from non-actionable ones (those without an actionable tag). A total of 90,923 Japanese radiology reports from our hospital were used, of which 788 (0.87%) were actionable. We evaluated four methods: statistical machine learning with logistic regression (LR) and with a gradient boosting decision tree (GBDT), and deep learning with a bidirectional long short-term memory (LSTM) model and with a publicly available Japanese BERT model. Each method was used with two different inputs, radiology reports alone and pairs of order information and radiology reports, so eight experiments were conducted to examine performance. Results Without order information, BERT achieved the highest area under the precision-recall curve (AUPRC) of 0.5138, a statistically significant improvement over LR, GBDT, and LSTM, and the highest area under the receiver operating characteristic curve (AUROC) of 0.9516. Simply coupling the order information with the radiology reports slightly increased the AUPRC of BERT but did not lead to a statistically significant improvement. This may be due to the complexity of clinical decisions made by radiologists. Conclusions BERT appears useful for detecting actionable reports. More sophisticated methods are required to use order information effectively.
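
As a rough sketch of the order-information-plus-report input scheme, the code below scores (order information, report) text pairs with a BERT sequence classifier and evaluates AUPRC. It is not the study's code: "some-japanese-bert" is a placeholder checkpoint name, and the sketch assumes a model already fine-tuned for this binary actionable/non-actionable task.

```python
# Minimal sketch, assuming a fine-tuned Japanese BERT checkpoint is available.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import average_precision_score

MODEL_NAME = "some-japanese-bert"  # placeholder; substitute a real fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def score_reports(order_texts, report_texts, batch_size=8):
    """Return P(actionable) for each (order information, radiology report) pair."""
    probs = []
    for i in range(0, len(report_texts), batch_size):
        enc = tokenizer(order_texts[i:i + batch_size],      # first segment: order info
                        report_texts[i:i + batch_size],      # second segment: report
                        truncation=True, padding=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**enc).logits
        probs.extend(torch.softmax(logits, dim=-1)[:, 1].tolist())
    return probs

# AUPRC against the gold actionable tags (y_true is a list of 0/1 labels):
# auprc = average_precision_score(y_true, score_reports(orders, reports))
```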


OTO Open ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 2473974X2092434
Author(s):  
Yong Gi Jung ◽  
Hanaro Park ◽  
Jiwon Seo

Nasal deformities due to trauma are more challenging to correct with rhinoplasty than nasal deformities of nontraumatic causes. Nasal osteotomy is an essential procedure for bone deviations. Preoperative planning is vital in these cases, but it is challenging to comprehend the 3-dimensional (3D) structure of the nasal bone from 2-dimensional facial photographs and computed tomography images. We used a 3D-printing technique to fabricate real-size facial bone models with physical properties and texture similar to those of actual bone. Furthermore, we established a precise surgical plan using simulated osteotomy on the 3D-printed model. A fused deposition modeling desktop 3D printer with polylactic acid filaments was used. A surgical plan was established using simulated osteotomy in 11 cases, and the actual surgery was performed as planned in 10 cases (90.9%). The 3D-printed model and simulated osteotomy were useful for precise planning of osteotomy to correct nasal deformities due to trauma.


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253204
Author(s):  
Juyoung Lee ◽  
Brian Bartholmai ◽  
Tobias Peikert ◽  
Jaehee Chun ◽  
Hojin Kim ◽  
...  

Differentiating the invasiveness of ground-glass nodules (GGNs) is clinically important, and several institutions have attempted to develop their own solutions using computed tomography images. The purpose of this study is to evaluate Computer-Aided Analysis of Risk Yield (CANARY), a validated virtual biopsy and risk-stratification machine-learning tool for lung adenocarcinomas, in a Korean patient population. To this end, a total of 380 GGNs from 360 patients who underwent pulmonary resection in a single institution were reviewed. Based on the Score Indicative of Lung Cancer Aggression (SILA), a quantitative indicator of CANARY analysis results, all of the GGNs were classified as "indolent" (atypical adenomatous hyperplasia, adenocarcinoma in situ, or minimally invasive adenocarcinoma) or "invasive" (invasive adenocarcinoma) and compared with the pathology reports. To account for the possibility of uneven class distribution, statistical analysis was performed on (1) the entire cohort and (2) six randomly extracted sets of class-balanced samples. For each trial, the optimal cutoff SILA was obtained from the receiver operating characteristic curve. The classification results were evaluated using several binary classification metrics. Of the 380 GGNs, 65 (17.1%) were indolent and 315 (82.9%) were invasive, with mean SILA values of 0.195 ± 0.124 and 0.391 ± 0.208, respectively (p < 0.0001). The areas under the curve (AUCs) of the two analyses were 0.814 and 0.809, respectively, with an optimal threshold SILA of 0.229 for both. The macro F1-score and geometric mean were 0.675 and 0.745 for the entire cohort, while both were 0.741 in the class-equalized dataset. These results confirm that CANARY is acceptable for classifying GGNs in Korean patients once the cutoff SILA is calibrated. We found that adjusting the cutoff SILA is needed to apply CANARY in other countries or ethnic groups, and that the geometric mean may be more objective than the F1-score or AUC for binary classification of imbalanced data.
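
The abstract does not state how the optimal cutoff was chosen beyond "from the receiver operating characteristic curve"; the sketch below (not the CANARY source) assumes Youden's J statistic as one common choice and shows how the macro F1 and the geometric mean of sensitivity and specificity could then be computed from SILA scores.

```python
# Minimal sketch: ROC-derived cutoff plus macro F1 and geometric mean.
import numpy as np
from sklearn.metrics import roc_curve, f1_score, confusion_matrix

def evaluate_sila(sila_scores, invasive_labels):
    """sila_scores: (N,) floats; invasive_labels: (N,) with 0 = indolent, 1 = invasive."""
    fpr, tpr, thresholds = roc_curve(invasive_labels, sila_scores)
    cutoff = thresholds[np.argmax(tpr - fpr)]          # Youden's J (assumed criterion)
    pred = (np.asarray(sila_scores) >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(invasive_labels, pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "cutoff": float(cutoff),
        "macro_f1": f1_score(invasive_labels, pred, average="macro"),
        "geometric_mean": float(np.sqrt(sensitivity * specificity)),
    }
```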


2020 ◽  
Vol 14 ◽  
Author(s):  
Meghna Dhalaria ◽  
Ekta Gandotra

Purpose: This paper provides the basics of Android malware, its evolution, and the tools and techniques for malware analysis. Its main aim is to present a review of the literature on Android malware detection using machine learning and deep learning and to identify the research gaps. It provides insights obtained from the literature and future research directions that could help researchers develop robust and accurate techniques for the classification of Android malware. Design/Methodology/Approach: This paper provides a review of the basics of Android malware, its evolution timeline, and detection techniques. It covers the tools and techniques for analyzing Android malware statically and dynamically to extract features and finally classify them using machine learning and deep learning algorithms. Findings: The number of Android users is expanding very fast owing to the popularity of Android devices. As a result, Android users face greater risk from the exponential growth of Android malware. Ongoing research aims to overcome the constraints of earlier approaches to malware detection. Because evolving malware is complex and sophisticated, earlier approaches such as signature-based and machine-learning-based detection cannot identify it in a timely and accurate manner. The findings of the review show various limitations of earlier techniques, i.e., longer detection times, high false-positive and false-negative rates, low accuracy in detecting sophisticated malware, and limited flexibility. Originality/value: This paper provides a systematic and comprehensive review of the tools and techniques employed for the analysis, classification, and identification of Android malicious applications. It includes the timeline of Android malware evolution and the tools and techniques for analyzing malware statically and dynamically to extract features and use them for detection and classification with machine learning and deep learning algorithms. On the basis of the detailed literature review, various research gaps are listed. The paper also provides future research directions and insights that could help researchers develop innovative and robust techniques for detecting and classifying Android malware.
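
As a deliberately simplified instance of the static-analysis-plus-machine-learning workflow the review surveys, the sketch below one-hot encodes an app's requested permissions (a placeholder feature set) and trains a random forest on toy data; the permission list, dataset, and classifier choice are all illustrative assumptions, not a method from any surveyed paper.

```python
# Minimal sketch: static permission features -> classical ML malware classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

PERMISSIONS = ["INTERNET", "READ_SMS", "SEND_SMS", "READ_CONTACTS", "RECORD_AUDIO"]

def to_feature_vector(requested_permissions):
    """One-hot encode which permissions an APK requests (static feature extraction)."""
    return [int(p in requested_permissions) for p in PERMISSIONS]

# Placeholder dataset: requested-permission sets with benign (0) / malware (1) labels.
apps = [({"INTERNET"}, 0), ({"INTERNET", "READ_CONTACTS"}, 0),
        ({"INTERNET", "READ_SMS", "SEND_SMS"}, 1),
        ({"INTERNET", "SEND_SMS", "RECORD_AUDIO"}, 1)] * 25
X = np.array([to_feature_vector(perms) for perms, _ in apps])
y = np.array([label for _, label in apps])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```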


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Manan Binth Taj Noor ◽  
Nusrat Zerin Zenia ◽  
M Shamim Kaiser ◽  
Shamim Al Mamun ◽  
Mufti Mahmud

Abstract Neuroimaging, in particular magnetic resonance imaging (MRI), has played an important role in understanding brain function and its disorders during the last couple of decades. These cutting-edge MRI scans, supported by high-performance computational tools and novel machine learning (ML) techniques, have opened up unprecedented possibilities for identifying neurological disorders. However, similarities in disease phenotypes make it very difficult to detect such disorders accurately from the acquired neuroimaging data. This article critically examines and compares the performance of existing deep learning (DL)-based methods for detecting neurological disorders (focusing on Alzheimer's disease, Parkinson's disease, and schizophrenia) from MRI data acquired using different modalities, including functional and structural MRI. The comparative performance analysis of various DL architectures across different disorders and imaging modalities suggests that the convolutional neural network outperforms other methods in detecting neurological disorders. Towards the end, a number of current research challenges are indicated and some possible future research directions are provided.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Ling-Ping Cen ◽  
Jie Ji ◽  
Jian-Wei Lin ◽  
Si-Tong Ju ◽  
Hong-Jie Lin ◽  
...  

Abstract Retinal fundus diseases can lead to irreversible visual impairment without timely diagnosis and appropriate treatment. Single-disease deep learning algorithms have been developed for the detection of diabetic retinopathy, age-related macular degeneration, and glaucoma. Here, we developed a deep learning platform (DLP) capable of detecting multiple common referable fundus diseases and conditions (39 classes) using 249,620 fundus images marked with 275,543 labels from heterogeneous sources. Our DLP achieved a frequency-weighted average F1 score of 0.923, sensitivity of 0.978, specificity of 0.996, and area under the receiver operating characteristic curve (AUC) of 0.9984 for multi-label classification in the primary test dataset, reaching the average level of retina specialists. External multi-hospital testing, public data testing, and a tele-reading application also showed high efficiency for the detection of multiple retinal diseases and conditions. These results indicate that our DLP can be applied for retinal fundus disease triage, especially in remote areas around the world.
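
The frequency-weighted F1 and AUC for a 39-class multi-label task can be computed as in the sketch below (illustrative, not the platform's code), assuming a binary label matrix and a probability matrix of shape (n_images, n_classes) and a uniform 0.5 decision threshold.

```python
# Minimal sketch: multi-label evaluation with frequency-weighted F1 and macro AUC.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def multilabel_metrics(y_true, y_prob, threshold=0.5):
    """y_true: (N, C) binary indicator matrix; y_prob: (N, C) predicted probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        # 'weighted' averages per-class F1 weighted by label frequency (support)
        "weighted_f1": f1_score(y_true, y_pred, average="weighted", zero_division=0),
        "macro_auc": roc_auc_score(y_true, y_prob, average="macro"),
    }
```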

