Detecting ossification of the posterior longitudinal ligament on plain radiographs using a deep convolutional neural network: A pilot study

Author(s): Takahisa Ogawa, Toshitaka Yoshii, Jun Oyama, Nobuhiro Sugimura, Takashi Akada, ...
2020, Vol 38 (7), pp. 1465-1471
Author(s): Alireza Borjali, Antonia F. Chen, Orhun K. Muratoglu, Mohammad A. Morid, Kartik M. Varadarajan
2019, Vol 45 (1), pp. 24-35
Author(s): Rikiya Yamashita, Amber Mittendorf, Zhe Zhu, Kathryn J. Fowler, Cynthia S. Santillan, ...
2020
Author(s): Alireza Borjali, Antonia F. Chen, Hany S. Bedair, Christopher M. Melnic, Orhun K. Muratoglu, ...

ABSTRACT: A crucial step in preoperative planning for revision total hip replacement (THR) surgery is accurate identification of the failed implant design, especially if one or more well-fixed/functioning components are to be retained. Manual identification of the implant design from preoperative radiographic images can be time-consuming and inaccurate, which can ultimately lead to increased operating room time, more complex surgery, and increased healthcare costs. No automated system has been developed to accurately and efficiently identify THR implant designs. In this study, we present a novel, fully automatic and interpretable approach to identify the design of nine different THR femoral implants from plain radiographs using a deep convolutional neural network (CNN). We also compared the CNN's performance with that of three board-certified, fellowship-trained orthopaedic surgeons. The CNN achieved on-par accuracy with the orthopaedic surgeons while being significantly faster. The need for additional training data for less distinct designs was also highlighted. Such a CNN can be used to automatically identify the design of a failed THR femoral implant preoperatively in a fraction of a second, saving time and improving identification accuracy.
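The abstract does not give implementation details, but the described approach, a CNN classifying nine femoral implant designs from plain radiographs, maps naturally onto a transfer-learning setup. Below is a minimal PyTorch sketch assuming a pretrained ResNet-18 backbone and a hypothetical radiographs/train/<design_name>/ folder layout; the backbone, hyperparameters, and paths are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: fine-tune a pretrained CNN to classify THR femoral implant
# designs from plain radiographs. Backbone (ResNet-18), directory layout, and
# hyperparameters are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_DESIGNS = 9  # nine femoral implant designs, as in the abstract

# Radiographs are grayscale; replicate to 3 channels for an ImageNet backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: radiographs/train/<design_name>/<image>.png
train_set = datasets.ImageFolder("radiographs/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_DESIGNS)  # replace classifier head

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

At inference, a single forward pass over a preprocessed radiograph returns scores for the nine designs, which is what allows the identification to run in a fraction of a second.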


2020, Vol 2020 (4), pp. 4-14
Author(s): Vladimir Budak, Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight, and the corresponding type of lens with its technical parameters, using this new CNN-based model.
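As a rough illustration of the transfer-training and Grad-CAM steps described above, here is a PyTorch sketch using torchvision's pretrained GoogLeNet with a replaced classification head and a hand-rolled Grad-CAM on the last inception block. The number of beam-angle classes, the input file name, and the layer choice are assumptions for illustration; in practice the new head would first be fine-tuned on the spotlight collection before Grad-CAM is meaningful.

```python
# Minimal sketch: GoogLeNet transfer setup plus Grad-CAM on the last inception
# block. Class count, file name, and layer choice are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

NUM_BEAM_CLASSES = 6  # hypothetical number of beam-angle classes on the scale

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_BEAM_CLASSES)  # new head
model.eval()

# Capture the last inception block's activations and gradients for Grad-CAM.
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["feat"] = output
    output.register_hook(lambda grad: gradients.update({"feat": grad}))

model.inception5b.register_forward_hook(save_activation)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# "spotlight.jpg" is a placeholder for one image from the spotlight collection.
image = preprocess(Image.open("spotlight.jpg").convert("RGB")).unsqueeze(0)
scores = model(image)
scores[0, scores[0].argmax()].backward()  # gradient of the top-scoring class

# Grad-CAM: weight each feature map by its spatially averaged gradient, ReLU it.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1)).squeeze(0)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # 7x7 heatmap to upsample onto the input image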

