Transfer Learning via Artificial Intelligence for Guiding Implant Placement in the Posterior Mandible: An In Vitro Study

Author(s):  
Yun Liu ◽  
Zhi-cong Chen ◽  
Chun-ho Chu ◽  
Fei-Long Deng

Abstract Background: To explore the capacity of an artificial intelligence (AI) system based on a single shot multibox detector (SSD) and a voxel-to-voxel prediction network for pose estimation (V2V-PoseNet) in automatically designing implant plans. Methods: 2500 and 67 cases were used to develop and pre-train the AI system. After that, 12 patients missing the mandibular left first molar were selected to test the capacity of the AI system in automatically designing implant plans. Three algorithm-based implant positions were defined: Groups A, B and C (implant positions dependent on 8, 9 and 10 points, respectively). The AI system was then used to detect the characteristic annotators and determine the implant position. For every group, the actual implant position was compared with the algorithm-determined ideal position, and the global, angular, depth and lateral deviations were calculated. One-way ANOVA followed by Tukey's test was performed for statistical comparisons. The significance level was set at P < 0.05. Results: Group C showed the least coronal (0.6638 ± 0.2651 mm, range: 0.2060 to 1.109 mm) and apical (1.157 ± 0.3350 mm, range: 0.5840 to 1.654 mm) deviation; the same trend was observed in the angular deviation (5.307 ± 2.891°, range: 2.049 to 10.90°), and the results are similar to those of the traditional static guide. Conclusion: It can be concluded that the AI system has the capacity for deep learning: as more characteristic annotators are involved in the algorithm, the AI system captures the anatomy of the target region better and generates a better implant plan via the deep learning algorithm.

2021 ◽  
Vol 10 (3) ◽  
pp. 391
Author(s):  
Rani D’haese ◽  
Tom Vrombaut ◽  
Geert Hommez ◽  
Hugo De Bruyn ◽  
Stefan Vandeweghe

Purpose: The aim of this in vitro study is to evaluate the accuracy of implant position using mucosal supported surgical guides, produced by a desktop 3D printer. Methods: Ninety implants (Bone Level Roxolid, 4.1 mm × 10 mm, Straumann, Villerat, Switzerland) were placed in fifteen mandibular casts (Bonemodels, Castellón de la Plana, Spain). A mucosa-supported guide was designed and printed for each of the fifteen casts. After placement of the implants, the location was assessed by scanning the cast and scan bodies with an intra-oral scanner (Primescan®, Dentsply Sirona, York, PA, USA). Two comparisons were performed: one with the mucosa as a reference, and one where only the implants were aligned. Angular, coronal and apical deviations were measured. Results: The mean implant angular deviation for tissue and implant alignment were 3.25° (SD 1.69°) and 2.39° (SD 1.42°) respectively, the coronal deviation 0.82 mm (SD 0.43 mm) and 0.45 mm (SD 0.31 mm) and the apical deviation 0.99 mm (SD 0.45 mm) and 0.71 mm (SD 0.43 mm). All three variables were significantly different between the tissue and implant alignment (p < 0.001). Conclusion: Based on the results of this study, we conclude that guided implant surgery using desktop 3D printed mucosa-supported guides has a clinically acceptable level of accuracy. The resilience of the mucosa has a negative effect on the guide stability and increases the deviation in implant position.
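The angular, coronal and apical deviations reported in these guided-surgery studies can be computed from the planned and actual implant geometry. A minimal sketch, assuming each implant is described by its coronal entry point and apex in millimeters (the function and point names are hypothetical, not the authors' software):

```python
import math

def deviations(planned_entry, planned_apex, actual_entry, actual_apex):
    """Angular (degrees), coronal and apical (mm) deviation between a
    planned and an actually placed implant; each point is an (x, y, z) tuple."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def norm(v):
        return math.sqrt(sum(c * c for c in v))

    # Implant axis runs from the coronal entry point to the apex.
    v1 = sub(planned_apex, planned_entry)
    v2 = sub(actual_apex, actual_entry)
    dot = sum(a * b for a, b in zip(v1, v2))
    cos_angle = max(-1.0, min(1.0, dot / (norm(v1) * norm(v2))))
    angular = math.degrees(math.acos(cos_angle))

    coronal = norm(sub(planned_entry, actual_entry))  # 3D offset at platform
    apical = norm(sub(planned_apex, actual_apex))     # 3D offset at the apex
    return angular, coronal, apical
```

For example, an implant translated 0.5 mm laterally with no tilt gives 0° angular and 0.5 mm coronal and apical deviation, while a pure tilt shows up in the angular term only at the entry point.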


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2699 ◽  
Author(s):  
Redhwan Algabri ◽  
Mun-Taek Choi

Human following is one of the fundamental functions in human–robot interaction for mobile robots. This paper shows a novel framework with state-machine control in which the robot tracks the target person in occlusion and illumination changes, as well as navigates with obstacle avoidance while following the target to the destination. People are detected and tracked using a deep learning algorithm, called Single Shot MultiBox Detector, and the target person is identified by extracting the color feature using the hue-saturation-value histogram. The robot follows the target safely to the destination using a simultaneous localization and mapping algorithm with the LIDAR sensor for obstacle avoidance. We performed intensive experiments on our human following approach in an indoor environment with multiple people and moderate illumination changes. Experimental results indicated that the robot followed the target well to the destination, showing the effectiveness and practicability of our proposed system in the given environment.
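The color feature used to re-identify the target person among SSD detections can be sketched in plain Python: build a normalized hue histogram from a detection crop and compare it to the enrolled target's histogram. The helper names and the 16-bin choice are assumptions for illustration, not the authors' implementation:

```python
import colorsys

def hue_histogram(pixels, bins=16):
    """Normalized hue histogram of an iterable of (r, g, b) pixels in 0..255."""
    hist = [0] * bins
    n = 0
    for r, g, b in pixels:
        h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hist[min(int(h * bins), bins - 1)] += 1
        n += 1
    return [c / n for c in hist] if n else hist

def intersection(h1, h2):
    """Histogram intersection in [0, 1]; 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Each detected person's crop would be scored against the enrolled target histogram, and the detection with the highest intersection above a threshold is followed; hue is used because it is comparatively stable under the moderate illumination changes the experiments describe.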


Author(s):  
Shuai Wang ◽  
Bo Kang ◽  
Jinlu Ma ◽  
Xianjun Zeng ◽  
Mingming Xiao ◽  
...  

Abstract Objective The outbreak of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has caused more than 26 million cases of coronavirus disease (COVID-19) worldwide so far. To control the spread of the disease, screening large numbers of suspected cases for appropriate quarantine and treatment is a priority. Pathogenic laboratory testing is typically the gold standard, but it suffers from a significant false-negative rate, adding to the urgent need for alternative diagnostic methods to combat the disease. Based on COVID-19 radiographic changes in CT images, this study hypothesized that artificial intelligence methods might be able to extract specific graphical features of COVID-19 and provide a clinical diagnosis ahead of the pathogenic test, thus saving critical time for disease control. Methods We collected 1065 CT images of pathogen-confirmed COVID-19 cases along with images from cases previously diagnosed with typical viral pneumonia. We modified the Inception transfer-learning model to establish the algorithm, followed by internal and external validation. Results The internal validation achieved a total accuracy of 89.5% with a specificity of 0.88 and a sensitivity of 0.87. The external testing dataset showed a total accuracy of 79.3% with a specificity of 0.83 and a sensitivity of 0.67. In addition, among 54 COVID-19 images whose first two nucleic acid test results were negative, 46 were predicted as COVID-19 positive by the algorithm, an accuracy of 85.2%. Conclusion These results demonstrate the proof-of-principle for using artificial intelligence to extract radiological features for timely and accurate COVID-19 diagnosis. Key Points
• The study evaluated the diagnostic performance of a deep learning algorithm using CT images to screen for COVID-19 during the influenza season.
• As a screening method, our model achieved a relatively high sensitivity on internal and external CT image datasets.
• The model was used to distinguish between COVID-19 and other typical viral pneumonia, both of which have quite similar radiologic characteristics.
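The accuracy, sensitivity and specificity reported above all follow from a single binary confusion matrix. A minimal illustration (the counts below are hypothetical, chosen only so the derived rates are easy to check, and are not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from a binary confusion matrix,
    treating COVID-19 as the positive class."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate among COVID-19 cases
    specificity = tn / (tn + fp)   # true-negative rate among other pneumonia
    return accuracy, sensitivity, specificity

# Hypothetical counts: 100 COVID-19 cases, 100 other viral pneumonia cases.
acc, sens, spec = screening_metrics(tp=87, fp=12, tn=88, fn=13)
```

With these made-up counts, accuracy is 0.875, sensitivity 0.87 and specificity 0.88; for a screening tool, sensitivity (not missing infected patients) is the metric the abstract emphasizes.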


This paper presents an efficient and fast deep learning algorithm, based on neural networks, for object detection and pedestrian detection. The technique, called MobileNet Single Shot Detector, is an extension of convolutional neural networks. It is based on depthwise separable convolutions, which build a lightweight deep convolutional network: a single filter is applied to each input channel, and the outputs are combined using pointwise convolution. The Single Shot MultiBox Detector is a feed-forward convolutional network that is combined with MobileNet to give efficient and accurate results; MobileNet combined with SSD makes detection much faster than SSD alone. The accuracy of this technique is calculated over colored (RGB) images and also over infrared images, and its results are compared with those of a shallow machine learning pipeline of feature extraction plus classification, namely HOG plus SVM. The comparison between the proposed deep learning and shallow learning techniques was conducted over a benchmark dataset, with validation testing on our own dataset, in order to measure the efficiency of both algorithms and find an effective algorithm that works quickly and accurately for real-world pedestrian detection.
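The speed-up from depthwise separable convolutions, the core of MobileNet, can be illustrated with a quick parameter count: a standard k x k convolution costs k·k·C_in·C_out weights, while the depthwise-then-pointwise factorization costs k·k·C_in + C_in·C_out. The layer sizes below are hypothetical, chosen only to show the ratio:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution over c_in -> c_out channels."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernel, 32 input channels, 64 output channels.
standard = conv_params(3, 32, 64)                  # 18432 weights
separable = depthwise_separable_params(3, 32, 64)  # 2336 weights
ratio = separable / standard                       # = 1/c_out + 1/k**2
```

The ratio 1/C_out + 1/k² is roughly 1/8 to 1/9 for typical 3 x 3 layers, which is why MobileNet-SSD runs much faster than SSD over a full-width backbone at comparable accuracy.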


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2557
Author(s):  
Ben Zierdt ◽  
Taichu Shi ◽  
Thomas DeGroat ◽  
Sam Furman ◽  
Nicholas Papas ◽  
...  

Ultraviolet disinfection has been proven to be effective for surface sanitation. Traditional ultraviolet disinfection systems generate omnidirectional radiation, which introduces safety concerns regarding human exposure. Large-scale disinfection must therefore be performed without humans present, which limits the time efficiency of disinfection. We propose and experimentally demonstrate a targeted ultraviolet disinfection system using a combination of robotics, lasers, and deep learning. The system uses a laser-galvo and a camera mounted on a two-axis gimbal running a custom deep learning algorithm. This allows ultraviolet radiation to be applied to any surface in the room where the system is mounted, and the algorithm ensures that the laser targets the desired surfaces while avoiding others, such as humans. Both the laser-galvo and the deep learning algorithm were tested for targeted disinfection.

