A Deep Learning System for Recognizing and Recovering Contaminated Slider Serial Numbers in Hard Disk Manufacturing Processes

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6261
Author(s):  
Chousak Chousangsuntorn ◽  
Teerawat Tongloy ◽  
Santhad Chuwongin ◽  
Siridech Boonsang

This paper outlines a system for detecting printing errors and misidentifications on hard disk drive sliders, which may contribute to shipment tracking problems and incorrect product delivery to end users. A deep-learning-based technique is proposed for determining the printed identity of a slider serial number from images captured by a digital camera. Our approach starts with image preprocessing methods that deal with differences in lighting and printing positions, then progresses to deep learning character detection based on the You-Only-Look-Once (YOLO) v4 algorithm, and finally character classification. For character classification, four convolutional neural networks (CNNs) were compared for accuracy and effectiveness: DarkNet-19, EfficientNet-B0, ResNet-50, and DenseNet-201. Experiments on almost 15,000 photographs yielded accuracy greater than 99% for all four CNN networks, demonstrating the feasibility of the proposed technique. The EfficientNet-B0 network outperformed highly qualified human readers with the best recovery rate (98.4%) and fastest inference time (256.91 ms).
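In a two-stage pipeline like the one described (YOLO-style character detection followed by CNN classification), the per-character detections must ultimately be ordered into a serial-number string. The sketch below illustrates that final assembly step only; the function name, tuple layout, and confidence cutoff are hypothetical and not taken from the paper.

```python
def assemble_serial(detections, min_confidence=0.5):
    """Assemble a serial string from per-character detections.

    detections: list of (x_center, label, confidence) tuples, as a
    YOLO-style detector might emit after non-max suppression.
    """
    # Drop low-confidence detections (likely smudges or contamination).
    kept = [d for d in detections if d[2] >= min_confidence]
    # Sort left-to-right to recover reading order.
    kept.sort(key=lambda d: d[0])
    return "".join(label for _, label, _ in kept)

boxes = [(120, "7", 0.98), (40, "A", 0.99), (80, "3", 0.97), (160, "X", 0.30)]
print(assemble_serial(boxes))  # -> "A37" (low-confidence "X" is dropped)
```

A real implementation would also have to handle multi-line serials and near-vertical overlaps, which a single x-coordinate sort cannot resolve.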

2020 ◽  
pp. bjophthalmol-2020-317825
Author(s):  
Yonghao Li ◽  
Weibo Feng ◽  
Xiujuan Zhao ◽  
Bingqian Liu ◽  
Yan Zhang ◽  
...  

Background/aims: To apply deep learning technology to develop an artificial intelligence (AI) system that can identify vision-threatening conditions in high myopia patients based on optical coherence tomography (OCT) macular images.
Methods: In this cross-sectional, prospective study, a total of 5505 qualified OCT macular images obtained from 1048 high myopia patients admitted to Zhongshan Ophthalmic Centre (ZOC) from 2012 to 2017 were selected for the development of the AI system. The independent test dataset included 412 images obtained from 91 high myopia patients recruited at ZOC from January 2019 to May 2019. We adopted the InceptionResnetV2 architecture to train four independent convolutional neural network (CNN) models to identify the following four vision-threatening conditions in high myopia: retinoschisis, macular hole, retinal detachment and pathological myopic choroidal neovascularisation. Focal Loss was used to address class imbalance, and optimal operating thresholds were determined according to the Youden Index.
Results: In the independent test dataset, the areas under the receiver operating characteristic curves were high for all conditions (0.961 to 0.999). Our AI system achieved sensitivities equal to or better than those of retina specialists, as well as high specificities (greater than 90%). Moreover, our AI system provided a transparent and interpretable diagnosis with heatmaps.
Conclusions: We used OCT macular images to develop CNN models that identify vision-threatening conditions in high myopia patients. Our models achieved reliable sensitivities and high specificities comparable to those of retina specialists, and may be applied for large-scale high myopia screening and patient follow-up.
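The Youden Index mentioned in the methods, J = sensitivity + specificity − 1, picks the operating threshold that maximises J over the ROC curve. A minimal pure-Python sketch of that threshold search follows; it is not the authors' code, and the function and variable names are invented for illustration.

```python
def youden_threshold(scores, labels):
    """Return (threshold, J) maximising Youden's J = sens + spec - 1.

    scores: predicted probabilities for the positive class.
    labels: ground-truth labels, 1 = positive, 0 = negative.
    """
    best_t, best_j = 0.0, -1.0
    for t in sorted(set(scores)):  # each score is a candidate cutoff
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

print(youden_threshold([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # -> (0.35, 0.5)
```

In practice one would compute this from a library ROC curve (e.g. scikit-learn's `roc_curve`) rather than re-counting the confusion matrix at every threshold, but the selection criterion is the same.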


2020 ◽  
Vol 101 ◽  
pp. 209
Author(s):  
R. Baskaran ◽  
B. Ajay Rajasekaran ◽  
V. Rajinikanth
Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett's cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialised in the endoscopic diagnosis and treatment of Barrett's cancer. Results: The sensitivity, specificity, F1 score and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, whose sensitivity, specificity, F1 score and accuracy were 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicentre application of an AI-based system to the prediction of submucosal invasion in endoscopic images of Barrett's cancer. The AI system performed on par with international experts in the field, but more work is necessary to improve the system and apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.
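The four metrics reported above (sensitivity, specificity, F1, accuracy) all derive from a binary confusion matrix. A generic sketch of that computation follows; the counts in the example are illustrative and are not taken from the study.

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute (sensitivity, specificity, F1, accuracy) from confusion counts."""
    sens = tp / (tp + fn)                 # recall on the positive class
    spec = tn / (tn + fp)                 # recall on the negative class
    prec = tp / (tp + fp)                 # positive predictive value
    f1 = 2 * prec * sens / (prec + sens)  # harmonic mean of precision/recall
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, f1, acc

# Hypothetical counts: 8 true positives, 1 false positive,
# 9 true negatives, 2 false negatives.
sens, spec, f1, acc = binary_metrics(8, 1, 9, 2)
print(sens, spec, acc)  # -> 0.8 0.9 0.85
```

Note that F1 combines precision with sensitivity, not specificity, which is why an AI system and a human rater can have similar F1 scores while trading sensitivity against specificity, as in the results above.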


Bone ◽  
2021 ◽  
pp. 115972
Author(s):  
Abhinav Suri ◽  
Brandon C. Jones ◽  
Grace Ng ◽  
Nancy Anabaraonye ◽  
Patrick Beyrer ◽  
...  

Author(s):  
Yi-Chia Wu ◽  
Po-Yen Shih ◽  
Li-Perng Chen ◽  
Chia-Chin Wang ◽  
Hooman Samani
