Automated identification of cephalometric landmarks: Part 1—Comparisons between the latest deep-learning methods YOLOV3 and SSD

2019 ◽  
Vol 89 (6) ◽  
pp. 903-909 ◽  
Author(s):  
Ji-Hoon Park ◽  
Hye-Won Hwang ◽  
Jun-Ho Moon ◽  
Youngsung Yu ◽  
Hansuk Kim ◽  
...  

ABSTRACT Objective: To compare the accuracy and computational efficiency of two of the latest deep-learning algorithms for automatic identification of cephalometric landmarks. Materials and Methods: A total of 1028 cephalometric radiographic images were selected as learning data to train the You-Only-Look-Once version 3 (YOLOv3) and Single Shot Multibox Detector (SSD) methods. Eighty landmarks were targeted for labeling. After the deep-learning process, the algorithms were tested on a new test data set composed of 283 images. Accuracy was determined by measuring the point-to-point error and the success detection rate and was visualized with scattergrams. The computational time of both algorithms was also recorded. Results: The YOLOv3 algorithm outperformed SSD in accuracy for 38 of 80 landmarks; the remaining 42 landmarks showed no statistically significant difference between YOLOv3 and SSD. Error plots of YOLOv3 showed not only a smaller error range but also a more isotropic tendency. The mean computational time per image was 0.05 seconds for YOLOv3 and 2.89 seconds for SSD. YOLOv3 showed approximately 5% higher accuracy than the top benchmarks in the literature. Conclusions: Of the two deep-learning methods applied, YOLOv3 appears more promising as a fully automated cephalometric landmark identification system for use in clinical practice.
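The two accuracy metrics named here are standard in the cephalometric landmark literature. A minimal Python sketch of how they are typically computed follows; the function names and the 2 mm success threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def point_to_point_error(pred, truth):
    """Euclidean distance between predicted and ground-truth
    landmark positions; pred and truth are (N, 2) arrays in mm."""
    return np.linalg.norm(pred - truth, axis=1)

def success_detection_rate(pred, truth, threshold_mm=2.0):
    """Fraction of landmarks whose point-to-point error falls within
    a clinical tolerance (2 mm is a common choice in this literature)."""
    errors = point_to_point_error(pred, truth)
    return np.mean(errors <= threshold_mm)

# Toy example: 3 landmarks, two predicted within 2 mm of truth.
truth = np.array([[10.0, 20.0], [35.5, 42.1], [60.2, 15.8]])
pred = np.array([[10.8, 20.3], [35.0, 43.0], [62.9, 15.8]])
print(point_to_point_error(pred, truth))    # per-landmark error in mm
print(success_detection_rate(pred, truth))  # 2 of 3 within 2 mm -> ~0.667
```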

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would likely hinder the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then reconstruct the Fermi–Dirac distribution as a correction function for normalizing voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function performed comparably to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of computational time on a low-cost hardware architecture. Although global histogram equalization has the lowest computational time among the correction functions employed, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
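The Fermi–Dirac distribution being adapted is f(E) = 1 / (exp((E − μ)/(k_B T)) + 1). A minimal sketch of how such a function could serve as a voxel-intensity correction follows; the normalization step, parameter values, and function name are illustrative assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def fermi_dirac_correction(volume, mu=0.5, temperature=0.1):
    """Map voxel intensities through a Fermi-Dirac-type function
    f(x) = 1 / (1 + exp((x - mu) / T)). Intensities well below mu map
    toward 1 and those well above map toward 0; a small T sharpens the
    transition, acting as a soft filter around the chemical-potential
    analogue mu. (Parameterization here is an illustrative assumption.)"""
    x = (volume - volume.min()) / (np.ptp(volume) + 1e-12)  # scale to [0, 1]
    return 1.0 / (1.0 + np.exp((x - mu) / temperature))

# Toy 3D volume standing in for an MRI scan.
rng = np.random.default_rng(0)
vol = rng.normal(loc=100.0, scale=25.0, size=(4, 64, 64))
corrected = fermi_dirac_correction(vol)
print(corrected.min(), corrected.max())  # values squeezed into (0, 1)
```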


2021 ◽  
Author(s):  
Süleyman UZUN ◽  
Sezgin KAÇAR ◽  
Burak ARICIOĞLU

Abstract In this study, for the first time in the literature, we aim to identify different chaotic systems by classifying graphic images of their time series with deep learning methods. For this purpose, a data set is generated consisting of graphic images of the time series of the three best-known chaotic systems: the Lorenz, Chen, and Rossler systems. The time series are obtained for different parameter values, initial conditions, step sizes, and time lengths. After generating the data set, a high-accuracy classification is performed using the transfer learning method. The study employs the most widely accepted deep learning models for transfer learning: SqueezeNet, VGG-19, AlexNet, ResNet50, ResNet101, DenseNet201, ShuffleNet, and GoogLeNet. As a result, classification accuracies between 96% and 97% are obtained, depending on the problem. This study thus makes it possible to associate real-time random signals with a mathematical system.
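A minimal sketch of the data-generation step follows, producing one image of a Lorenz time series of the kind the classifiers are trained on; the classic parameter values, image size, and file name are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz system; the study varies parameters, initial
    conditions, step size, and time length to build the data set."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z), x * y - beta * z]

# One sample of the image data set: solve and plot the x(t) series.
t_span = (0.0, 40.0)
t_eval = np.arange(*t_span, 0.01)  # step size is one varied setting
sol = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)

fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)  # roughly 224x224 px
ax.plot(sol.t, sol.y[0], linewidth=0.5)
ax.axis("off")  # the classifier sees only the curve, not axes or ticks
fig.savefig("lorenz_sample.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```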


2018 ◽  
Vol 24 (4) ◽  
pp. 225-247 ◽  
Author(s):  
Xavier Warin

Abstract A new method based on nesting Monte Carlo is developed to solve high-dimensional semi-linear PDEs. Depending on the type of non-linearity, different schemes are proposed and theoretically studied: variance errors are given, and it is shown that the bias of the schemes can be controlled. The limitation of the method is that the maturity or the Lipschitz constant of the non-linearity should not be too high, in order to avoid an explosion of the computational time. Many numerical results are given in high dimension for cases where analytical solutions are available or where solutions can be computed by deep-learning methods.
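For orientation, a representative form of the semi-linear parabolic PDE class targeted by such nesting Monte Carlo schemes is sketched below; the notation is an assumption for illustration and may differ from the paper's. The Lipschitz constant of f and the maturity T are the quantities whose growth drives the computational-time explosion mentioned above.

```latex
% Representative semi-linear parabolic PDE (illustrative notation):
\[
\begin{aligned}
&\partial_t u(t,x) + \mu \cdot D_x u(t,x)
  + \tfrac{1}{2}\operatorname{Tr}\!\left(\sigma\sigma^{\top} D_x^2 u(t,x)\right)
  + f\bigl(t, x, u(t,x)\bigr) = 0, \\
&u(T,x) = g(x).
\end{aligned}
\]
% The scheme nests inner Monte Carlo estimates of u inside outer
% simulations; a larger maturity T or Lipschitz constant of f deepens
% the nesting and inflates the computational time.
```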


2021 ◽  
Vol 2 ◽  
Author(s):  
Chengjie Li ◽  
Lidong Zhu ◽  
Zhongqiang Luo ◽  
Zhen Zhang ◽  
Yilun Liu ◽  
...  

In space-based AIS (Automatic Identification System), the high orbit and wide coverage of the satellite mean that many self-organizing communities fall within its observation range, so signals inevitably conflict, which reduces the probability of ship detection. In this paper, to improve system processing power and security, and exploiting the ability of neural networks to efficiently find optimal solutions, we propose a method that combines the blind source separation problem with a BP neural network: a suitably generated data set is used to train the network, thereby automatically producing a traditional blind signal separation algorithm with a more stable separation effect. Finally, simulation results of combining blind source separation with the BP neural network show that the performance and stability of space-based AIS can be effectively improved.
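A minimal sketch of the supervised set-up described here follows: known source signals are mixed to produce training data, and a backpropagation-trained network learns the mixture-to-source mapping. The signal shapes, mixing model, and network size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generate a supervised blind-source-separation training set: known
# sources mixed by a random matrix stand in for colliding AIS signals.
rng = np.random.default_rng(42)
n_samples, n_sources = 5000, 2
t = np.linspace(0, 1, n_samples)
sources = np.column_stack([np.sin(2 * np.pi * 7 * t),
                           np.sign(np.sin(2 * np.pi * 11 * t))])
A = rng.normal(size=(n_sources, n_sources))  # unknown mixing matrix
mixtures = sources @ A.T                     # observed conflicting signals

# A BP (backpropagation-trained) network mapping mixtures -> sources.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(mixtures, sources)
recovered = net.predict(mixtures)
print("reconstruction MSE:", np.mean((recovered - sources) ** 2))
```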


2020 ◽  
Vol 10 (23) ◽  
pp. 8625
Author(s):  
Yali Song ◽  
Yinghong Wen

In the positioning process of a high-speed train, cumulative error may reduce the positioning accuracy. Assisted positioning technology based on kilometer posts can be used as an effective method to correct the cumulative error. However, the traditional method of detecting kilometer posts is time-consuming and complex, which greatly affects the correction efficiency. Therefore, in this paper, a kilometer post detection model based on deep learning is proposed. Firstly, the Deep Convolutional Generative Adversarial Network (DCGAN) algorithm is introduced to construct an effective kilometer post data set. This greatly reduces the cost of real data acquisition and provides a prerequisite for the construction of the detection model. Then, using existing optimizations as a reference and further simplifying the design of the Single Shot MultiBox Detector (SSD) model for the specific application scenario of this paper, a kilometer post detection model based on an improved SSD algorithm is established. Finally, analysis of the experimental results shows that the detection model established in this paper delivers both detection accuracy and efficiency: the accuracy of our model reached 98.92%, while the detection time was only 35.43 ms. Thus, our model realizes the rapid and accurate detection of kilometer posts and improves kilometer post-based assisted positioning by optimizing the detection method.
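A minimal sketch of a DCGAN-style generator, the component used above to synthesize additional kilometer post images, follows; the layer sizes and the 64x64 output resolution are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator: upsample a latent vector to a
    64x64 RGB image via transposed convolutions (sizes illustrative)."""
    def __init__(self, latent_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),             # 4x4
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),             # 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),             # 16x16
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),                 # 32x32
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                           # 64x64 RGB
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of synthetic kilometer-post-like images.
g = Generator()
fake = g(torch.randn(8, 100, 1, 1))
print(fake.shape)  # torch.Size([8, 3, 64, 64])
```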


2020 ◽  
Vol 10 (11) ◽  
pp. 4010 ◽  
Author(s):  
Kwang-il Kim ◽  
Keon Myung Lee

Marine resources are valuable assets to be protected from illegal, unreported, and unregulated (IUU) fishing and overfishing. IUU and overfishing detection requires identifying the fishing gears of fishing ships in operation. This paper is concerned with automatically identifying fishing gears from AIS (automatic identification system)-based trajectory data of fishing ships. It proposes a deep learning-based fishing gear-type identification method in which six fishing gear type groups are identified from AIS-based ship movement data and environmental data. The proposed method conducts preprocessing to handle different lengths of messaging intervals, missing messages, and contaminated messages in the trajectory data. To capture the complicated dynamic patterns in the trajectories of different fishing gear types, a sliding window-based data slicing method is used to generate the training data set. The proposed method uses a CNN (convolutional neural network)-based deep neural network model consisting of a feature extraction module and a prediction module. The feature extraction module contains two CNN submodules followed by a fully connected network. The prediction module is a fully connected network that suggests a putative fishing gear type from the features extracted by the feature extraction module. The proposed CNN-based model has been trained and tested with a real trajectory data set of 1380 fishing ships collected over a year. A new performance index, DPI (the total performance of the day-wise performance index), is proposed to compare the performance of gear-type identification techniques. For comparison, SVM (support vector machine)-based models have also been developed. In the experiments, the trained CNN-based model achieved a DPI of 0.963, while the SVM models achieved 0.814 on average for the 24-h window. The high DPI value indicates that the trained model is good at identifying fishing gear types.
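A minimal sketch of the sliding-window slicing step follows; the feature count, hourly resampling, and stride are illustrative assumptions, with the window length of 24 chosen to match the 24-h evaluation window reported here.

```python
import numpy as np

def slice_trajectory(track, window=24, stride=1):
    """Cut a resampled AIS track of shape (T, F), e.g. positions,
    speed, course, and environmental features per time step, into
    overlapping windows usable as CNN training samples."""
    return np.stack([track[i:i + window]
                     for i in range(0, len(track) - window + 1, stride)])

# Toy track: 72 hourly records with 5 features each.
track = np.random.rand(72, 5)
windows = slice_trajectory(track)
print(windows.shape)  # (49, 24, 5): one training sample per window
```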


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
O. Obulesu ◽  
Suresh Kallam ◽  
Gaurav Dhiman ◽  
Rizwan Patan ◽  
Ramana Kadiyala ◽  
...  

Cancer is a complicated worldwide health issue with an increasing death rate in recent years. With the swift growth of high-throughput technology and the several machine learning methods that have emerged in recent years, progress has been made in cancer diagnosis based on feature subsets, enabling efficient and precise disease diagnosis. Hence, progressive machine learning techniques that can reliably differentiate lung cancer patients from healthy persons are of great interest. This paper proposes a novel Wilcoxon Signed-Rank Gain preprocessing combined with generative deep learning, called the Wilcoxon Signed Generative Deep Learning (WS-GDL) method, for lung cancer diagnosis. Firstly, significance testing and information gain eliminate redundant and irrelevant attributes and extract informative and significant attributes. Then, using a generator function, the generative deep learning method learns the deep features. Finally, a minimax game (minimizing error while maximizing accuracy) is used to diagnose the disease. Numerical experiments on the Thoracic Surgery Data Set are used to test the WS-GDL method's diagnostic performance. The WS-GDL approach can extract relevant and significant attributes and adaptively diagnose the disease by selecting optimal learning-model parameters. Quantitative experimental results show that the WS-GDL method achieves better diagnostic performance and higher computing efficiency in computational time, computational complexity, and false-positive rate than state-of-the-art approaches.
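A minimal sketch of the two preprocessing steps named here, significance testing and information gain, follows; the pairing of class-wise samples by truncation, the thresholds, and the function names are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.feature_selection import mutual_info_classif

def ws_gain_select(X, y, p_threshold=0.05, top_k=10):
    """Toy version of the preprocessing stage: keep attributes whose
    class-wise value distributions differ significantly under a
    Wilcoxon signed-rank test, then rank the survivors by information
    gain (here approximated by mutual information)."""
    keep = []
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        n = min(len(a), len(b))        # signed-rank test needs paired samples
        _, p = wilcoxon(a[:n], b[:n])  # truncation is a toy simplification
        if p < p_threshold:
            keep.append(j)
    keep = np.asarray(keep, dtype=int)
    if keep.size == 0:
        return keep                    # nothing significant survived
    gains = mutual_info_classif(X[:, keep], y, random_state=0)
    return keep[np.argsort(gains)[::-1][:top_k]]

# Toy data: feature 3 drives the label, the rest are noise.
rng = np.random.default_rng(0)
X = rng.random((200, 30))
y = (X[:, 3] > 0.5).astype(int)
print(ws_gain_select(X, y))
```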


2021 ◽  
Author(s):  
Antoine Bouziat ◽  
Sylvain Desroziers ◽  
Abdoulaye Koroko ◽  
Antoine Lechevallier ◽  
Mathieu Feraille ◽  
...  

Automation and robotics raise growing interest in the mining industry. If not already a reality, it is no longer science fiction to imagine autonomous robots routinely participating in the exploration and extraction of mineral raw materials in the near future. Among the various scientific and technical issues to be addressed towards this objective, this study focuses on automating the real-time characterisation of rock images captured in the field, either to discriminate rock types and mineral species or to detect small elements such as mineral grains or metallic nuggets. To do so, we investigate the potential of methods from the Computer Vision community, a subfield of Artificial Intelligence dedicated to image processing. In particular, we aim to assess the potential of Deep Learning approaches and convolutional neural networks (CNNs) for the analysis of field sample pictures, highlighting key challenges before industrial use in operational contexts.

In a first initiative, we appraise Deep Learning methods to classify photographs of macroscopic rock samples into 12 lithological families. Using reference CNN architectures and a collection of 2700 images, we achieve a prediction accuracy above 90% for new pictures of good photographic quality. We then seek to improve the robustness of the method for on-the-fly field photographs. To do so, we train an additional CNN with a detection algorithm to automatically separate the rock sample from the background. We also introduce a more sophisticated classification method combining a set of several CNNs with a decision tree. The CNNs are specifically trained to recognise petrological features such as textures, structures, or mineral species, while the decision tree mimics the naturalist methodology for lithological identification.

In a second initiative, we evaluate Deep Learning techniques to spot and delimit specific elements in finer-scale images. We use a data set of carbonate thin sections with various species of microfossils. The data come from a sedimentology study, but analogies can be drawn with igneous geology use cases. We train four state-of-the-art Deep Learning methods for object detection with a limited data set of 15 annotated images. The results on 130 other thin-section images are then qualitatively assessed by expert geologists, and precisions and inference times are quantitatively measured. The four models show good capabilities in detecting and categorising the microfossils; however, differences in accuracy and performance are underlined, leading to recommendations for comparable projects in a mining context.

Altogether, this study illustrates the power of Computer Vision and Deep Learning approaches to automate rock image analysis. However, to make the most of these technologies in mining activities, stimulating research opportunities lie in adapting the algorithms to the geological use cases, embedding as much geological knowledge as possible in the statistical models, and mitigating the number of training data to be manually interpreted beforehand.
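A minimal sketch of the two-stage classifier described in the first initiative follows: specialised CNNs each emit a probability vector for one petrological trait, and a decision tree maps the concatenated traits to a lithological family. The CNN outputs are stubbed with random vectors, and all names and dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Stubbed outputs of trait-specific CNNs (real ones would come from
# trained networks; class counts here are illustrative).
rng = np.random.default_rng(1)
n_samples = 300
texture_probs = rng.dirichlet(np.ones(4), n_samples)    # 4 texture classes
structure_probs = rng.dirichlet(np.ones(3), n_samples)  # 3 structure classes
mineral_probs = rng.dirichlet(np.ones(6), n_samples)    # 6 mineral species

# The decision tree plays the naturalist: it combines the per-trait
# probability vectors into one of the 12 lithological families.
features = np.hstack([texture_probs, structure_probs, mineral_probs])
labels = rng.integers(0, 12, n_samples)  # placeholder expert labels

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(features, labels)
print(tree.predict(features[:5]))  # predicted family indices
```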


2019 ◽  
Vol 11 (7) ◽  
pp. 786 ◽  
Author(s):  
Yang-Lang Chang ◽  
Amare Anagaw ◽  
Lena Chang ◽  
Yi Wang ◽  
Chih-Yu Hsiao ◽  
...  

Synthetic aperture radar (SAR) imagery is a promising data source for monitoring maritime activities, and its application to oil and ship detection has been the focus of many previous studies. Many object detection methods, ranging from traditional to deep learning approaches, have been proposed; however, the majority are computationally intensive and have accuracy problems. The huge volume of remote sensing data also poses a challenge for real-time object detection. To mitigate this problem, a high-performance computing (HPC) method has been proposed to accelerate SAR imagery analysis using GPU-based computing. In this paper, we propose an enhanced GPU-based deep learning method to detect ships in SAR images. The You Only Look Once version 2 (YOLOv2) deep learning framework is used to model the architecture and train the model. YOLOv2 is a state-of-the-art real-time object detection system that outperforms the Faster Region-Based Convolutional Network (Faster R-CNN) and Single Shot MultiBox Detector (SSD) methods. Additionally, to reduce computational time while retaining competitive detection accuracy, we develop a new architecture with fewer layers, called YOLOv2-reduced. In the experiments, we use two datasets, the SAR ship detection dataset (SSDD) and the Diversified SAR Ship Detection Dataset (DSSDD), for training and testing. YOLOv2 test results showed an increase in ship detection accuracy as well as a noticeable reduction in computational time compared with Faster R-CNN. The proposed YOLOv2 architecture achieves accuracies of 90.05% and 89.13% on the SSDD and DSSDD datasets, respectively. The proposed YOLOv2-reduced architecture has similarly competent detection performance to YOLOv2, but with less computational time on an NVIDIA TITAN X GPU. The experimental results show that deep learning can make a big leap forward in improving the performance of SAR image ship detection.
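A minimal sketch of the kind of per-image GPU timing comparison reported here follows; the stand-in backbone, warm-up count, and 416x416 input size (YOLOv2's standard) are illustrative assumptions, not the paper's exact protocol.

```python
import time
import torch

def mean_inference_time(model, input_shape=(1, 3, 416, 416), runs=100):
    """Rough per-image inference timing, in the spirit of comparing
    YOLOv2 against YOLOv2-reduced; model is any torch.nn.Module."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(10):           # warm-up iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # GPU kernels run asynchronously
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs

# Example with a stand-in backbone (not the paper's network):
backbone = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3, 2, 1),
                               torch.nn.ReLU(),
                               torch.nn.Conv2d(16, 32, 3, 2, 1))
print(f"{mean_inference_time(backbone) * 1000:.2f} ms per image")
```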


2015 ◽  
Vol 13 ◽  
pp. 263-266
Author(s):  
Mihai Ghiba

This paper makes a comparative analysis of the data recorded and used in inland transport activity, using information recorded by "electronic ship reporting" and the "Automatic Identification System" in the standard River Information Services - Vessel Tracking and Tracing, combined with data additionally recorded in the logbook and in the seafarer's service book. RIS recordings largely provide the database necessary for keeping the logbook, and differences can be evaluated through a matrix. The consolidated data set can be used to determine the working time of seafarers, to establish compliance with promotion requirements based on fully recorded training, to accumulate data necessary for shaping risk functions, and to clarify the conditions under which shipping accidents occurred, with results in the recovery of losses by marine insurance and in thorough and accurate risk analysis.
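A minimal sketch of the evaluation-matrix idea follows: RIS records and logbook entries are aligned by voyage and their differences tabulated. The column names, values, and units are illustrative assumptions, not data from the paper.

```python
import pandas as pd

# Toy RIS (vessel tracking) records and logbook entries for three voyages.
ris = pd.DataFrame({"voyage": [1, 2, 3],
                    "departure": pd.to_datetime(["2015-03-01 06:00",
                                                 "2015-03-02 07:10",
                                                 "2015-03-03 05:45"]),
                    "hours_underway": [9.5, 8.0, 11.2]})
logbook = pd.DataFrame({"voyage": [1, 2, 3],
                        "departure": pd.to_datetime(["2015-03-01 06:05",
                                                     "2015-03-02 07:10",
                                                     "2015-03-03 06:30"]),
                        "hours_underway": [9.5, 8.5, 10.0]})

# Difference matrix: one row per voyage, one column per discrepancy type.
merged = ris.merge(logbook, on="voyage", suffixes=("_ris", "_log"))
matrix = pd.DataFrame({
    "voyage": merged["voyage"],
    "departure_diff_min": (merged["departure_log"]
                           - merged["departure_ris"]).dt.total_seconds() / 60,
    "hours_diff": merged["hours_underway_log"] - merged["hours_underway_ris"],
})
print(matrix)
```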

