precision rate
Recently Published Documents


TOTAL DOCUMENTS: 98 (FIVE YEARS: 62)
H-INDEX: 6 (FIVE YEARS: 3)

Author(s):  
Sai Wang ◽  
Qi He ◽  
Ping Zhang ◽  
Xin Chen ◽  
Siyang Zuo

In this paper, we compared the performance of several neural networks in the classification of early gastric cancer (EGC) images and proposed a method of converting the output value of the network into a heat value to locate the lesion. The algorithm was improved using transfer learning and fine-tuning principles. The test set accuracy reached 0.72, sensitivity 0.67, specificity 0.77, and precision 0.78. The experimental results show the potential to meet clinical demands for automatic detection of gastric lesions.
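The abstract does not describe how the network output is converted into a heat value, but one common way to obtain such a localization map from a CNN classifier is class activation mapping. A minimal NumPy sketch under that assumption (the shapes and the nearest-neighbour upsampling are illustrative, not taken from the paper):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx, image_size):
    """Project a CNN's class score back onto the image as a heat map.

    feature_maps : (C, H, W) activations of the last convolutional layer
    fc_weights   : (num_classes, C) weights of the final dense layer
    class_idx    : class whose evidence we want to localize (e.g. EGC)
    image_size   : (height, width) of the input endoscopic image
    """
    C, H, W = feature_maps.shape
    # Weighted sum of feature maps gives a coarse (H, W) heat map.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)                 # keep positive evidence only
    cam = cam / (cam.max() + 1e-8)             # normalize to [0, 1]
    # Upsample to the input resolution with a simple nearest-neighbour repeat.
    ry, rx = image_size[0] // H, image_size[1] // W
    return np.kron(cam, np.ones((ry, rx)))
```

Thresholding the resulting map then gives a rough lesion location on the endoscopic image.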


2021 ◽  
Vol 15 (1) ◽  
pp. 235-248
Author(s):  
Mayank R. Kapadia ◽  
Chirag N. Paunwala

Introduction: Content-Based Image Retrieval (CBIR) is a technology for retrieving images from various media types; one of its applications is Content-Based Medical Image Retrieval (CBMIR). An image retrieval system retrieves the most similar images from historical cases, and such systems can support the physician's decision when diagnosing a disease. Extracting useful features from the query image in order to link similar types of images is the major challenge in the CBIR domain. A Convolutional Neural Network (CNN) can overcome the drawbacks of traditional algorithms, which depend on low-level feature extraction techniques.
Objective: The objective of the study is to develop a CNN model with a minimum number of convolution layers while achieving the maximum possible accuracy for the CBMIR system. A minimum number of convolution layers reduces the number of mathematical operations and the training time; it also reduces the number of trainable parameters (weights and biases) and hence the memory required to store the model. This work focuses on developing an optimized CNN model for the CBMIR system; such a system can support the physician's decision to diagnose a disease from the images and retrieve the relevant cases to help the doctor decide on the precise treatment.
Methods: A deep learning-based model is proposed in this paper. Experiments were carried out with different numbers of convolution layers and various optimizers to obtain the maximum accuracy with a minimum number of convolution layers. A ten-layer CNN model was developed from scratch and used to derive features from the training and testing images and to classify the test image. Once the image class is identified, the most relevant images are determined by the Euclidean distance between the query features and the database features of the identified class, and the closest images of that class are displayed. The general dataset CIFAR10 (60,000 images in 10 classes) and the medical dataset IRMA (2,508 images in 9 classes) were used to analyze the proposed method. The proposed model was also applied to a medical x-ray image dataset of chest disease and compared with other pre-trained models.
Results: Accuracy and average precision rate are the measures used to compare the proposed model with different machine learning techniques. The accuracy of the proposed model on the CIFAR10 dataset is 93.9%, which is better than state-of-the-art methods. After this success on the general dataset, the model was also tested on the medical datasets: for the x-ray images of the IRMA dataset the accuracy is 86.53%, better than the results of different pre-trained models, and for another x-ray dataset used to identify chest-related disease the average precision rate is 97.25%. The proposed model also addresses the major challenge of the semantic gap: the semantic gap is 2.75% for the chest disease dataset and 13.47% for the IRMA dataset. Moreover, only ten convolution layers are used in the proposed model, far fewer than in the other pre-trained models.
Conclusion: The proposed technique shows remarkable improvement in performance metrics over CNN-based state-of-the-art methods, and a significant improvement over different pre-trained models for the two medical x-ray image datasets.
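The retrieval step in the Methods above (classify the query with the CNN, then rank the database images of the predicted class by Euclidean distance in feature space) can be sketched as follows. This is a minimal illustration, not the authors' code; the CNN feature extractor is assumed to exist elsewhere and only the class-restricted ranking is shown:

```python
import numpy as np

def retrieve_similar(query_feat, query_class, db_feats, db_classes, db_ids, top_k=5):
    """Rank database images of the predicted class by Euclidean distance.

    query_feat  : (D,) feature vector of the query image from the CNN
    query_class : class label predicted for the query by the same CNN
    db_feats    : (N, D) features of the database images
    db_classes  : (N,) class labels of the database images
    db_ids      : list of N image identifiers to return
    """
    mask = db_classes == query_class               # restrict to the predicted class
    candidates = db_feats[mask]
    dists = np.linalg.norm(candidates - query_feat, axis=1)
    order = np.argsort(dists)[:top_k]              # smallest distance = most similar
    ids = np.asarray(db_ids)[mask][order]
    return list(zip(ids, dists[order]))
```

Restricting the search to the predicted class is what keeps the retrieval cheap while relying on the classifier's accuracy.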



2021 ◽  
Vol 26 (6) ◽  
pp. 533-539
Author(s):  
Krittachai Boonsivanon ◽  
Worawat Sa-Ngiamvibool

A new, improved keypoint description technique for image-based recognition under rotation, viewpoint and non-uniform illumination changes is presented. The technique is relatively simple and is based on two procedures: keypoint detection and keypoint description. The keypoint detection procedure is based on the SIFT approach together with Top-Hat filtering, morphological operations and average filtering, which allows targets to be segmented from unevenly illuminated particle images. The keypoint description procedure is implemented using the Hu moment invariants, whose central moments are unchanged under image translation. The sensitivity, accuracy and precision rate were evaluated and compared on data sets drawn from a color image database with uniform and non-uniform illumination, viewpoint and rotation changes. The results show that the approach is superior to other SIFT variants under uniform illumination, non-uniform illumination and other situations, achieving a sensitivity of 100%, an accuracy of 83.33% and a precision rate of 80.00%. Comparisons with other SIFT approaches are also included.
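The two building blocks named in the abstract, Top-Hat filtering to suppress non-uniform illumination before detection and Hu moment invariants to describe each keypoint, can be sketched with OpenCV as below. The kernel and patch sizes are illustrative assumptions, not the paper's settings:

```python
import cv2
import numpy as np

def tophat_correct(gray, kernel_size=15):
    """White Top-Hat filtering removes slowly varying (non-uniform) background."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)

def hu_descriptor(gray, keypoint, patch_size=16):
    """Describe the patch around a keypoint with the 7 Hu moment invariants.

    Central moments are invariant to translation; Hu's combinations of them
    add invariance to rotation and scale, which suits viewpoint and rotation
    changes.
    """
    x, y = int(keypoint[0]), int(keypoint[1])
    half = patch_size // 2
    patch = gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
    hu = cv2.HuMoments(cv2.moments(patch)).flatten()
    # Log-scale the moments so their magnitudes are comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```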


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Keyu Jiang ◽  
Hanyi Zhang ◽  
Weiting Zhang ◽  
Liming Fang ◽  
Chunpeng Ge ◽  
...  

Trigger-action programming (TAP) is an intelligent tool that makes it easy for users to create rules for IoT devices and applications. Unfortunately, as TAP becomes more popular and rules multiply, rule chains formed from multiple rules gradually appear and bring more and more threats. Previous work has paid more attention to constructing security models, while little attention has been given to accurately identifying rule chains among multiple rules; inaccurate identification leads to the omission of rule chains that carry threats. This paper proposes TapChain, a rule chain recognition model based on multiple features, which can identify rule chains more accurately without source code. We design a correction algorithm for TapChain to obtain correct NLP analysis results, and we extract 12 features from 5 aspects of the rules to make the recognition of rule chains more accurate. In our evaluation, compared with previous work, the accuracy rate of TapChain is increased by 3.1%, the recall rate is increased by 1.4%, and the precision rate reaches 88.2%. More accurate identification of rule chains helps to better implement security policies and to better balance security and availability. Moreover, among the rule chains TapChain recognizes, we find a new kind of rule chain with threats, and we give the relevant case studies in the evaluation.
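The abstract does not spell out how rule chains are identified, but the basic notion of a chain, one rule's action implicitly firing another rule's trigger, can be sketched as below. The rule fields and the simple matching criterion are illustrative assumptions and stand in for TapChain's 12-feature analysis:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    trigger: tuple   # (device/channel, event), e.g. ("light", "turned_on")
    action: tuple    # (device/channel, effect), e.g. ("light", "turned_on")

def find_rule_chains(rules):
    """Return pairs (a, b) where rule a's action can fire rule b's trigger.

    Two rules are linked here when the action and trigger refer to the same
    device and the produced effect matches the awaited event -- a deliberately
    simple proxy for multi-feature matching.
    """
    chains = []
    for a in rules:
        for b in rules:
            if a is b:
                continue
            if a.action == b.trigger:
                chains.append((a.name, b.name))
    return chains

rules = [
    Rule("R1", ("motion", "detected"), ("light", "turned_on")),
    Rule("R2", ("light", "turned_on"), ("camera", "recording_started")),
]
print(find_rule_chains(rules))   # [('R1', 'R2')]
```

Chains like R1 -> R2 are exactly the implicit interactions that can violate a security policy even when each rule looks harmless on its own.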


2021 ◽  
Vol 133 (1030) ◽  
pp. 124501
Author(s):  
Yujie Yang ◽  
Bin Jiang

In this paper, we pioneer a new machine-learning method to search for H II regions in spectra from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). H II regions are emission nebulae created when young and massive stars ionize nearby gas clouds with high-energy ultraviolet radiation. Having more H II region samples will help us understand the formation and evolution of stars. Machine-learning methods are often applied to search for special celestial bodies such as H II regions. LAMOST has conducted spectral surveys and provided a wealth of valuable spectra for the research of special and rare celestial bodies. To overcome the problem of sparse positive samples and diversification of negative samples, a novel method called the self-calibrated convolution network is introduced and implemented for spectral processing. A deep network classifier with a structure called a self-calibrated block provides a high precision rate, and the recall rate is improved by adding the strategy of positive-unlabeled bagging. Experimental results show that this method can achieve better performance than other current methods. Eighty-nine spectra are identified as Galactic H II regions after cross-matching with the WISE Catalog of Galactic H II Regions, confirming the effectiveness of the method proposed in this paper.
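The positive-unlabeled bagging strategy mentioned above is independent of the self-calibrated network itself and can be illustrated generically. The sketch below uses a plain scikit-learn decision tree as a stand-in classifier; the round count, tree depth and sampling sizes are illustrative choices, not the paper's settings:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pu_bagging_scores(X_pos, X_unlabeled, n_rounds=50, seed=0):
    """Score unlabeled spectra by out-of-bag averaging (PU bagging).

    Each round treats a random subset of the unlabeled set as provisional
    negatives, fits a classifier on positives vs. that subset, and scores
    only the unlabeled items left out of the round.  Averaging these
    out-of-bag scores reduces the bias of labelling all unlabeled data as
    negative, which is how the recall of rare positives is improved.
    Assumes there are more unlabeled samples than positives.
    """
    rng = np.random.default_rng(seed)
    n_u, n_p = len(X_unlabeled), len(X_pos)
    scores, counts = np.zeros(n_u), np.zeros(n_u)
    y = np.r_[np.ones(n_p), np.zeros(n_p)]
    for _ in range(n_rounds):
        idx = rng.choice(n_u, size=n_p, replace=False)     # provisional negatives
        clf = DecisionTreeClassifier(max_depth=5).fit(
            np.vstack([X_pos, X_unlabeled[idx]]), y)
        oob = np.setdiff1d(np.arange(n_u), idx)             # out-of-bag items
        scores[oob] += clf.predict_proba(X_unlabeled[oob])[:, 1]
        counts[oob] += 1
    return scores / np.maximum(counts, 1)                   # high = likely H II region
```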


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
S. Agnes Shifani ◽  
M. S. Godwin Premi

Measuring strain with contact techniques has drawbacks such as lower accuracy and a large computation time for finding each subpixel location. A faster, non-contact Digital Image Correlation (DIC) mechanism is therefore used alongside the traditional techniques to measure strain. The Newton-Raphson (NR) technique is an accepted mechanism for accurately tracking intensity relocation; the main issue with the DIC mechanism is its computational cost. In this paper, an interpolation technique is used to achieve a high precision rate and faster image correlation, thereby reducing the computation time required to find the matched pixel and efficiently handling the repeated correlation process. Hence, the proposed mechanism provides better efficiency along with a reduced number of iterations required to find the match; the number of iterations can be reduced using the Sum of Square of Subset Intensity Gradients (SSSIG) method. The proposed scheme is evaluated on different images using various parameters. The results indicate that the proposed mechanism takes only a few milliseconds to find the best matching location, whereas prevailing techniques require 16 seconds for the same operation with the same step size, demonstrating the effectiveness of the proposed scheme.
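The core idea stated above, interpolating around the correlation peak so that the matched location is found at subpixel precision with far fewer Newton-Raphson iterations, can be illustrated with a simple parabolic peak fit over an integer-shift correlation surface. This is a generic sketch, not the authors' algorithm:

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer peak of a correlation surface to subpixel precision.

    corr : 2-D array of correlation scores over integer displacements.
    Returns (row, col) of the peak with a parabolic (quadratic) correction,
    a cheap alternative to iterating Newton-Raphson to convergence.
    """
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def parabola_offset(cm1, c0, cp1):
        # Vertex of the parabola through the three neighbouring samples.
        denom = cm1 - 2.0 * c0 + cp1
        return 0.0 if denom == 0 else 0.5 * (cm1 - cp1) / denom

    dy = dx = 0.0
    if 0 < iy < corr.shape[0] - 1:
        dy = parabola_offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    if 0 < ix < corr.shape[1] - 1:
        dx = parabola_offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return iy + dy, ix + dx
```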


2021 ◽  
Vol 13 (20) ◽  
pp. 4027
Author(s):  
Sungwoo Byun ◽  
In-Kyoung Shin ◽  
Jucheol Moon ◽  
Jiyoung Kang ◽  
Sang-Il Choi

In this paper, we propose a deep neural network-based method for automatically estimating the speed of vehicles on roads from videos recorded by an unmanned aerial vehicle (UAV). The proposed method includes the following: (1) detecting and tracking vehicles by analyzing the videos, (2) calculating the image scale using the known distances between lanes on the road, and (3) estimating the speeds of the vehicles. Our method can automatically measure vehicle speeds in both directions of a road simultaneously, using only the videos recorded by the UAV and no additional information. In our experiments, we evaluate the performance of the proposed method on visual data from four different locations. The proposed method shows a 97.6% recall rate and a 94.7% precision rate in detecting vehicles, and a root mean squared error of 5.27 km/h in estimating vehicle speeds.
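Steps (2) and (3) above reduce to scaling pixel displacement by a known on-road distance (the lane spacing) and by the video frame rate. A minimal sketch, with all numbers illustrative:

```python
def estimate_speed_kmh(track_px, lane_gap_px, lane_gap_m, fps):
    """Estimate a vehicle's speed from its pixel track in UAV video.

    track_px    : list of (x, y) centroids of the vehicle, one per frame
    lane_gap_px : measured distance between two lane markings in pixels
    lane_gap_m  : the same distance in metres (known road geometry)
    fps         : frame rate of the video
    """
    metres_per_px = lane_gap_m / lane_gap_px                 # image scale
    (x0, y0), (x1, y1) = track_px[0], track_px[-1]
    dist_m = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * metres_per_px
    elapsed_s = (len(track_px) - 1) / fps
    return dist_m / elapsed_s * 3.6                          # m/s -> km/h

# e.g. a car covering 240 px in 1 s at 30 fps, with 3.5 m lane spacing = 70 px
track = [(8 * i, 0) for i in range(31)]
print(estimate_speed_kmh(track, 70, 3.5, 30))                # ~43.2 km/h
```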


Sensor Review ◽  
2021 ◽  
Vol 41 (4) ◽  
pp. 341-349
Author(s):  
Wahyu Rahmaniar ◽  
W.J. Wang ◽  
Chi-Wei Ethan Chiu ◽  
Noorkholis Luthfil Hakim

Purpose: The purpose of this paper is to propose a new framework that improves a bi-directional people counting technique using an RGB-D camera, obtaining accurate results with a computation time fast enough for real-time applications.
Design/methodology/approach: First, image calibration is proposed to obtain the ratio and shift values between the depth and the RGB image. In the depth image, a person is detected as foreground by removing the background. The region of interest (ROI) of each detected person is then registered based on their location and mapped to the RGB image. Registered people are tracked in the RGB images based on channel and spatial reliability. Finally, a person is counted when they cross the line of interest (LOI) and their displacement distance is more than 2 m.
Findings: The proposed people counting method achieves high accuracy with a computation time fast enough for use on PCs and embedded systems: the precision rate is 99% with a computation time of 35 frames per second (fps) on a PC and 18 fps on the NVIDIA Jetson TX2.
Practical implications: With 99% precision at 35 fps on a PC and 18 fps on the Jetson TX2, the system is suitable for real-time use on embedded hardware.
Originality/value: The proposed method can count people entering and exiting a room at the same time. Whereas previous systems were limited to one or two people per frame, this system can count many people in a frame. In addition, it handles common difficulties in people counting, such as people occluded by others, people suddenly changing direction, and people standing still.
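The counting rule in the approach above (a tracked person is counted only when the trajectory crosses the line of interest and the displacement exceeds 2 m) can be sketched as follows; coordinates are assumed to have already been mapped to metres on the floor plane:

```python
def count_crossings(tracks, loi_y=0.0, min_displacement=2.0):
    """Count people entering/exiting across a horizontal line of interest.

    tracks : dict of person_id -> list of (x, y) floor positions in metres.
    A person counts as entering when the track crosses loi_y upward, exiting
    when downward, and only if the start-to-end displacement exceeds 2 m,
    which filters out people loitering near the line.
    """
    entering = exiting = 0
    for path in tracks.values():
        (x0, y0), (x1, y1) = path[0], path[-1]
        displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if displacement < min_displacement:
            continue
        if y0 < loi_y <= y1:
            entering += 1
        elif y1 < loi_y <= y0:
            exiting += 1
    return entering, exiting

tracks = {1: [(0.0, -1.5), (0.1, 0.2), (0.2, 1.0)],   # crosses upward, >2 m
          2: [(1.0, -0.3), (1.1, 0.3)]}               # crosses, but <2 m: ignored
print(count_crossings(tracks))                        # (1, 0)
```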


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yishan He ◽  
Jiajin Huang ◽  
Gaowei Wu ◽  
Jian Yang

The digital reconstruction of a neuron is the most direct and effective way to investigate its morphology. Many automatic neuron tracing methods have been proposed, but without a manual check it is difficult to know whether a reconstruction, or which substructure within it, is accurate. For reconstructions of a neuron generated by multiple automatic tracing methods with different principles or models, the common substructures are highly reliable; we call them individual motifs. In this work, we propose a Vaa3D-based method called Lamotif to explore individual motifs in automatic reconstructions of a neuron. Lamotif uses the local alignment algorithm in BlastNeuron to extract local alignment pairs between a specified objective reconstruction and multiple reference reconstructions, and combines these pairs to generate individual motifs on the objective reconstruction. Lamotif is evaluated on reconstructions of 163 neurons from multiple species, generated by four state-of-the-art tracing methods. Experimental results show that individual motifs lie almost entirely on the corresponding gold standard reconstructions and have a much higher precision rate than the objective reconstructions themselves. Furthermore, an objective reconstruction is usually quite accurate if its individual motifs have a high recall rate. Individual motifs capture geometric substructures common to multiple reconstructions, and can be used to select accurate substructures from a reconstruction or accurate reconstructions from an automatic reconstruction dataset of different neurons.
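Lamotif itself relies on BlastNeuron's local alignment inside Vaa3D, but the underlying idea, keeping only the parts of the objective reconstruction that every reference reconstruction agrees on, can be illustrated with a much cruder point-matching proxy. The point-set representation and distance threshold below are assumptions for illustration only:

```python
import numpy as np
from scipy.spatial import cKDTree

def individual_motif_mask(objective_pts, reference_pt_sets, tol=2.0):
    """Mark nodes of the objective reconstruction supported by all references.

    objective_pts     : (N, 3) sampled node coordinates of one reconstruction
    reference_pt_sets : list of (M_i, 3) arrays from other tracing methods
    tol               : distance (same units, e.g. voxels) within which two
                        nodes are considered the same structure
    A node kept by every reference belongs to a common substructure and is
    therefore highly reliable, mirroring the notion of an individual motif.
    """
    mask = np.ones(len(objective_pts), dtype=bool)
    for ref in reference_pt_sets:
        dists, _ = cKDTree(ref).query(objective_pts)   # nearest reference node
        mask &= dists <= tol
    return mask
```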

