incorrect prediction
Recently Published Documents

TOTAL DOCUMENTS: 27 (FIVE YEARS: 11)
H-INDEX: 8 (FIVE YEARS: 2)

Materials ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 6311
Author(s):  
Woldeamanuel Minwuye Mesfin ◽  
Soojin Cho ◽  
Jeongmin Lee ◽  
Hyeong-Ki Kim ◽  
Taehoon Kim

The objective of this study is to evaluate the feasibility of deep-learning-based segmentation of the area covered by fresh and young concrete in images of construction sites. RGB images of construction sites under various real-world conditions were used as input to several types of convolutional neural network (CNN)-based segmentation models, which were trained on dedicated training image sets. Various ranges of threshold values were applied for the classification, and the resulting accuracy and recall were quantified. The trained models segmented the concrete area well overall, although they could not distinguish concrete of different ages as professionals can. By increasing the threshold value of the softmax classifier, the number of non-concrete areas incorrectly predicted as concrete dropped to almost zero, although some concrete areas were then segmented as non-concrete.
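As a rough sketch of the thresholding step described above (not the authors' code; the concrete class index and the 0.9 threshold are assumptions for illustration), the idea in Python might look like this:

```python
import numpy as np

def segment_concrete(softmax_probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Binary concrete mask from per-pixel softmax output.

    softmax_probs: (H, W, C) array of class probabilities; channel 0 is
    assumed to be the 'concrete' class (an assumption for illustration).
    Raising `threshold` suppresses false positives (non-concrete pixels
    predicted as concrete) at the cost of missing some true concrete pixels.
    """
    concrete_prob = softmax_probs[..., 0]
    return concrete_prob >= threshold

# Toy usage: a 4x4 image with 2 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(segment_concrete(probs, threshold=0.9))
```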


2021 ◽  
Author(s):  
Takuma Watanabe ◽  
Hiroyoshi Yamada

In this research, we discuss the possibility of incorrect prediction of the double-bounce scattering (DBS) power in model-based decomposition (MBD) algorithms applied to polarimetric synthetic aperture radar (SAR) images of vegetated terrain. In most MBD schemes, the estimation of the DBS component rests on the assumption that the co-polarized phase difference (CPD) of the DBS is similar to that of backscattering from a pair of orthogonal planar conducting surfaces. For dielectric surfaces such as soil or vegetation trunks, however, this assumption is valid only within a certain range of radar incidence angles, dictated by the Brewster angles of the dielectric surfaces. If the incidence angle is outside this range, the DBS contribution is incorrectly estimated as surface scattering. Moreover, because the Brewster angle is a function of surface permittivity, the valid angular range depends on the moisture content of the surfaces; the correctness of the MBD results therefore also depends on surface moisture. To demonstrate this problem, we provide a simple numerical model of vegetated terrain and validate the theoretical results with a series of controlled experiments carried out in an anechoic chamber using a simplified vegetation model.
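The Brewster-angle dependence at the core of this argument can be sketched numerically. The formula below is the standard Brewster angle for a flat, non-magnetic dielectric in air; the permittivity values are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

def brewster_angle_deg(eps_r: float) -> float:
    """Brewster angle (degrees) for a non-magnetic dielectric in air.

    Beyond the Brewster angle the sign of the vertically polarized
    reflection coefficient flips, which changes the co-polarized phase
    difference of the double-bounce return.
    """
    return float(np.degrees(np.arctan(np.sqrt(eps_r))))

# Wet soil has a higher permittivity than dry soil, so its Brewster
# angle is larger; these permittivities are illustrative only.
for label, eps_r in [("dry soil (eps_r ~ 4)", 4.0), ("wet soil (eps_r ~ 20)", 20.0)]:
    print(f"{label}: Brewster angle ~ {brewster_angle_deg(eps_r):.1f} deg")
```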


Author(s):  
Solieman Hanadi ◽  
Trong Tuyen Nguyen

Introduction. Ventricular late potentials (VLPs) are predictors of cardiac disorders such as sudden death syndrome, myocardial infarction and ventricular tachyarrhythmias. VLP assessment therefore allows the severity and possibly dangerous consequences of such disorders to be predicted.
Aim. To determine the errors associated with VLP assessment from high-resolution 12-lead ECG recordings.
Materials and methods. VLPs were determined from the modulus of the cardiac electrical vector using signals from orthogonal leads. The conversion error was assessed using synchronous 12-channel and orthogonal-lead ECG recordings, digital filtering (to reduce noise and interference) and a method for identifying the characteristic points of the QRS complex and VLPs.
Results. The conversion of 12-lead ECG signals into orthogonal signals introduces errors in the assessment of both the modulus of the cardiac electrical vector and all VLP indicators. The Kors transformation was shown to provide the minimum errors when assessing the cardiac electrical vector modulus in the QRS region, with the errors in the VRMS assessment not exceeding 0.084%. The estimation of the QRSd and LAS errors should consider the nature of VLP variations and the zone of uncertainty in their assessment. The ambiguity in locating the boundary between disturbed and pathology-free ventricular depolarization indicates that a large number of factors affect the accuracy of the analysis. Errors in the assessment of these factors may lead to under- or overestimation of dangerous heart rhythm disturbances and incorrect prediction of the patient's state.
Conclusion. The results obtained can be used to reduce the errors associated with the assessment of VLP indicators, to improve the diagnostic accuracy for dangerous heart rhythm disturbances and to predict disease exacerbation due to structural and morphological disorders of the myocardium.
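For illustration, a minimal sketch of the Kors transformation feeding the cardiac-vector modulus. The coefficients are the commonly cited Kors regression matrix (Kors et al., 1990), quoted from the literature rather than from this paper, and should be verified before any real use:

```python
import numpy as np

# Kors regression matrix: maps the 8 independent leads (I, II, V1..V6)
# of a 12-lead ECG to orthogonal X, Y, Z leads. Values as commonly
# cited in the literature; verify before clinical use.
KORS = np.array([
    #   I      II     V1     V2     V3     V4     V5     V6
    [ 0.38, -0.07, -0.13,  0.05, -0.01,  0.14,  0.06,  0.54],  # X
    [-0.07,  0.93,  0.06, -0.02, -0.05,  0.06, -0.17,  0.13],  # Y
    [ 0.11, -0.23, -0.43, -0.06, -0.14, -0.20, -0.11,  0.31],  # Z
])

def vector_magnitude(leads_8xN: np.ndarray) -> np.ndarray:
    """Modulus of the cardiac electrical vector, sample by sample."""
    xyz = KORS @ leads_8xN               # (3, N) orthogonal signals
    return np.sqrt((xyz ** 2).sum(axis=0))

# Toy usage: 8 leads x 1000 samples of synthetic data.
ecg = np.random.default_rng(1).normal(size=(8, 1000))
print(vector_magnitude(ecg).shape)       # (1000,)
```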


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Israa Mahmood ◽  
Hasanen Abdullah

Purpose. Traditional classification algorithms always produce some incorrect predictions, and as the misclassification rate increases, the usefulness of the learning model decreases. This paper presents the development of a wisdom framework that reduces the error rate to less than 3% without human intervention.
Design/methodology/approach. The proposed WisdomModel consists of four stages: build a classifier, isolate the misclassified instances, construct an automated knowledge base for the misclassified instances and rectify incorrect predictions. The approach identifies misclassified instances by comparing them against the knowledge base: if an instance is within a certain threshold of a rule in the knowledge base, it is considered misclassified.
Findings. The authors evaluated the WisdomModel on various data sets using measures such as accuracy, recall, precision, F-measure, the receiver operating characteristic (ROC) curve, the area under the curve (AUC) and the error rate, to prove its ability to generalize without human involvement. The proposed model reduces the number of misclassified instances by at least 70% and increases the accuracy of the model by at least 7%.
Originality/value. This research focuses on defining wisdom in practical applications. Despite advances in information systems, there is still no framework or algorithm for extracting wisdom from data. This research builds a general wisdom framework that can be used in any domain to reach wisdom.
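A minimal sketch of the knowledge-base check described above, assuming rules are represented as feature-vector centroids and proximity is Euclidean (both assumptions; the paper's actual rule representation may differ):

```python
import numpy as np

def flag_misclassified(instance: np.ndarray,
                       knowledge_base: np.ndarray,
                       threshold: float) -> bool:
    """Flag an instance as likely misclassified if it lies within
    `threshold` (Euclidean distance) of any stored rule centroid.

    `knowledge_base` is assumed to be an (R, D) array of feature vectors
    summarizing previously misclassified instances.
    """
    distances = np.linalg.norm(knowledge_base - instance, axis=1)
    return bool((distances <= threshold).any())

kb = np.array([[0.0, 0.0], [5.0, 5.0]])  # two illustrative rules
print(flag_misclassified(np.array([0.2, -0.1]), kb, threshold=0.5))  # True
print(flag_misclassified(np.array([2.5, 2.5]), kb, threshold=0.5))   # False
```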


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Rohit Bharti ◽  
Aditya Khamparia ◽  
Mohammad Shabaz ◽  
Gaurav Dhiman ◽  
Sagar Pande ◽  
...  

The correct prediction of heart disease can prevent life-threatening situations, while an incorrect prediction can prove fatal. In this paper, different machine learning algorithms and deep learning are applied to the UCI Machine Learning Heart Disease dataset, and their results are compared and analyzed. The dataset consists of 14 main attributes used for the analysis. Promising results are achieved and validated using accuracy and the confusion matrix. The dataset contains some irrelevant features, which are handled using Isolation Forest, and the data are normalized to obtain better results. How this study can be combined with multimedia technology such as mobile devices is also discussed. Using the deep learning approach, an accuracy of 94.2% was obtained.
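A hedged sketch of the preprocessing pipeline the abstract describes, using scikit-learn; the placeholder data, the choice of Logistic Regression as a baseline model and all hyperparameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

# Placeholder for the 13 predictors + target of the UCI data (303 rows).
rng = np.random.default_rng(2)
X = rng.normal(size=(303, 13))
y = rng.integers(0, 2, size=303)

# 1) Drop outlying rows flagged by Isolation Forest (-1 = outlier).
mask = IsolationForest(random_state=0).fit_predict(X) == 1
X, y = X[mask], y[mask]

# 2) Normalize features.
X = StandardScaler().fit_transform(X)

# 3) Fit a baseline classifier; validate with accuracy + confusion matrix.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(accuracy_score(y_te, pred))
print(confusion_matrix(y_te, pred))
```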


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254030
Author(s):  
Haoming Wang ◽  
Xiangdong Liu

Machine learning models have increasingly been used in bankruptcy prediction. However, the observed historical data of bankrupt companies are often affected by class imbalance, which causes incorrect predictions and results in substantial economic losses. Many studies have addressed the insolvency imbalance problem, but little attention has been paid to the effect of undersampling techniques. Therefore, a framework is used to quickly spot-check algorithms and identify which combination of undersampling method and classification model performs best. The results show that Naive Bayes (NB) after Edited Nearest Neighbours (ENN) has the best performance, with an F2-measure of 0.423. In addition, by varying the undersampling rate of the cluster-centroid-based method, we find that the performance of Linear Discriminant Analysis (LDA) and Naive Bayes (NB) depends on the undersampling rate. Neither declines uniformly, and LDA performs best at an undersampling rate of 30%. This study accordingly provides another perspective and a guide for future design.
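The best-performing combination reported above (ENN undersampling followed by Naive Bayes, scored with the F2-measure) can be sketched with imbalanced-learn and scikit-learn; the synthetic data and split are assumptions for illustration:

```python
from imblearn.under_sampling import EditedNearestNeighbours
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import fbeta_score

# Synthetic imbalanced data standing in for bankruptcy records (~5% positives).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Undersample the training majority class with ENN, then fit Naive Bayes.
X_res, y_res = EditedNearestNeighbours().fit_resample(X_tr, y_tr)
model = GaussianNB().fit(X_res, y_res)

# beta=2 weights recall (catching bankruptcies) over precision.
print(fbeta_score(y_te, model.predict(X_te), beta=2))
```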


2021 ◽  
Vol 15 ◽  
Author(s):  
Yawen Lan ◽  
Xiaobin Wang ◽  
Yuchen Wang

Memory is an intricate process involving various faculties of the brain and is a central component of human cognition. However, the exact mechanism that gives rise to memory in the brain remains elusive, and the performance of existing memory models is not satisfactory. To overcome these problems, this paper puts forward a brain-inspired spatio-temporal sequential memory model based on spiking neural networks (SNNs). Inspired by the structure of the neocortex, the proposed model is composed of many mini-columns of biological spiking neurons. Each mini-column represents one memory item, and the firing of the different spiking neurons within a mini-column depends on the context of the previous inputs. Spike-Timing-Dependent Plasticity (STDP) is used to update the connections between excitatory neurons and to form associations between memory items. In addition, inhibitory neurons are employed to prevent incorrect predictions, which improves the retrieval accuracy. Experimental results demonstrate that the proposed model can effectively store large amounts of data and accurately retrieve them when sufficient context is provided. This work not only provides a new memory model but also suggests how memory could be formed with excitatory/inhibitory neurons, spike-based encoding and a mini-column structure.
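A minimal sketch of a pairwise STDP rule of the kind used to associate memory items; the parameter values and the exact update form are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def stdp_update(w: float, t_pre: float, t_post: float,
                a_plus: float = 0.01, a_minus: float = 0.012,
                tau: float = 20.0) -> float:
    """Pairwise STDP weight update (illustrative parameter values).

    If the presynaptic spike precedes the postsynaptic spike, the
    connection is potentiated; otherwise it is depressed, with an
    exponential dependence on the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)   # pre before post: strengthen
    else:
        w -= a_minus * np.exp(dt / tau)   # post before pre: weaken
    return float(np.clip(w, 0.0, 1.0))

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # potentiation
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # depression
```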


Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 820
Author(s):  
Sanghyun Park ◽  
Jaeyung Jun ◽  
Changhyun Kim ◽  
Gyeong Il Min ◽  
Hun Jae Lee ◽  
...  

Fetched instructions may have data dependencies on in-flight instructions in a processor's pipeline, and these dependencies prevent the processor from executing the incoming instructions in order to guarantee the program's correctness. Register and memory dependencies are detected in the decode and memory stages, respectively. In a small embedded processor that supports as many ISAs as possible to reduce code size, decoding the instructions to identify register usage for the dependence check generally incurs a long delay and sometimes forms a critical path in the implementation. To reduce this delay, this paper proposes two methods. The first assumes the most commonly used source-register operand bit-fields without fully decoding the instructions; however, this assumption causes additional stalls on incorrect predictions and thus degrades performance. To solve this problem, the second method adopts a table that stores the dependence history and later uses this information to predict dependencies more precisely. We applied our methods to the commercial EISC embedded processor with the Samsung 65 nm process, reducing the critical-path delay, increasing the maximum operating frequency by 12.5% and achieving an average 11.4% speed-up in the execution time of the EEMBC applications. We also reduced static power consumption, dynamic power consumption and EDP by 7.2%, 8.5% and 13.6%, respectively, at an implementation area overhead of 2.5%.
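A software sketch of the table-based second method, assuming a small table of 2-bit saturating counters indexed by instruction address (the hardware details of the EISC implementation are not given in the abstract, so this is an assumed organization):

```python
class DependencePredictor:
    """Table of 2-bit saturating counters recording whether an instruction
    at a given address recently had a register dependence on an in-flight
    instruction. Counter >= 2 predicts a dependence (and hence a stall)."""

    def __init__(self, entries: int = 256):
        self.entries = entries
        self.table = [1] * entries        # start weakly "no dependence"

    def _index(self, pc: int) -> int:
        return (pc >> 2) % self.entries   # assume 4-byte-aligned instructions

    def predict(self, pc: int) -> bool:
        return self.table[self._index(pc)] >= 2

    def update(self, pc: int, had_dependence: bool) -> None:
        i = self._index(pc)
        if had_dependence:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

p = DependencePredictor()
p.update(0x1000, had_dependence=True)
p.update(0x1000, had_dependence=True)
print(p.predict(0x1000))  # True after repeated dependences
```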


Author(s):  
Huili Chen ◽  
Cheng Fu ◽  
Jishen Zhao ◽  
Farinaz Koushanfar

Deep Neural Networks (DNNs) are vulnerable to Neural Trojan (NT) attacks, in which the adversary injects malicious behaviors during DNN training. This type of 'backdoor' attack is activated when the input is stamped with a trigger pattern specified by the attacker, resulting in an incorrect prediction by the model. Because DNNs are widely applied in critical fields, it is indispensable to inspect whether a pre-trained DNN has been trojaned before deploying it. Our goal in this paper is to address the security concern of NT attacks on unknown DNNs and to ensure safe model deployment. We propose DeepInspect, the first black-box Trojan detection solution requiring minimal prior knowledge of the model. DeepInspect learns the probability distribution of potential triggers from the queried model using a conditional generative model, thus retrieving the footprint of a backdoor insertion. In addition to NT detection, we show that DeepInspect's trigger generator enables effective Trojan mitigation through model patching. We corroborate the effectiveness, efficiency and scalability of DeepInspect against state-of-the-art NT attacks across various benchmarks. Extensive experiments show that DeepInspect offers superior detection performance and lower runtime overhead than prior work.
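DeepInspect itself trains a conditional generator, which is beyond a short sketch; the fragment below only illustrates the underlying black-box test, namely measuring how reliably a candidate trigger flips predictions to one target class. All names and the evaluation logic here are assumptions for illustration, not the paper's method:

```python
import numpy as np

def trigger_success_rate(model_predict, images: np.ndarray,
                         trigger: np.ndarray, target_class: int) -> float:
    """Fraction of trigger-stamped inputs classified as `target_class`.

    `model_predict` is any black-box callable mapping a batch of images
    to predicted class labels; `trigger` is an additive patch/mask with
    the same shape as one image. A rate near 1.0 for a small trigger
    would be strong evidence of a backdoor.
    """
    stamped = np.clip(images + trigger, 0.0, 1.0)
    return float(np.mean(model_predict(stamped) == target_class))
```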


2019 ◽  
Vol 20 (2) ◽  
pp. 368-383 ◽  
Author(s):  
Michal Kuběnka ◽  
Renáta Myšková

The aim of this article is to demonstrate the key role that the structure of the research sample used for determining accuracy plays in the reported accuracy of bankruptcy models. The creators of these models usually report accuracies in the range of 60 to 90%. The authors of this article argue that these values are inaccurate and misleading: the real Type I error should be determined on a sample from which businesses with obvious features of financial default have been eliminated. The research tested more than 1,200 thriving businesses and 270 businesses heading for bankruptcy. It determined the real current accuracy of three selected bankruptcy models on a standard sample of Czech businesses to be 67.77%, 62.27% and 74.36%. This confirmed hypothesis no. 1, which states that the actual accuracy of a bankruptcy model is lower than the original accuracy reported by its creators. Accuracies of 58.70%, 61.59% and 65.94% were measured on a sample from which businesses with obvious features of financial distress had been eliminated. Owing to this modification of the test sample, the ranking of the models by accuracy changed, which confirmed hypothesis no. 2. The Index of Karas and Reznakova reached the highest overall accuracy, 80.31%, even when incorrect bankruptcy predictions were included.
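The accuracy bookkeeping behind such figures can be made explicit with a small sketch; the counts in the usage example are illustrative, not the paper's results. Type I error: a business that later went bankrupt is classified as healthy; Type II error: a thriving business is classified as bankrupt:

```python
def model_accuracy(n_bankrupt: int, bankrupt_flagged: int,
                   n_healthy: int, healthy_passed: int):
    """Type I error, Type II error and overall accuracy of a bankruptcy
    model, from correct-classification counts in each group."""
    type1_error = 1 - bankrupt_flagged / n_bankrupt
    type2_error = 1 - healthy_passed / n_healthy
    overall = (bankrupt_flagged + healthy_passed) / (n_bankrupt + n_healthy)
    return type1_error, type2_error, overall

# e.g. 270 future bankruptcies and 1,200 thriving businesses, as in the
# sample sizes above; the correct-classification counts are made up.
print(model_accuracy(270, 180, 1200, 900))
```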

