wrong prediction
Recently Published Documents


TOTAL DOCUMENTS

19
(FIVE YEARS 10)

H-INDEX

3
(FIVE YEARS 1)

Author(s):  
Hsin-Yao Wang ◽  
Yu-Hsin Liu ◽  
Yi-Ju Tseng ◽  
Chia-Ru Chung ◽  
Ting-Wei Lin ◽  
...  

Combining Matrix-Assisted Laser Desorption/Ionization Time-of-Flight (MALDI-TOF) spectra data and artificial intelligence (AI) has been introduced for rapid prediction of antibiotic susceptibility testing (AST) results for S. aureus. Based on the AI predictive probability, cases with probabilities between the low and high cut-offs are defined as the “grey zone”. We aimed to investigate the underlying reasons for unconfident (grey zone) or wrong AST predictions. A total of 479 S. aureus isolates were collected at a tertiary medical center and analyzed by MALDI-TOF, and both the AST predictions and the standard AST results were obtained. The predictions were categorized into a correct prediction group, a wrong prediction group, and a grey zone group. We analyzed the association between the predictive results and the demographic data, spectral data, and strain types. For MRSA, a larger cefoxitin zone size was found in the wrong prediction group. MLST of the MRSA isolates in the grey zone group revealed that uncommon strain types composed 80%. Among the MSSA isolates in the grey zone group, the majority (60%) was composed of over 10 different strain types. In predicting AST based on MALDI-TOF AI, uncommon strains and high strain diversity contribute to suboptimal predictive performance.
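The grouping described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the cut-off values are assumed for illustration only.

```python
# Sketch: categorizing AI predictions of antibiotic susceptibility into
# correct / wrong / grey-zone groups using low and high probability cut-offs.
# The threshold values below are hypothetical, not the study's actual cut-offs.

LOW_CUTOFF, HIGH_CUTOFF = 0.4, 0.6  # assumed thresholds

def categorize(prob_resistant, true_label):
    """Return the evaluation group for one isolate.

    prob_resistant: model's predicted probability that the isolate is resistant
    true_label: 'R' (resistant) or 'S' (susceptible) from the standard AST
    """
    if LOW_CUTOFF <= prob_resistant <= HIGH_CUTOFF:
        return "grey zone"  # unconfident prediction
    predicted = "R" if prob_resistant > HIGH_CUTOFF else "S"
    return "correct" if predicted == true_label else "wrong"
```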


2021 ◽  
Vol 5 (4) ◽  
pp. 544
Author(s):  
Antonius Angga Kurniawan ◽  
Metty Mustikasari

This research aims to implement deep learning techniques to distinguish fact from fake news in the Indonesian language. The methods used are Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). The stages of the research consisted of collecting data, labeling data, preprocessing data, word embedding, splitting data, building the CNN and LSTM models, evaluating, testing new input data, and comparing the evaluations of the established CNN and LSTM models. The data were collected from a valid provider of fact and fake news, namely TurnbackHoax.id. A total of 1786 news articles were used in this study: 802 fact and 984 fake. The results indicate that the CNN and LSTM methods were successfully applied to distinguish fact from fake news in the Indonesian language. The CNN model has a test accuracy, precision, and recall of 0.88, while the LSTM model has a test accuracy and precision of 0.84 and a recall of 0.83. In testing the new input data, all of the predictions obtained by the CNN were correct, while the predictions obtained by the LSTM included one wrong prediction. Based on the evaluation results and the testing of new input data, the model produced by the CNN method is better than the model produced by the LSTM method.
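The evaluation step reported above (accuracy, precision, recall) can be sketched as below. This is a generic illustration of the metrics, not the authors' pipeline; labels and the choice of 'fake' as the positive class are assumptions.

```python
# Sketch: computing accuracy, precision, and recall from predicted vs. true
# labels, as used to compare the CNN and LSTM models. 'fake' is assumed to
# be the positive class.

def evaluate(y_true, y_pred, positive="fake"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```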


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Jui-En Lo ◽  
Eugene Yu-Chuan Kang ◽  
Yun-Nung Chen ◽  
Yi-Ting Hsieh ◽  
Nan-Kai Wang ◽  
...  

This study aimed to evaluate a deep transfer learning-based model for identifying diabetic retinopathy (DR) that was trained using a dataset with high variability and predominant type 2 diabetes (T2D), and to compare model performance with that in patients with type 1 diabetes (T1D). The Kaggle dataset, which is publicly available, was divided into training and testing Kaggle datasets. For the comparison dataset, we collected retinal fundus images of T1D patients at Chang Gung Memorial Hospital in Taiwan from 2013 to 2020, and the images were divided into training and testing T1D datasets. The model was developed using 4 different convolutional neural networks (Inception-V3, DenseNet-121, VGG1, and Xception). The model performance in predicting DR was evaluated using testing images from each dataset, and the area under the curve (AUC), sensitivity, and specificity were calculated. The model trained using the Kaggle dataset had an average (range) AUC of 0.74 (0.03) and 0.87 (0.01) in the testing Kaggle and T1D datasets, respectively. The model trained using the T1D dataset had an AUC of 0.88 (0.03), which decreased to 0.57 (0.02) in the testing Kaggle dataset. Heatmaps showed that the model focused on retinal hemorrhage, vessels, and exudation to predict DR. In wrongly predicted images, artifacts and low image quality affected model performance. The model developed with the high-variability, T2D-predominant dataset could be applied to T1D patients. Dataset homogeneity could affect the performance, trainability, and generalization of the model.
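The AUC values reported above can be understood through the rank-statistic (Mann-Whitney) form of the metric: the AUC is the probability that a randomly chosen DR-positive image receives a higher model score than a randomly chosen DR-negative image. A minimal sketch, independent of the study's implementation:

```python
# Sketch: AUC as a rank statistic. Ties between a positive and a negative
# score count as half a win. O(n*m) pairwise form, fine for illustration.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```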


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1957
Author(s):  
Yu Shi ◽  
Cien Fan ◽  
Lian Zou ◽  
Caixia Sun ◽  
Yifeng Liu

Deep neural networks are vulnerable to adversarial examples: images synthesized by adding imperceptible perturbations to the original image that nonetheless fool the classifier into producing wrong prediction outputs. This paper proposes an image restoration approach that provides a strong defense mechanism against adversarial attacks. We show that the unsupervised image restoration framework deep image prior can effectively eliminate the influence of adversarial perturbations. The proposed method uses multiple deep image prior networks, called tandem deep image priors, to recover the original image from the adversarial example. Tandem deep image priors contain two deep image prior networks: the first network captures the main information of the image, and the second network recovers the original image based on the prior information provided by the first network. The proposed method reduces the number of iterations originally required by the deep image prior network and does not require adjusting the classifier or pre-training. It can also be combined with other defensive methods. Our experiments show that the proposed method surprisingly achieves higher classification accuracy on ImageNet against a wide variety of adversarial attacks than previous state-of-the-art defense methods.
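The attack being defended against can be illustrated on a toy model. The sketch below is not the paper's method; it shows an FGSM-style perturbation (a standard attack family) flipping the output of a hand-written linear classifier, with all weights and inputs invented for illustration.

```python
# Toy illustration of an adversarial perturbation: a small signed step moves
# the input just across a linear classifier's decision boundary, flipping
# the prediction with only a tiny change per input dimension.

def predict(w, b, x):
    """Linear classifier: +1 if w.x + b > 0 else -1."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

def fgsm(w, b, x, eps):
    """Perturb x by eps per dimension in the direction that lowers the
    current prediction's score (x_adv = x - eps * y * sign(w))."""
    y = predict(w, b, x)
    sign = lambda v: 1 if v > 0 else -1 if v < 0 else 0
    return [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]
```

With `w = [1.0, -1.0]`, `b = 0.0`, the clean input `[0.6, 0.5]` is classified +1, but an `eps = 0.2` perturbation flips it to -1.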


Author(s):  
Lars Wein ◽  
Tim Kluge ◽  
Joerg R. Seume ◽  
Rainer Hain ◽  
Thomas Fuchs ◽  
...  

Abstract Accurate prediction of labyrinth seal flows is important for the design and optimisation of turbomachinery. However, the prediction of such flows with RANS turbulence models is still deficient. The identification of modelling deficits and the development of improved turbulence models require detailed experimental data. Consequently, a new test rig for straight labyrinth seals was built at the Institute for Turbomachinery and Fluid Dynamics, which allows for non-intrusive measurements of the three-dimensional velocity field in the cavities. Two linear eddy viscosity models and one algebraic Reynolds stress turbulence model have been tested and validated against global parameters, local pressure measurements, and non-intrusive measurements of the velocity field. While some models accurately predict the discharge coefficient, large local errors occurred in the prediction of the wall static pressure in the seal. Although improved predictions were possible by using model extensions, significant errors in the prediction of vortex systems remained in the solution. These were identified with the help of PIV results. All turbulence models struggled to accurately predict the size of separations and the swirl imposed by viscous effects at the rotor surface. Additionally, the expansion of the leakage jet in the outlet cavity is not modelled correctly by the numerical models. This is caused by a wrong prediction of the turbulent kinetic energy and, presumably, its rate of dissipation.
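The discharge coefficient mentioned above is conventionally the measured leakage mass flow normalized by an ideal isentropic mass flow through the seal clearance. A sketch of that convention, with example values; the exact reference flow used by the authors is not stated in the abstract, so this is an assumption based on common practice.

```python
# Sketch: discharge coefficient Cd = m_dot_measured / m_dot_ideal, with the
# ideal value from the isentropic nozzle-flow relation. kappa and R default
# to values for air; all numbers in the usage example are illustrative.
from math import sqrt

def ideal_mass_flow(p0, T0, p_out, area, kappa=1.4, R=287.0):
    """Ideal isentropic mass flow [kg/s] through 'area' [m^2] for inlet total
    pressure p0 [Pa], total temperature T0 [K], and outlet pressure p_out."""
    pi = p_out / p0
    return (area * p0 / sqrt(R * T0)
            * sqrt(2.0 * kappa / (kappa - 1.0)
                   * (pi ** (2.0 / kappa) - pi ** ((kappa + 1.0) / kappa))))

def discharge_coefficient(m_dot_measured, p0, T0, p_out, area):
    return m_dot_measured / ideal_mass_flow(p0, T0, p_out, area)
```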


Author(s):  
Tilman Raphael Schröder ◽  
Sebastian Schuster ◽  
Dieter Brillert

Abstract Side chambers of centrifugal turbomachinery resemble rotor–stator cavities. The flow in these cavities develops complex patterns which substantially influence the axial thrust on the shaft and the frictional torque on the rotor. Axial thrust caused by the flow pattern in side chambers accumulates in multistage single shaft radial compressors where it is often balanced by a single axial bearing. Miscalculation of axial thrust may lead to axial loads significantly higher than predicted or even undefined load situations which may cause early bearing failure. Likewise, a wrong prediction of friction losses may lead to lower efficiency than originally intended. Current models for axial thrust and friction torque are limited to circumferential Reynolds numbers of Re ≤ 10⁷. New models are needed for modern high-pressure centrifugal compressors which reach circumferential Reynolds numbers up to Re = 10⁹. The rotor–stator cavity flow model by Kurokawa and Sakuma [16] for merged boundary layers is analysed. It is based on the assumptions of axisymmetric and time invariant flow. Functional forms of the mean tangential and radial velocity and the surface stress vectors on the rotor and stator are assumed. Reynolds averaging is applied to consider turbulence effects in the model. The modelling assumptions are compared with detailed RANS CFD analyses at Reynolds numbers of 4 · 10⁶ ≤ Re ≤ 2 · 10⁸ to investigate their accuracy. Based on these CFD results, a way towards a high Reynolds number model is presented, providing prediction of disc torque, radial pressure distribution and axial thrust in rotor–stator cavities.
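The circumferential Reynolds number delimiting the validity ranges above is commonly defined as Re = Ω·r²/ν, with angular velocity Ω, outer disc radius r, and kinematic viscosity ν. A minimal sketch with illustrative values (not taken from the paper):

```python
# Sketch: circumferential Reynolds number for a rotor-stator cavity,
# Re = Omega * r**2 / nu. Example values below are illustrative only.

def circumferential_reynolds(omega, radius, nu):
    """omega [rad/s], radius [m], nu [m^2/s] -> dimensionless Re."""
    return omega * radius ** 2 / nu
```

For air (ν ≈ 1.5e-5 m²/s) at Ω = 1000 rad/s and r = 0.3 m, this gives Re = 6 · 10⁶, inside the CFD range studied.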


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Shahidul Islam Khan ◽  
Abu Sayed Md Latiful Hoque

Abstract In data analytics, missing data is a factor that degrades performance. Incorrect imputation of missing values can lead to wrong predictions. In this era of big data, when a massive volume of data is generated every second and the utilization of these data is a major concern to stakeholders, efficiently handling missing values becomes even more important. In this paper, we propose a new technique for missing data imputation, which is a hybrid approach of single and multiple imputation techniques. We propose an extension of the popular Multivariate Imputation by Chained Equations (MICE) algorithm in two variations to impute categorical and numeric data. We have also implemented twelve existing algorithms to impute binary, ordinal, and numeric missing values. We collected sixty-five thousand real health records from different hospitals and diagnostic centers in Bangladesh, maintaining the privacy of the data. We also collected three public datasets from the UCI Machine Learning Repository, ETH Zurich, and Kaggle. We compared the performance of our proposed algorithms with the existing algorithms using these datasets. Experimental results show that our proposed algorithm achieves a 20% higher F-measure for binary data imputation and 11% less error for numeric data imputation than its competitors with similar execution time.
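The chained-equation idea behind MICE can be sketched for purely numeric data as follows: initialize missing cells with column means, then repeatedly regress each incomplete column on the others and refill its missing cells. This is a simplified illustration of the base idea, not the authors' extended algorithm.

```python
# Simplified MICE-style sketch for numeric data: mean initialization followed
# by iterated per-column linear regression refills (least squares).
import numpy as np

def mice_numeric(X, n_iter=10):
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])  # mean init
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not missing[:, j].any():
                continue
            obs = ~missing[:, j]
            others = np.delete(X, j, axis=1)
            A = np.column_stack([others, np.ones(len(X))])  # add intercept
            coef, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[missing[:, j], j] = A[missing[:, j]] @ coef   # refill column j
    return X
```

On a column that is an exact linear function of another (y = 2x below), the refilled cell recovers the true value.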


The stock market consists of various buyers and sellers, and stock market values are dynamic: they change from day to day. A stock is represented as shares, and the owner of a share may be an individual or a group of people. In the current economic conditions, stock market value prediction is a critical task because the data are dynamic. Stock market prediction means finding the future value of a stock on a financial exchange, and the prediction output is expected to be accurate, efficient, and robust. Traditionally, stock values are predicted using stock-related news, but this does not provide good results, and a wrong prediction of a stock's value can lead to heavy losses. Machine learning plays a very important role in various domains and can also be used to predict stock market values from collected data. This paper describes stock market value prediction using the machine learning SVM (Support Vector Machine) technique. The proposed concept is implemented in the Python programming language and produces better prediction results than other machine learning techniques.
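The SVM technique named above can be sketched in plain Python. The abstract does not specify features or library, so the following is a generic linear SVM trained with the Pegasos sub-gradient method on invented toy data; in the stock setting the features might be lagged returns and the label +1/-1 for an up/down move.

```python
# Sketch: linear SVM via the Pegasos sub-gradient method on hinge loss.
# Labels are +1/-1; 'lam' is the regularization strength.
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for _ in range(len(X)):
            t += 1
            i = rng.randrange(len(X))
            eta = 1.0 / (lam * t)                 # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:                         # hinge-loss violation
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```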


Author(s):  
Wenkai Dong ◽  
Zhaoxiang Zhang ◽  
Tieniu Tan

Deep learning based methods have achieved remarkable progress in action recognition. Existing works mainly focus on designing novel deep architectures for learning video representations for action recognition. Most methods treat sampled frames equally and average all the frame-level predictions at the testing stage. However, within a video, discriminative actions may occur sparsely in a few frames, while most other frames are irrelevant to the ground truth and may even lead to a wrong prediction. As a result, we believe that a strategy for selecting relevant frames is a further important key to enhancing existing deep learning based action recognition. In this paper, we propose an attention-aware sampling method for action recognition, which aims to discard the irrelevant and misleading frames and preserve the most discriminative frames. We formulate the process of mining key frames from videos as a Markov decision process and train the attention agent through deep reinforcement learning without extra labels. The agent takes features and predictions from the baseline model as input and generates importance scores for all frames. Moreover, our approach is extensible and can be applied to different existing deep learning based action recognition models. We achieve very competitive action recognition performance on two widely used action recognition datasets.
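The testing-stage idea above can be sketched as follows: instead of averaging all frame-level predictions, average only over the frames the attention agent scores highest. The scores and predictions here are illustrative inputs; in the paper, the scores are produced by an agent trained with reinforcement learning.

```python
# Sketch: attention-aware averaging. Keep the k highest-scoring frames and
# average their per-class predictions, discarding low-scoring frames.

def attention_aware_average(frame_preds, frame_scores, k):
    """frame_preds: list of per-frame class-probability lists.
    frame_scores: one importance score per frame. Returns averaged probs."""
    top = sorted(range(len(frame_preds)),
                 key=lambda i: frame_scores[i], reverse=True)[:k]
    n_classes = len(frame_preds[0])
    return [sum(frame_preds[i][c] for i in top) / k for c in range(n_classes)]
```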


Ekonomia ◽  
2019 ◽  
Vol 25 (1) ◽  
pp. 73-80
Author(s):  
Igor Wysocki ◽  
Dawid Megger

Austrian welfare economics: A critical approach

It seemed that since Rothbard's 2008 [1956] exquisite Toward a Reconstruction of Utility and Welfare Economics, one can make a case for the free market based on some modified concept of efficiency. Rothbard famously argued that being equipped with the notions of Pareto-superior moves and demonstrated preference suffices for the above purpose. Our agenda in the present paper is purely negative. First, we face the challenge (in our opinion, inadequately addressed in the Austrian literature so far) of sharply defining Pareto-superior moves; to wit, how to evaluate whether a Pareto-superior move occurs, or, more specifically, what is the standard of comparison that would allow us to determine whether a given action constitutes a Pareto-superior move. Thus, we sieve out any approaches to social welfare that are either trivial (and therefore uninteresting) or irreconcilable with fundamental Austrian premises (e.g., ordinal value scales and therefore the non-aggregation of utility). As a result, we seemingly end up with what might constitute a specifically Austrian view on welfare, which unsurprisingly coincides with the actual positions taken by prominent contemporary Austrians themselves (for instance, see Gordon, 1993; Herbener, 1997; Block, 1995). Yet the main thrust of our paper is to argue that this very position cannot withstand criticism, for it either makes an intuitively wrong prediction (as we demonstrate in our thought experiment) or it vitiates the argument for the free market from the concept of Pareto-efficiency.

