random guessing
Recently Published Documents


TOTAL DOCUMENTS

39
(FIVE YEARS 11)

H-INDEX

8
(FIVE YEARS 1)

2022 ◽  
Vol 74 (1) ◽  
Author(s):  
Fuyuki Hirose ◽  
Kenji Maeda ◽  
Osamu Kamigaichi

Abstract: The correlation between Earth’s tides and background seismicity has been suggested to become stronger before great earthquakes and weaker after. However, previous studies have only retrospectively analyzed this correlation after individual large earthquakes; it thus remains vague (i) whether such variations might be expected preceding future large earthquakes, and (ii) the strength of the tidal correlation during interseismic periods. Therefore, we retrospectively investigated whether significant temporal variations of the tidal correlation precede large interplate earthquakes along the Tonga–Kermadec trench, where Mw 7-class earthquakes frequently occurred from 1977 to 31 December 2020. We evaluated a forecast model based on the temporal variations of the tidal correlation via Molchan’s error diagram, using the tidal correlation value itself as well as its rate of change as threshold values. For Mw ≥ 7.0 earthquakes, this model was as ineffective as random guessing. For Mw ≥ 6.5, 6.0, or 5.5 earthquakes, the forecast model performed better than random guessing in some cases, but even the best forecast only had a probability gain of about 1.7. Therefore, the practicality of this model alone is poor, at least in this region. These results suggest that changes of the tidal correlation are not reliable indicators of large earthquakes along the Tonga–Kermadec trench.
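The evaluation above can be illustrated with a toy computation. The sketch below, with invented alarm windows and event times (not the study's data), shows how a point on a Molchan-style error diagram and the probability gain over random guessing are obtained for an alarm-based forecast.

```python
# Hypothetical sketch of scoring an alarm-based forecast with a Molchan-style
# error diagram; the alarm and event series below are illustrative only.

def molchan_point(alarm, events):
    """Return (tau, nu): fraction of time under alarm and the miss rate."""
    tau = sum(alarm) / len(alarm)
    hits = sum(1 for a, e in zip(alarm, events) if a and e)
    nu = 1 - hits / sum(events)  # fraction of events missed
    return tau, nu

def probability_gain(alarm, events):
    """Hit rate per unit alarm time; 1.0 corresponds to random guessing."""
    tau, nu = molchan_point(alarm, events)
    return (1 - nu) / tau

# Toy series: 10 time windows, alarms raised when a (hypothetical) tidal
# correlation threshold is exceeded; two windows contain large earthquakes.
alarm  = [0, 1, 1, 0, 0, 0, 1, 0, 0, 0]
events = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
print(probability_gain(alarm, events))  # ~3.33: better than random guessing
```

A gain near 1.0, as the study finds for Mw ≥ 7.0 events, means the alarms perform no better than guessing.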


2021 ◽  
Author(s):  
David Nathan Lang ◽  
Alex Wang ◽  
Nathan Dalal ◽  
Andreas Paepcke ◽  
Mitchell Stevens

Abstract: Committing to a major is a fateful step in an undergraduate education, yet the relationship between courses taken early in an academic career and ultimate major selection remains little studied at scale. Using transcript data capturing the academic careers of 26,892 undergraduates enrolled at a private university between 2000 and 2020, we describe enrollment histories using natural-language methods and vector embeddings to forecast terminal major on the basis of course sequences beginning at college entry. We find (I) a student's very first enrolled course predicts major thirty times better than random guessing and more than a third better than majority-class voting, (II) modeling strategies substantially influence forecasting accuracy, and (III) course portfolios vary substantially within majors, raising novel questions about what majors mean or signify in relation to undergraduate course histories.
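Finding (I) compares a first-course predictor against a majority-class baseline. The sketch below illustrates that comparison on invented data; the course names, majors, and counts are hypothetical, and the paper's actual models use vector embeddings of full course sequences rather than this simple lookup.

```python
# Illustrative sketch (hypothetical data): predicting terminal major from the
# first enrolled course versus a majority-class baseline.
from collections import Counter, defaultdict

train = [("CS101", "CS"), ("CS101", "CS"), ("BIO10", "Biology"),
         ("ECON1", "Economics"), ("CS101", "CS"), ("BIO10", "Biology")]
test = [("CS101", "CS"), ("BIO10", "Biology"), ("ECON1", "Economics")]

# Majority-class baseline: always predict the most common major.
majority = Counter(m for _, m in train).most_common(1)[0][0]

# First-course predictor: most common major among students who started
# with the same course, falling back to the majority class.
by_course = defaultdict(Counter)
for course, major in train:
    by_course[course][major] += 1

def predict(course):
    return by_course[course].most_common(1)[0][0] if course in by_course else majority

acc_major = sum(majority == m for _, m in test) / len(test)
acc_first = sum(predict(c) == m for c, m in test) / len(test)
print(acc_major, acc_first)  # first-course lookup beats the baseline here
```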


2021 ◽  
Author(s):  
Saeede Sadat Asadi Kakhki

The purpose of this study is to detect stock switching points from historical stock data and to analyze the corresponding financial news in order to predict upcoming switching points. Various change point detection methods have been investigated in the literature, such as online Bayesian change point detection. Prediction of stock switching points from financial news has been attempted with several types of text mining techniques. In this study, online Bayesian change point detection is applied to historical stock data to detect switching points. News items relevant to the detected change points are retrieved, and Latent Dirichlet Allocation is used to learn the hidden topic structure of the news data. Unseen news items are then projected onto the trained topic representation, and their similarity to the relevant past news is used to predict future switching points. Results show that stock switching points can be detected from historical stock data with better performance than random guessing, and that switching points can be predicted from only a fraction of the financial news with good results in terms of common performance metrics. According to this research, traders can take advantage of financial news to enhance prediction of future stock switching points.
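The similarity step can be sketched in a few lines. The topic mixtures below are invented, not taken from a trained LDA model; in the study they would be the posterior topic distributions of news articles near detected change points and of an unseen article.

```python
# Minimal sketch (hypothetical numbers): scoring an unseen news item by its
# cosine similarity to topic mixtures of news seen near past change points.
import math

def cosine(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q)))

# Topic mixtures (made up) of articles that accompanied past change points.
change_point_news = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]
unseen = [0.65, 0.25, 0.10]

# A high maximum similarity flags the unseen article as resembling the kind
# of news that preceded earlier switching points.
score = max(cosine(unseen, v) for v in change_point_news)
print(round(score, 3))
```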


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0250268
Author(s):  
Xiaozhu Jian ◽  
Dai Buyun ◽  
Deng Yuanping

The three-parameter Logistic model (3PLM) and the four-parameter Logistic model (4PLM) have been proposed to reduce bias in cases of response disturbances, including random guessing and carelessness. However, these models can also distort ability estimates for examinees who neither guess nor make careless errors. This paper proposes a new approach to this problem: a robust estimation method based on the 4PLM (4PLM-Robust), involving a critical-probability guessing parameter and a carelessness parameter. This approach is compared with the 2PLM-MLE (two-parameter Logistic model with a maximum likelihood estimator), the 3PLM-MLE, the 4PLM-MLE, the Biweight estimation, and the Huber estimation in terms of bias, using an example and three simulation studies. The results show that the 4PLM-Robust is an effective method for robust estimation, and its calculation is simpler than the Biweight estimation and the Huber estimation.
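The 4PLM item response function referenced above has a standard closed form: the lower asymptote c captures random guessing and the upper asymptote d captures carelessness. The parameter values in this sketch are illustrative, not estimates from the paper.

```python
# Sketch of the four-parameter logistic (4PLM) item response function:
#   P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))
# where a is discrimination, b is difficulty, c is the guessing asymptote,
# and d is the carelessness (upper) asymptote. Values below are illustrative.
import math

def p_4pl(theta, a, b, c, d):
    """Probability of a correct response under the 4PLM."""
    return c + (d - c) / (1 + math.exp(-a * (theta - b)))

# With c = 0.2 and d = 0.95, even very low-ability examinees succeed about
# 20% of the time (guessing), and very able ones still err about 5% (slips).
print(p_4pl(-5, a=1.5, b=0.0, c=0.2, d=0.95))  # close to 0.2
print(p_4pl(+5, a=1.5, b=0.0, c=0.2, d=0.95))  # close to 0.95
```

Setting d = 1 recovers the 3PLM, and setting c = 0 as well recovers the 2PLM.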


2020 ◽  
Vol 11 ◽  
Author(s):  
Chia-Ling Hsu ◽  
Kuan-Yu Jin ◽  
Ming Ming Chiu

2020 ◽  
Vol 7 (7) ◽  
pp. 200307
Author(s):  
Jamie Webster ◽  
Martyn Amos

The accuracy and believability of crowd simulations underpins computational studies of human collective behaviour, with implications for urban design, policing, security and many other areas. Accuracy concerns the closeness of the fit between a simulation and observed data, and believability concerns the human perception of plausibility. In this paper, we address both issues via a so-called ‘Turing test’ for crowds, using movies generated from both accurate simulations and observations of real crowds. The fundamental question we ask is ‘Can human observers distinguish between real and simulated crowds?’ In two studies with student volunteers (n = 384 and n = 156), we find that non-specialist individuals are able to reliably distinguish between real and simulated crowds when they are presented side by side, but they are unable to accurately classify them. Classification performance improves slightly when crowds are presented individually, but not enough to outperform random guessing. We find that untrained individuals have an idealized view of human crowd behaviour which is inconsistent with observations of real crowds. Our results suggest a possible framework for establishing a minimal set of collective behaviours that should be integrated into the next generation of crowd simulation models.
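Whether classification "outperforms random guessing" is a statistical claim, and a one-sided exact binomial test against 50% chance is one standard way to check it. The counts in this sketch are invented for illustration and are not taken from the studies.

```python
# Hedged sketch: testing whether classifying crowd movies as real vs simulated
# beats 50% random guessing, via an exact one-sided binomial test.
# The counts below are hypothetical, not the paper's data.
from math import comb

def binom_p_value(successes, trials, p=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p): upper-tail probability."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Example: 84 correct classifications out of 156 viewings. The p-value is
# well above conventional thresholds, so this would not be distinguishable
# from random guessing.
print(binom_p_value(84, 156))
```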


2020 ◽  
Vol 34 (04) ◽  
pp. 3324-3331
Author(s):  
Chris Cameron ◽  
Rex Chen ◽  
Jason Hartford ◽  
Kevin Leyton-Brown

Strangely enough, it is possible to use machine learning models to predict the satisfiability status of hard SAT problems with accuracy considerably higher than random guessing. Existing methods have relied on extensive, manual feature engineering and computationally complex features (e.g., based on linear programming relaxations). We show for the first time that even better performance can be achieved by end-to-end learning methods — i.e., models that map directly from raw problem inputs to predictions and take only linear time to evaluate. Our work leverages deep network models which capture a key invariance exhibited by SAT problems: satisfiability status is unaffected by reordering variables and clauses. We show that end-to-end learning with deep networks can outperform previous work on random 3-SAT problems at the solubility phase transition, where: (1) exactly 50% of problems are satisfiable; and (2) empirical runtimes of known solution methods scale exponentially with problem size (e.g., we achieved 84% prediction accuracy on 600-variable problems, which take hours to solve with state-of-the-art methods). We also show that deep networks can generalize across problem sizes (e.g., a network trained only on 100-variable problems, which typically take about 10 ms to solve, achieved 81% accuracy on 600-variable problems).
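The invariance the architecture exploits is easy to verify directly on a toy instance: renaming variables and reordering clauses cannot change whether a CNF formula is satisfiable. The brute-force checker below demonstrates this on a made-up three-variable formula (DIMACS-style signed-integer literals); it is a sanity check of the property, not the paper's model.

```python
# Tiny demonstration of the key invariance: a formula's satisfiability status
# is unchanged by reordering variables and clauses. Brute-force check on a
# toy CNF, with literals as signed integers (DIMACS convention).
from itertools import product

def satisfiable(cnf, n_vars):
    """Try every truth assignment; True if some assignment satisfies all clauses."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in cnf):
            return True
    return False

cnf = [[1, -2], [2, 3], [-1, -3]]  # (x1 or !x2) and (x2 or x3) and (!x1 or !x3)
status = satisfiable(cnf, 3)

# Rename variables 1->2, 2->3, 3->1 and reverse the clause order:
# the satisfiability status must be identical.
rename = {1: 2, 2: 3, 3: 1}
permuted = [[(1 if l > 0 else -1) * rename[abs(l)] for l in c] for c in reversed(cnf)]
print(status, satisfiable(permuted, 3))  # both True for this formula
```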


2019 ◽  
Vol 6 (12) ◽  
Author(s):  
Joseph E Marturano ◽  
Thomas J Lowery

Abstract
Background: ESKAPE bacteria are thought to be especially resistant to antibiotics, and their resistance and prevalence in bloodstream infections are rising. Large studies are needed to better characterize the clinical impact of these bacteria and to develop algorithms that alert clinicians when patients are at high risk of an ESKAPE infection.
Methods: From a US data set of >1.1 M patient encounters, we evaluated whether ESKAPE pathogens produced worse outcomes than non-ESKAPE pathogens and whether an ESKAPE infection could be predicted using simple word group algorithms built from decision trees.
Results: We found that ESKAPE pathogens represented 42.2% of species isolated from bloodstream infections and, compared with non-ESKAPE pathogens, were associated with a 3.3-day increase in length of stay, a $5500 increase in cost of care, and a 2.1% absolute increase in mortality (P < 1e-99). ESKAPE pathogens were not universally more resistant to antibiotics, but only to select antibiotics (P < 5e-6), particularly those used as common empiric therapies. In addition, simple word group algorithms predicted ESKAPE pathogens with a positive predictive value of 7.9% to 56.2%, exceeding the 4.8% achieved by random guessing (P < 1e-99).
Conclusions: Taken together, these data highlight the pathogenicity of ESKAPE bacteria, potential mechanisms of their pathogenicity, and the potential to predict ESKAPE infections upon admission. Implementing word group algorithms could enable earlier and targeted therapies against ESKAPE bacteria and thus reduce their burden on the health care system.
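The comparison between a rule's positive predictive value and random guessing reduces to comparing PPV against baseline prevalence. The sketch below uses hypothetical counts, not the study's data, to show the computation.

```python
# Sketch of comparing a prediction rule's positive predictive value (PPV)
# against the baseline prevalence achieved by random guessing.
# All counts below are hypothetical, not from the study's data set.

def ppv(true_pos, false_pos):
    """Fraction of flagged cases that are true positives."""
    return true_pos / (true_pos + false_pos)

# A hypothetical word-group rule flags 200 encounters, 45 of which are
# truly ESKAPE bloodstream infections.
rule_ppv = ppv(45, 155)   # 0.225
baseline = 480 / 10000    # 4.8% prevalence: the PPV of random guessing

print(rule_ppv, rule_ppv / baseline)  # ~4.7x enrichment over guessing
```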


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Yonghyun Nam ◽  
Dong-gi Lee ◽  
Sunjoo Bang ◽  
Ju Han Kim ◽  
Jae-Hoon Kim ◽  
...  

Abstract
Background: Recent advances in human disease networks have provided insights into the relationships between the genotypes and phenotypes of diseases. Despite this progress, the disease network remains only a map of topological relationships between diseases, not yet a pragmatic diagnostic/prognostic tool in medicine. It can evolve from a map into a translational tool if it is equipped with a scoring function that measures the likelihood of association between diseases. A physician examining a patient could then suggest several diseases that are highly likely to co-occur with the primary disease, according to the scores. In this study, we propose a method of implementing an ‘n-of-1 utility’ (n potential diseases of one patient) on the human disease network: the translational disease network.
Results: We first construct a disease network by applying the notion of a walk from graph theory to a protein-protein interaction network, and then provide a scoring algorithm that quantifies the likelihood of disease co-occurrence given a primary disease. Metabolic diseases, which are highly prevalent but for which previous studies have found only a few associations, are chosen as entries of the network.
Conclusions: The proposed method substantially increased connectivity between metabolic diseases and provided scores for co-occurring diseases. The increased connectivity made the disease network more information-rich. The result lifted the AUC from the 0.5 of random guessing to 0.72 and appeared concordant with the existing literature on disease comorbidity.
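Walk-based scoring can be sketched on a toy graph: powers of the adjacency matrix count walks of each length, and down-weighting longer walks gives a nonzero association score even for node pairs with no direct edge. The graph, walk length cap, and weighting below are illustrative, not the paper's actual construction.

```python
# Hedged sketch of scoring association between two nodes by counting short
# walks in a network, in the spirit of a walk-based disease network.
# Toy 3-node graph and length-<=3 weighting are illustrative assumptions.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix: nodes 0-1 and 1-2 are linked directly; 0 and 2 are
# connected only through walks via node 1.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
A2 = matmul(A, A)   # A2[i][j] counts walks of length 2 from i to j
A3 = matmul(A2, A)  # A3[i][j] counts walks of length 3

def walk_score(i, j, weight=0.5):
    """Weighted count of walks of length 1-3; longer walks count for less."""
    return A[i][j] + weight * A2[i][j] + weight**2 * A3[i][j]

# No direct edge between 0 and 2, yet the walk score is nonzero:
# this is how the construction increases connectivity between diseases.
print(walk_score(0, 2))
```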

