Comparison of Machine Learning Performance for Earnings Forecasting

2019 ◽  
Vol 20 (6) ◽  
pp. 9-34
Author(s):  
Woo June Jung


2021 ◽  
Author(s):  
Muhammad Sajid

Abstract Machine learning is proving successful across many fields, including medicine, automotive engineering, and planning. In geoscience, ML has shown impressive results in seismic fault interpretation, advanced seismic attribute analysis, facies classification, and the extraction of geobodies such as channels, carbonates, and salt bodies. One challenge in geoscience is the availability of labeled data, whose preparation is one of the most time-consuming requirements of supervised deep learning. In this paper, an advanced learning approach is proposed for geoscience in which the machine observes seismic interpretation activities and learns simultaneously as the interpretation progresses. Initial testing showed that the proposed method, combined with transfer learning, is highly effective: the machine accurately predicts features, requiring only minor post-prediction filtering to be accepted as the optimal interpretation.
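The learn-as-you-interpret idea described in this abstract can be approximated with incremental (online) learning, where a model is updated batch by batch as new labels arrive rather than retrained from scratch. A minimal sketch using scikit-learn's `SGDClassifier.partial_fit` on synthetic "seismic attribute" vectors; the data, class meanings, and session loop here are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-sample seismic attribute vectors:
# class 0 = background, class 1 = "fault-like" feature (illustrative only).
def labelled_batch(n=50):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0, scale=0.5, size=(n, 4))
    return X, y

clf = SGDClassifier(random_state=0)

# The model updates incrementally as each "interpretation session"
# yields a few more labelled examples, instead of retraining from scratch.
for session in range(10):
    X, y = labelled_batch()
    clf.partial_fit(X, y, classes=[0, 1])

X_test, y_test = labelled_batch(200)
print(f"accuracy after 10 sessions: {clf.score(X_test, y_test):.2f}")
```

The key design point is that `partial_fit` keeps the model's weights between calls, which is what lets the machine "learn as the interpretation progresses" without a full retraining pass.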


2020 ◽  
Vol 125 (2) ◽  
pp. 1197-1212
Author(s):  
Yeow Chong Goh ◽  
Xin Qing Cai ◽  
Walter Theseira ◽  
Giovanni Ko ◽  
Khiam Aik Khor

Abstract We study whether humans or machine learning (ML) classification models are better at classifying scientific research abstracts according to a fixed set of discipline groups. We recruit both undergraduate and postgraduate assistants for this task in separate stages, and compare their performance against a support vector machine (SVM) ML algorithm at classifying European Research Council Starting Grant project abstracts to their actual evaluation panels, which are organised by discipline groups. On average, ML is more accurate than human classifiers, across a variety of training and test datasets, and across evaluation panels. ML classifiers trained on different training sets are also more reliable than human classifiers: different ML classifiers are more consistent in assigning the same classifications to any given abstract than different human classifiers are. While the top five per cent of human classifiers can outperform ML in limited cases, selecting and training such classifiers is likely costly and difficult compared to training ML models. Our results suggest ML models are a cost-effective and highly accurate method for addressing problems in comparative bibliometric analysis, such as harmonising the discipline classifications of research from different funding agencies or countries.
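SVM-based abstract classification of the kind this paper benchmarks is commonly built as a TF-IDF + linear SVM pipeline. A minimal sketch of that standard baseline; the toy abstracts and panel-style labels below are invented for illustration and do not reproduce the authors' data or setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: (abstract text, discipline-group label) pairs.
train_texts = [
    "We prove bounds on eigenvalues of random matrices.",
    "Gene expression profiles reveal regulatory pathways in cells.",
    "A survey experiment on voter behaviour and party identification.",
    "Spectral theory of operators on Hilbert spaces.",
    "CRISPR screening identifies genes essential for cell division.",
    "Panel data analysis of labour market institutions.",
]
train_labels = ["PE", "LS", "SH", "PE", "LS", "SH"]  # illustrative domain codes

# TF-IDF features feeding a linear SVM: a common text-classification baseline.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)

# Classify an unseen abstract into one of the discipline groups.
print(clf.predict(["Spectral bounds for eigenvalues of random operators."]))
```

In practice the training set would be abstracts with known panel assignments, and the fitted model would assign new abstracts to panels, which is the harmonisation use case the abstract describes.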

