Evaluating Machine Learning Performance for Safe, Intelligent Robots

Author(s):  
Raymond Sheh
2021
Author(s):  
Muhammad Sajid

Abstract: Machine learning has proven successful across many fields, including medicine, automotive, planning, and engineering. In geoscience, ML has shown impressive results in seismic fault interpretation, advanced seismic attribute analysis, facies classification, and the extraction of geobodies such as channels, carbonates, and salt. One of the challenges faced in geoscience is the availability of labeled data, one of the most time-consuming requirements of supervised deep learning. In this paper, an advanced learning approach is proposed for geoscience in which the machine observes seismic interpretation activities and learns simultaneously as the interpretation progresses. Initial testing showed that the proposed method, combined with transfer learning, is highly effective: the machine accurately predicts features, requiring only minor post-prediction filtering to be accepted as the optimal interpretation.


AI Magazine
2013
Vol 34 (3)
pp. 93-98
Author(s):  
Vita Markman
Georgi Stojanov
Bipin Indurkhya
Takashi Kido
Keiki Takadama
...

The Association for the Advancement of Artificial Intelligence was pleased to present the AAAI 2013 Spring Symposium Series, held Monday through Wednesday, March 25-27, 2013. The titles of the eight symposia were Analyzing Microtext, Creativity and (Early) Cognitive Development, Data Driven Wellness: From Self-Tracking to Behavior Change, Designing Intelligent Robots: Reintegrating AI II, Lifelong Machine Learning, Shikakeology: Designing Triggers for Behavior Change, Trust and Autonomous Systems, and Weakly Supervised Learning from Multimedia. This report contains summaries of the symposia, written, in most cases, by the cochairs of the symposium.


2020
Vol 125 (2)
pp. 1197-1212
Author(s):  
Yeow Chong Goh
Xin Qing Cai
Walter Theseira
Giovanni Ko
Khiam Aik Khor

Abstract: We study whether humans or machine learning (ML) classification models are better at classifying scientific research abstracts according to a fixed set of discipline groups. We recruit both undergraduate and postgraduate assistants for this task in separate stages, and compare their performance against a support vector machine (SVM) classifier at assigning European Research Council Starting Grant project abstracts to their actual evaluation panels, which are organised by discipline groups. On average, ML is more accurate than human classifiers, across a variety of training and test datasets and across evaluation panels. ML classifiers trained on different training sets are also more reliable than human classifiers: different ML classifiers are more consistent in assigning the same classification to any given abstract than different human classifiers are. While the top five per cent of human classifiers can outperform ML in limited cases, selecting and training such classifiers is likely costly and difficult compared to training ML models. Our results suggest ML models are a cost-effective and highly accurate method for addressing problems in comparative bibliometric analysis, such as harmonising the discipline classifications of research from different funding agencies or countries.
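The abstract describes SVM classification of abstracts into discipline groups but does not specify the feature pipeline. A minimal sketch of the standard setup, TF-IDF features feeding a linear SVM via scikit-learn, is below; the toy abstracts, the two discipline labels, and the use of `TfidfVectorizer` with `LinearSVC` are assumptions for illustration, not the authors' exact pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy abstracts and their discipline-group labels (illustrative only).
abstracts = [
    "We prove a theorem on elliptic curves and modular forms.",
    "Gene expression profiling reveals pathways in tumor cells.",
    "We derive bounds for prime gaps using sieve methods.",
    "CRISPR screening identifies regulators of cell division.",
]
labels = ["mathematics", "life_sciences", "mathematics", "life_sciences"]

# TF-IDF features + linear SVM: a standard text-classification pipeline.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(abstracts, labels)

# Assign a new abstract to a discipline group.
pred = clf.predict(["A new proof concerning modular forms and primes."])
print(pred[0])
```

In a real bibliometric setting the labels would come from the funding agency's own panel assignments, which is what makes supervised training feasible at scale.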

