On Machine Symbol Grounding and Optimization

Author(s):  
Oliver Kramer

From the point of view of an autonomous agent, the world consists of high-dimensional, dynamic sensorimotor data. Interface algorithms translate these data into symbols that are easier for cognitive processes to handle. Symbol grounding asks whether such systems can, on the basis of these data, construct symbols that serve as a vehicle for higher, symbol-oriented cognitive processes. Machine learning and data mining techniques are geared towards finding structures and input-output relations in these data, and thus provide appropriate interface algorithms that translate raw data into symbols. This work formulates interface design as a global optimization problem whose objective is to maximize the success of the overlying symbolic algorithm. For its implementation, various known algorithms from data mining and machine learning turn out to be adequate methods that not only exploit the intrinsic structure of the subsymbolic data but also adapt flexibly to the objectives of the symbolic process. Furthermore, this work discusses the optimization formulation as a functional perspective on symbol grounding that does not violate the zero semantic commitment condition. A case study illustrates technical details of the machine symbol grounding approach.
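To make the optimization formulation concrete, here is a minimal, illustrative sketch (not taken from the paper): the interface is a simple threshold discretizer mapping 1-D sensor readings to symbols, and its parameter is chosen by grid search to maximize the success of the symbolic process. The data, the threshold interface, and the choice of grid search are all assumptions for illustration.

```python
# Sketch of the idea: treat the subsymbolic-to-symbolic interface as a
# parameterized mapping and optimize its parameter for the success of the
# overlying symbolic process. Everything here is illustrative.
import random

random.seed(0)

# Subsymbolic data: noisy 1-D sensor readings from two latent situations.
data = [(random.gauss(0.3, 0.1), "A") for _ in range(100)] + \
       [(random.gauss(0.7, 0.1), "B") for _ in range(100)]

def interface(x, theta):
    """Interface algorithm: discretize a raw reading into a symbol."""
    return "A" if x < theta else "B"

def symbolic_success(theta):
    """Objective: fraction of cases where the symbolic process acts correctly."""
    return sum(interface(x, theta) == label for x, label in data) / len(data)

# Global optimization over the interface parameter (here: simple grid search).
best_theta = max((t / 100 for t in range(1, 100)), key=symbolic_success)
```

The same scheme generalizes directly: replace the threshold with a clustering or dimensionality-reduction interface and grid search with any global optimizer.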


2010 ◽  
Vol 19 (07) ◽  
pp. 1049-1106 ◽  
Author(s):  
NICHOLAS M. BALL ◽  
ROBERT J. BRUNNER

We review the current state of data mining and machine learning in astronomy. Data mining can have a somewhat mixed connotation from the point of view of a researcher in this field. If used correctly, it can be a powerful approach, holding the potential to fully exploit the exponentially increasing amount of available data and promising great scientific advances. However, if misused, it can be little more than the black-box application of complex computing algorithms that gives little physical insight and provides questionable results. Here, we give an overview of the entire data mining process, from data collection through to the interpretation of results. We cover common machine learning algorithms, such as artificial neural networks and support vector machines; applications from a broad range of astronomy, emphasizing those in which data mining techniques directly contributed to improving science; and important current and future directions, including probability density functions, parallel algorithms, petascale computing, and the time domain. We conclude that, so long as one carefully selects an appropriate algorithm and is guided by the astronomical problem at hand, data mining can be very much the powerful tool, and not the questionable black box.
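As an illustrative aside (not drawn from the review itself), the simplest form of the artificial neural networks the authors mention, a perceptron, can be sketched in a few lines. The two features and object classes below are invented stand-ins for, say, a colour index and a concentration measure used to separate stars from galaxies.

```python
# A perceptron, the simplest "artificial neural network", trained on two
# synthetic, linearly separable object classes. All data are made up.
import random

random.seed(1)

# Hypothetical training data: (feature1, feature2, label in {-1, +1})
stars    = [(random.gauss(0.2, 0.1), random.gauss(0.8, 0.1), -1) for _ in range(50)]
galaxies = [(random.gauss(0.8, 0.1), random.gauss(0.2, 0.1), +1) for _ in range(50)]
train = stars + galaxies
random.shuffle(train)

w1 = w2 = b = 0.0
for _ in range(20):                              # a few passes over the data
    for x1, x2, y in train:
        if y * (w1 * x1 + w2 * x2 + b) <= 0:     # misclassified -> update weights
            w1 += y * x1
            w2 += y * x2
            b  += y

accuracy = sum((1 if w1 * x1 + w2 * x2 + b > 0 else -1) == y
               for x1, x2, y in train) / len(train)
```

On separable data like this the perceptron converges; real survey data would of course demand the more capable networks and support vector machines the review surveys.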


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr

2002 ◽  
Vol 16 (3) ◽  
pp. 129-149 ◽  
Author(s):  
Boris Kotchoubey

Abstract Most cognitive psychophysiological studies assume (1) that there is a chain of (partially overlapping) cognitive processes (processing stages, mechanisms, operators) leading from stimulus to response, and (2) that components of event-related brain potentials (ERPs) may be regarded as manifestations of these processing stages. What is usually discussed is which particular processing mechanisms are related to a particular component, not whether such a relationship exists at all. Alternatively, from the point of view of noncognitive (e.g., “naturalistic”) theories of perception, ERP components might be conceived of as correlates of the extraction of information from the experimental environment. In a series of experiments, the author attempted to separate these two accounts, i.e., internal variables such as mental operations or cognitive parameters versus external variables such as the information content of stimulation. Whenever this separation could be performed, the latter factor proved to affect ERP amplitudes significantly, whereas the former did not. These data indicate that ERPs cannot be unequivocally linked to the processing mechanisms postulated by cognitive models of perception and therefore cannot be regarded as support for these models.


2019 ◽  
Vol 12 (3) ◽  
pp. 171-179 ◽  
Author(s):  
Sachin Gupta ◽  
Anurag Saxena

Background: The amplification of variability in production or procurement relative to the smaller variability in demand or sales is known as the bullwhip effect. The bullwhip effect is an encumbrance to supply chain optimization, as it causes inefficiency in the supply chain. Operations and supply chain management consultants, managers, and researchers have rigorously studied the causes behind the dynamic nature of supply chains and have identified, among others, shorter product life cycles, changes in technology, changes in consumer preferences, and globalization. Most of the literature exploring the bullwhip effect is based on simulations and mathematical models; exploring it with machine learning is the novel approach of the present study. Methods: The present study explores the operational and financial variables affecting the bullwhip effect on the basis of secondary data. Data mining and machine learning techniques are used to explore the variables affecting the bullwhip effect in Indian sectors. The RapidMiner tool was used for data mining, and 10-fold cross-validation was performed. After classification, a Weka alternating decision tree (w-ADT) was built to help decision makers mitigate the bullwhip effect. Results: Of the 19 selected variables affecting the bullwhip effect, the 7 with the highest accuracy and minimum deviation were retained. Conclusion: Classification using machine learning provides effective tools and techniques for exploring the bullwhip effect in supply chain management.
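The 10-fold cross-validation protocol the study performed can be sketched as follows. The study itself used RapidMiner and a Weka alternating decision tree, so the one-level decision stump and the synthetic data below are stand-ins of our own, illustrating only the fold-splitting and accuracy-averaging procedure.

```python
# Minimal 10-fold cross-validation skeleton with a decision stump as a
# placeholder classifier. Data and the ground-truth rule are invented.
import random

random.seed(2)
xs = [random.random() for _ in range(200)]
data = [(x, x > 0.6) for x in xs]        # (variable value, bullwhip present?)
random.shuffle(data)

def train_stump(train):
    """Pick the threshold that best separates the two classes on the fold."""
    candidates = sorted(x for x, _ in train)
    return max(candidates,
               key=lambda t: sum((x > t) == y for x, y in train))

k = 10
fold_size = len(data) // k
accuracies = []
for i in range(k):
    test  = data[i * fold_size:(i + 1) * fold_size]
    train = data[:i * fold_size] + data[(i + 1) * fold_size:]
    t = train_stump(train)
    accuracies.append(sum((x > t) == y for x, y in test) / len(test))

mean_accuracy = sum(accuracies) / k
```

Each observation serves in exactly one test fold, so the mean accuracy estimates out-of-sample performance rather than fit to the training data.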


Author(s):  
Sanford C. Goldberg

Chapter 3 deals with the first issue one faces in the task of articulating the explicit epistemic criteria for belief: the problem of the criterion. It is tempting to suppose that a belief can be normatively proper from the epistemic point of view only if the believer can certify for herself the reliability of every belief-forming process on which she relied. But insisting on this quickly leads to the threat of an infinite regress. This chapter defends a foundationalist response to this problem, according to which we enjoy a default (albeit defeasible) permission to rely on certain cognitive processes in belief-formation. These are processes that satisfy what the author calls the Reliabilist Rationale. Importantly, our permissions here are social: any one of us is permitted to rely on any token process that satisfies this rationale, whether the token process resides in one’s own mind/brain or that of another epistemic subject.


2021 ◽  
Vol 1088 (1) ◽  
pp. 012035
Author(s):  
Mulyawan ◽  
Agus Bahtiar ◽  
Githera Dwilestari ◽  
Fadhil Muhammad Basysyar ◽  
Nana Suarna

2021 ◽  
pp. 097215092098485
Author(s):  
Sonika Gupta ◽  
Sushil Kumar Mehta

Data mining techniques have proven quite effective not only in detecting financial statement fraud but also in discovering other financial crimes, such as credit card fraud, loan and security fraud, corporate fraud, and bank and insurance fraud. Classification using data mining techniques has, in recent years, been accepted as one of the most credible methodologies for detecting symptoms of financial statement fraud by scanning the published financial statements of companies. The retrieved literature that has used data mining classification techniques can be broadly categorized, by the type of technique applied, into statistical techniques and machine learning techniques. The biggest challenge in executing the classification process with data mining techniques lies in collecting a data sample of fraudulent companies and mapping that sample against non-fraudulent companies. In this article, a systematic literature review (SLR) of studies in the area of financial statement fraud detection has been conducted, covering research articles published between 1995 and 2020. Further, a meta-analysis has been performed to establish the effect of mapping the sample of fraudulent companies against non-fraudulent companies on the classification methods, by comparing the overall classification accuracy reported in the literature. The retrieved literature indicates that a fraudulent sample can either be paired equally with a non-fraudulent sample (1:1 data mapping) or be mapped unequally using a 1:many ratio to increase the sample size proportionally. Based on the meta-analysis of the research articles, it can be concluded that machine learning approaches can achieve better classification accuracy than statistical approaches, particularly when the availability of sample data is low. High classification accuracy can be obtained even with a 1:1 mapped data set using machine learning classification approaches.
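The two sample-mapping schemes the review describes can be sketched as follows. The company lists and the `map_sample` helper are hypothetical and imply no real data; the sketch only shows how a 1:1 pairing and a 1:many mapping differ in the class balance of the resulting sample.

```python
# Sketch of 1:1 versus 1:many mapping of fraudulent against non-fraudulent
# companies. All company names are placeholders.
import random

random.seed(3)
fraudulent     = [f"fraud_co_{i}" for i in range(30)]
non_fraudulent = [f"clean_co_{i}" for i in range(300)]

def map_sample(fraud, clean, ratio):
    """Pair each fraudulent firm with `ratio` non-fraudulent firms."""
    matched = random.sample(clean, len(fraud) * ratio)
    return [(c, 1) for c in fraud] + [(c, 0) for c in matched]

one_to_one  = map_sample(fraudulent, non_fraudulent, 1)   # balanced: 30 + 30
one_to_many = map_sample(fraudulent, non_fraudulent, 5)   # 1:5 ratio: 30 + 150
```

The 1:many sample is larger but imbalanced, which is why the meta-analysis compares classification accuracy across both schemes rather than assuming a bigger sample is always better.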

