The Problem of Meaning in AI and Robotics: Still with Us after All These Years

Philosophies ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 14 ◽  
Author(s):  
Tom Froese ◽  
Shigeru Taguchi

In this essay we critically evaluate the progress that has been made in solving the problem of meaning in artificial intelligence (AI) and robotics. We remain skeptical about solutions based on deep neural networks and cognitive robotics, which in our opinion do not fundamentally address the problem. We agree with the enactive approach to cognitive science that things appear as intrinsically meaningful for living beings because of their precarious existence as adaptive autopoietic individuals. But this approach inherits the problem of failing to account for how meaning as such could make a difference for an agent’s behavior. In a nutshell, if life and mind are identified with physically deterministic phenomena, then there is no conceptual room for meaning to play a role in its own right. We argue that this impotence of meaning can be addressed by revising the concept of nature such that the macroscopic scale of the living can be characterized by physical indeterminacy. We consider the implications of this revision of the mind-body relationship for synthetic approaches.

Author(s):  
Vishal Babu Siramshetty ◽  
Dac-Trung Nguyen ◽  
Natalia J. Martinez ◽  
Anton Simeonov ◽  
Noel T. Southall ◽  
...  

The rise of novel artificial intelligence methods necessitates a comparison of this wave of new approaches with classical machine learning for a typical drug discovery project. Inhibition of the potassium ion channel whose alpha subunit is encoded by the human Ether-à-go-go-Related Gene (hERG) leads to a prolonged QT interval of the cardiac action potential and is a significant safety pharmacology target for the development of new medicines. Several computational approaches have been employed to develop prediction models for assessment of hERG liabilities of small molecules, including recent work using deep learning methods. Here we perform a comprehensive comparison of prediction models based on classical (random forests and gradient boosting) and modern (deep neural networks and recurrent neural networks) artificial intelligence methods. The training set (~9000 compounds) was compiled by integrating hERG bioactivity data from the ChEMBL database with experimental data generated from an in-house, high-throughput thallium flux assay. We utilized different molecular descriptors, including latent descriptors, which are real-valued continuous vectors derived from chemical autoencoders trained on a large chemical space (>1.5 million compounds). The models were prospectively validated on ~840 in-house compounds screened in the same thallium flux assay. The deep neural networks performed significantly better than the classical methods with the latent descriptors. The recurrent neural networks that operate on SMILES provided the highest model sensitivity. The best models were merged into a consensus model that offered superior performance compared to reference models from academic and commercial domains. Further, we shed light on the potential of artificial intelligence methods to exploit big data in chemistry and generate novel chemical representations useful in predictive modeling and tailoring new chemical space.
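As a rough, hedged illustration of the kind of comparison and consensus modelling described in this abstract, the sketch below trains classical ensembles and a feed-forward network on placeholder descriptor vectors and averages their predicted probabilities. The descriptor dimensions, labels, and data are assumptions for the example, not the authors' pipeline.

```python
# Hypothetical sketch: classical ensembles vs. a small feed-forward network on
# molecular descriptors, merged into a simple averaging consensus. Descriptor
# values and labels are random placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(9000, 256))           # stand-in for latent descriptors
y = rng.integers(0, 2, size=9000)          # stand-in for hERG active/inactive labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "deep_nn": MLPClassifier(hidden_layer_sizes=(512, 256), max_iter=50, random_state=0),
}

probs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    probs[name] = model.predict_proba(X_te)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, probs[name]), 3))

# Consensus: average the predicted probabilities of the individual models.
consensus = np.mean(list(probs.values()), axis=0)
print("consensus AUC:", round(roc_auc_score(y_te, consensus), 3))
```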


2020 ◽  
pp. 57-63
Author(s):  
admin admin ◽  
...  

Human facial emotion recognition has attracted interest in the field of artificial intelligence. The emotions on a human face reflect what is going on inside the mind. Facial expression recognition is a part of facial recognition that is gaining importance, and the need for it is increasing tremendously. Although there are methods to identify expressions using machine learning and artificial intelligence techniques, this work uses convolutional neural networks to recognize expressions and classify them into six emotion categories. The datasets investigated and explored for training the expression recognition models are described in this paper, and the models used are VGG-19 and ResNet-18. We also include gender identification alongside facial emotion recognition. In this project we used the FER2013 and CK+ datasets and ultimately achieved accuracies of around 73% and 94%, respectively.
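A minimal sketch of the kind of transfer-learning setup described above, assuming a PyTorch/torchvision environment and an ImageFolder-style dataset layout (one sub-folder per emotion); the paths, epoch count, and hyperparameters are illustrative, not the paper's exact configuration.

```python
# Fine-tuning a ResNet-18 backbone for facial expression classification.
# Dataset layout and hyperparameters are assumptions for this example.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_classes = 6  # six emotion categories, as in the abstract

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes an ImageFolder-style dataset, e.g. data/train/happy/*.png
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```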


2021 ◽  
Vol 6 (5) ◽  
pp. 10-15
Author(s):  
Ela Bhattacharya ◽  
D. Bhattacharya

COVID-19 has emerged as the latest worrisome pandemic, with its outbreak reported in Wuhan, China. The infection spreads by means of human contact and, as a result, has caused massive infections across 200 countries around the world. Artificial intelligence has likewise contributed to managing the COVID-19 pandemic in various aspects within a short span of time. The deep neural networks explored in this paper have contributed to the detection of COVID-19 from imaging sources. The datasets, pre-processing, segmentation, feature extraction, classification and test results, which can be useful for discovering future directions in the domain of automatic diagnosis of the disease using artificial intelligence-based frameworks, are investigated in this paper.
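To make the surveyed pipeline stages concrete, here is a small, hedged sketch of a CNN classifier for chest radiographs; the architecture, image size, and two-class setup are assumptions for illustration only, not taken from any specific study reviewed in the paper.

```python
# Illustrative binary COVID-19 vs. normal classifier for chest radiographs,
# reflecting the generic pipeline (pre-processing, feature extraction,
# classification). Architecture and image size are assumptions.
import torch
import torch.nn as nn

class ChestXrayCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(      # feature extraction
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(    # classification head
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64), nn.ReLU(),
            nn.Linear(64, 2),               # COVID-19 vs. normal
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Pre-processing stand-in: single-channel 224x224 images, intensity in [0, 1].
dummy_batch = torch.rand(4, 1, 224, 224)
logits = ChestXrayCNN()(dummy_batch)
print(logits.shape)  # torch.Size([4, 2])
```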


2020 ◽  
Author(s):  
Simon Nachtergaele ◽  
Johan De Grave

Abstract. Artificial intelligence techniques such as deep neural networks and computer vision have been developed for fission track recognition and included in a computer program for the first time. These deep neural networks use the YOLOv3 object detection algorithm, which is currently one of the most powerful and fastest object recognition algorithms. The networks can be used in new software called AI-Track-tive. The developed program successfully finds most of the fission tracks in the microscope images; however, the user still needs to supervise the automatic counting. The success rates of the automatic recognition range from 70% to 100%, depending on the areal track densities in apatite and the (muscovite) external detector. The success rate generally decreases for images with high areal track densities, because overlapping tracks are less easily recognizable by computer vision techniques.
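As a hedged sketch of how a YOLOv3-style detector can be applied to a microscope image, the snippet below uses OpenCV's DNN module; the configuration and weight file names, image path, and confidence threshold are placeholders, and AI-Track-tive's own implementation may differ.

```python
# Running a YOLOv3-style detector on a microscope image with OpenCV's DNN
# module, analogous to the automatic track recognition described above.
# File names and the confidence threshold are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3_tracks.cfg", "yolov3_tracks.weights")
layer_names = net.getUnconnectedOutLayersNames()

image = cv2.imread("apatite_mount.png")

# YOLO expects a square, normalized input blob.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

# Count detections above a confidence threshold as candidate fission tracks.
track_count = 0
for output in outputs:
    for detection in output:
        confidence = float(np.max(detection[5:]))  # class scores follow box coords
        if confidence > 0.5:
            track_count += 1
print("candidate tracks:", track_count)
```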


Deep neural networks combined with machine learning (ML) algorithms constitute a design well suited to dealing with the vast amounts of data in retail business. Limited research has addressed reducing memory consumption when integrating ML algorithms into a data management system. This paper proposes combining data management and deep neural networks to build systems in which vast amounts of data can be shared through the database system. The ML algorithm follows a multi-hidden-layer pattern that can be used to synthesize different decisions with minimal processing. Finally, the system employs a NoSQL layer of the model with in-memory database compression techniques and successfully handles data management challenges with large datasets.
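The abstract only sketches the proposed system, so the following is a loose, hypothetical illustration of the core idea: compressed feature blocks held in an in-memory key-value (NoSQL-style) layer feeding a multi-hidden-layer network. A plain Python dict with zlib compression stands in for a real in-memory database; nothing here is the paper's actual implementation.

```python
# Loose illustration: compressed feature blocks in an in-memory key-value store
# (NoSQL stand-in) consumed by a multi-hidden-layer network. All data are
# random placeholders.
import zlib
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
store = {}  # stand-in for an in-memory key-value database

# Ingest: compress each block of retail feature vectors before storing it.
for block_id in range(10):
    block = rng.normal(size=(1000, 32)).astype(np.float32)
    store[f"features:{block_id}"] = zlib.compress(block.tobytes())

# Training: decompress blocks on demand and feed a multi-hidden-layer model.
X = np.concatenate([
    np.frombuffer(zlib.decompress(buf), dtype=np.float32).reshape(-1, 32)
    for buf in store.values()
])
y = rng.integers(0, 2, size=len(X))  # placeholder labels

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=20, random_state=0)
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```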


Author(s):  
Jessica A. F. Thompson

Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. In order to discuss what constitutes scientific progress, one must have a goal in mind (progress towards what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitutes a valid explanation of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Towards this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.


Author(s):  
Henrik Sergoyan

Customer experience and resource management determine the degree to which transportation service providers can compete in today's heavily saturated markets. The paper investigates and suggests a new methodology to optimize calculations of the Estimated Time of Arrival (hereafter ETA, meaning the time it will take for the driver to reach the designated location) based on data provided by GG, collected from rides made in 2018. GG is a transportation service company that currently uses the Open Source Routing Machine (OSRM), which exhibits significant errors in the prediction phase. This paper shows that implementing algorithms such as XGBoost, CatBoost, and neural networks for this task improves the accuracy of estimation. The paper discusses the benefits and drawbacks of each model and then considers the performance of a stacking algorithm that combines several models into one. Using those techniques, the final results showed that the Mean Squared Error (MSE) was decreased by 54% compared to the current GG model.
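A brief, hedged sketch of the stacking approach described above, using scikit-learn stand-ins (GradientBoostingRegressor in place of XGBoost/CatBoost) on synthetic features; the feature set and target are placeholders, not GG's ride data.

```python
# Stacking a gradient-boosting model and a neural-network regressor into one
# ETA model, scored with mean squared error. Features and targets are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))            # e.g. distance, hour of day, traffic features
y = 300 + 60 * X[:, 0] + rng.normal(scale=30, size=5000)  # synthetic ETA in seconds

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("gbm", GradientBoostingRegressor(random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)),
    ],
    final_estimator=RidgeCV(),            # meta-model that blends the base predictions
)
stack.fit(X_tr, y_tr)
print("stacked MSE:", round(mean_squared_error(y_te, stack.predict(X_te)), 1))
```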

