The Curious Case of Connectionism

2019 ◽  
Vol 2 (1) ◽  
pp. 190-205
Author(s):  
Istvan S. N. Berkeley

Connectionist research first emerged in the 1940s. The first phase of connectionism attracted a certain amount of media attention, but scant philosophical interest. The phase came to an abrupt halt due to the efforts of Minsky and Papert (1969), who argued for the intrinsic limitations of the approach. In the mid-1980s connectionism saw a resurgence, marking the beginning of the second phase of connectionist research. This phase did attract considerable philosophical attention. It was of philosophical interest because it offered a way of counteracting the conceptual ties to the philosophical traditions of atomism, rationalism, logic, nativism, rule realism, and a concern with the role symbols play in human cognitive functioning, all of which were prevalent as a consequence of artificial intelligence research. The surge in philosophical interest waned, possibly in part due to the efforts of some traditionalists and the so-called black box problem. Most recently, what may be thought of as a third phase of connectionist research, based on so-called deep learning methods, is beginning to show signs of again exciting philosophical interest.

Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the most interesting issues to have emerged recently. Many researchers are approaching the subject from different angles, and interesting results have come out. However, we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be ones in which the openness of deep learning models is widely discussed. Deep learning methods are now a frequent presence in classical artificial intelligence applications. These methods can yield highly effective results depending on data set size, data set quality, the methods used in feature extraction, the hyperparameter set used in the models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black boxes that generalize from the data transmitted to them and learn from it; consequently, the relational link between input and output is not observable. This is an important open problem in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black box models.
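As a concrete illustration of the unobservable input-output link described above, a minimal gradient-based saliency sketch follows; this is one common first step toward explainability, not a method from the chapter (PyTorch is assumed, and the toy model is illustrative):

```python
# A minimal gradient-based saliency sketch (PyTorch assumed; the toy
# model and tensor shapes are illustrative, not from this chapter).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # one input sample
score = model(x)[0, 1]                     # logit of the class of interest
score.backward()                           # d(score)/d(input)

saliency = x.grad.abs().squeeze()          # per-feature influence estimate
print(saliency)                            # larger = more influential input
```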


Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 39
Author(s):  
Zhiyuan Xie ◽  
Shichang Du ◽  
Jun Lv ◽  
Yafei Deng ◽  
Shiyao Jia

Remaining Useful Life (RUL) prediction is significant in indicating the health status of sophisticated equipment, and because of its complexity it requires historical data. The number and complexity of environmental parameters such as vibration and temperature can make the data highly non-linear, which makes prediction tremendously difficult. Conventional machine learning models such as the support vector machine (SVM), random forest, and back-propagation neural network (BPNN), however, have limited capacity to predict accurately. In this paper, a two-phase deep learning model, the attention-convolutional forget-gate recurrent network (AM-ConvFGRNET), is proposed for RUL prediction. The first phase, the forget-gate convolutional recurrent network (ConvFGRNET), is based on a one-dimensional analog of the long short-term memory (LSTM) that removes all gates except the forget gate and uses chrono-initialized biases. The second phase is an attention mechanism, which ensures that the model extracts more specific features for generating an output, compensating for the drawback that FGRNET is a black box model and improving interpretability. The performance and effectiveness of AM-ConvFGRNET for RUL prediction are validated by comparing it with other machine learning and deep learning methods on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset and on a dataset from a ball screw experiment.
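A minimal sketch of the core idea, a recurrent cell that keeps only the forget gate and uses chrono-initialized biases, is given below. It assumes PyTorch; the cell is an illustration in the spirit of FGRNET, not the authors' implementation, and it omits the convolutional and attention components:

```python
# A sketch of a forget-gate-only recurrent cell with chrono-initialized
# biases (PyTorch assumed; an illustration of the idea behind FGRNET,
# not the authors' implementation).
import torch
import torch.nn as nn

class FGCell(nn.Module):
    def __init__(self, input_size, hidden_size, t_max=100):
        super().__init__()
        self.forget = nn.Linear(input_size + hidden_size, hidden_size)
        self.cand = nn.Linear(input_size + hidden_size, hidden_size)
        # Chrono initialization: bias ~ log(U(1, t_max - 1)), so the
        # forget gate starts near 1 and preserves long-range memory.
        with torch.no_grad():
            self.forget.bias.copy_(
                torch.log(torch.empty(hidden_size).uniform_(1, t_max - 1)))

    def forward(self, x, h):
        z = torch.cat([x, h], dim=-1)
        f = torch.sigmoid(self.forget(z))   # the only gate kept from LSTM
        h_new = torch.tanh(self.cand(z))    # candidate hidden state
        return f * h + (1 - f) * h_new      # gated leaky update

cell = FGCell(input_size=14, hidden_size=32)
h = torch.zeros(1, 32)
for t in range(50):                         # unroll over a toy sequence
    h = cell(torch.randn(1, 14), h)
```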


2020 ◽  
Vol 79 (Suppl 1) ◽  
pp. 1871-1872
Author(s):  
A. C. Genç ◽  
F. Turkoglu Genc ◽  
A. B. Kara ◽  
L. Genc Kaya ◽  
Z. Ozturk ◽  
...  

Background: Magnetic resonance imaging (MRI) of the sacroiliac (SI) joints is used to detect early sacroiliitis (1). There can be interobserver disagreement in the MRI findings of the SI joints of spondyloarthropathy patients between a rheumatologist, a local radiologist, and an expert radiologist (2). Artificial intelligence and deep learning methods for detecting abnormalities have become popular in radiology and other medical fields in recent years (3). A PubMed search for "artificial intelligence" and "radiology" over the last five years returned around 1500 clinical studies, yet no results were retrieved for "artificial intelligence" and "rheumatology".

Objectives: Artificial intelligence (AI) can help detect whether an area is pathological, as in sacroiliitis, and also allows us to characterize it quantitatively rather than qualitatively on SI-MRI.

Methods: Between 2015 and 2019, 8100 sacroiliac MRIs were taken at our center. The MRIs of 1150 patients that were reported as showing active or chronic sacroiliitis, or that the primary physician considered in favor of sacroiliitis, were included in the study. 1441 coronal STIR sequences from these 1150 patients were tagged as "active sacroiliitis" and used to train a model to detect and localize active sacroiliitis and provide prediction performance. The model is available for various operating systems (Image 1).

Results: Precision, the percentage of images flagged by the trained model that truly show sacroiliitis, is 87.1%. Recall, the percentage of the total sacroiliitis MRIs correctly classified by the model, is 82.1%, and the mean average precision (mAP) of the model is 89%.

Conclusion: There are gray areas in medicine, such as sacroiliitis. Interobserver variability can be reduced by AI and deep learning methods, and the efficiency and reliability of health services can be increased in this way.

References:
[1] Jans L, Egund N, Eshed I, Sudoł-Szopińska I, Jurik AG. Sacroiliitis in axial spondyloarthritis: assessing morphology and activity. Semin Musculoskelet Radiol. 2018;22:180-188.
[2] Arnbak B, Jensen TS, Manniche C, Zejden A, Egund N, Jurik AG. Spondyloarthritis-related and degenerative MRI changes in the axial skeleton: an inter- and intra-observer agreement study. BMC Musculoskelet Disord. 2013;14:274.
[3] Rueda JC, et al. Interobserver agreement in magnetic resonance of the sacroiliac joints in patients with spondyloarthritis. Int J Rheumatol. 2017.

Image 1. Bilateral active sacroiliitis detected automatically by the AI model (right sacroiliac joint 75.6% (>50%); left sacroiliac joint 65% (>50%)).

Disclosure of Interests: None declared
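For reference, the reported precision and recall correspond to the standard definitions sketched below; the counts are hypothetical placeholders chosen to reproduce the reported percentages, not the study's actual confusion matrix:

```python
# Standard precision/recall definitions (the counts below are
# hypothetical placeholders, not the study's confusion matrix).
def precision(tp, fp):
    return tp / (tp + fp)    # share of flagged scans that truly show sacroiliitis

def recall(tp, fn):
    return tp / (tp + fn)    # share of true sacroiliitis scans that are flagged

tp, fp, fn = 871, 129, 190   # hypothetical counts
print(precision(tp, fp))     # 0.871
print(recall(tp, fn))        # ~0.821
```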


2020 ◽  
Vol 189 ◽  
pp. 105316 ◽  
Author(s):  
Rogier R. Wildeboer ◽  
Ruud J.G. van Sloun ◽  
Hessel Wijkstra ◽  
Massimo Mischi

Author(s):  
Abraham Rudnick

Artificial intelligence (AI) and its correlates, such as machine and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to address the black box argument about knowledge as a myth, concluding that this argument is misleading because it ignores a fundamental tenet of science, namely that no empirical knowledge is certain and that scientific facts, as well as methods, often change. Instead, control of the technology of AI and its correlates has to be addressed to mitigate unexpected negative consequences.


2018 ◽  
Vol 1 (1) ◽  
pp. 181-205 ◽  
Author(s):  
Pierre Baldi

Since the 1980s, deep learning and biomedical data have been coevolving and feeding each other. The breadth, complexity, and rapidly expanding size of biomedical data have stimulated the development of novel deep learning methods, and the application of these methods to biomedical data has led to scientific discoveries and practical solutions. This overview provides technical and historical pointers to the field, and surveys current applications of deep learning to biomedical data organized around five subareas, roughly in order of increasing spatial scale: chemoinformatics, proteomics, genomics and transcriptomics, biomedical imaging, and health care. The black box problem of deep learning methods is also briefly discussed.


2020 ◽  
Vol 73 (4) ◽  
pp. 275-284
Author(s):  
Dukyong Yoon ◽  
Jong-Hwan Jang ◽  
Byung Jin Choi ◽  
Tae Young Kim ◽  
Chang Ho Han

Biosignals such as the electrocardiogram or photoplethysmogram are widely used for determining and monitoring the medical condition of patients. It was recently discovered that more information can be gathered from biosignals by applying artificial intelligence (AI). At present, one of the most impactful advancements in AI is deep learning. Deep learning-based models can extract important features from raw data without human feature engineering, provided the amount of data is sufficient. This capability presents opportunities to obtain latent information that may be used as a digital biomarker for detecting or predicting a clinical outcome or event without further invasive evaluation. However, the black box models of deep learning are difficult to understand for clinicians familiar with conventional methods of biosignal analysis. A basic knowledge of AI and machine learning is required for clinicians to properly interpret the extracted information and to adopt it in clinical practice. This review covers the basics of AI and machine learning and the feasibility of their application by clinicians to real-life situations in the near future.
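A minimal sketch of the kind of end-to-end feature learning described above, a small 1-D convolutional network applied to a raw biosignal trace, is shown below (PyTorch is assumed; the shapes and task head are illustrative, not from the review):

```python
# A minimal end-to-end feature-learning sketch on a raw biosignal
# (PyTorch assumed; shapes and the task head are illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1),              # e.g., risk of a clinical event
)

ecg = torch.randn(4, 1, 5000)      # four raw single-lead traces
logits = model(ecg)                # learned features, no hand-crafted ones
```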


2021 ◽  
Vol 2070 (1) ◽  
pp. 012141
Author(s):  
Pavan Sharma ◽  
Hemant Amhia ◽  
Sunil Datt Sharma

Nowadays, artificial intelligence techniques are becoming popular in modern industry for diagnosing rolling bearing faults (RBFs). RBFs occur in rotating machinery and are common in every manufacturing industry, and their diagnosis is highly needed to reduce financial and production losses. Therefore, various artificial intelligence techniques, such as machine learning and deep learning, have been developed to diagnose RBFs in rotating machines. The performance of these techniques, however, suffers with the size of the dataset, because machine learning methods are suitable for small datasets and deep learning methods for large ones, and deep learning methods are further limited by long training times. In this paper, the performance of different pre-trained models for RBF classification is analysed, using the CWRU dataset for the comparison.
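A minimal transfer-learning sketch in the spirit of that comparison follows; it assumes PyTorch/torchvision, and the spectrogram-image input and the 10-class head are illustrative choices, not details from the paper:

```python
# A minimal transfer-learning sketch (PyTorch/torchvision assumed;
# the spectrogram-image input and 10 fault classes are illustrative).
import torch
import torch.nn as nn
from torchvision import models

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in net.parameters():          # freeze the pre-trained backbone
    p.requires_grad = False
net.fc = nn.Linear(net.fc.in_features, 10)  # new head for fault classes

x = torch.randn(8, 3, 224, 224)     # vibration spectrograms as images
logits = net(x)                     # only the new head needs training
```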


2019 ◽  
Vol 87 (2) ◽  
pp. 27-29
Author(s):  
Meagan Wiederman

Artificial intelligence (AI) is the ability of any device to take an input, like that of its environment, and work to achieve a desired output. Some advancements in AI have focused on replicating the human brain in machinery. This is being made possible by the Human Connectome Project, an initiative to map all the connections between neurons within the brain. A full replication of the thinking brain would inherently create something that could be argued to be a thinking machine. It is more interesting, however, to ask whether a non-biologically-faithful AI could be considered a thinking machine. Under Turing's definition of 'thinking', a machine that can be mistaken for a human when responding in writing from a 'black box', where it cannot be viewed, can be said to pass for thinking. Backpropagation is an error-minimizing algorithm, with no biological counterpart, used to train AI for feature detection; it is prevalent in modern AI. The recent success of backpropagation demonstrates that biological faithfulness is not required for deep learning or 'thought' in a machine. Backpropagation has been used in medical imaging compression algorithms and in pharmacological modelling.
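A minimal worked example of backpropagation's error minimization, reduced to a single weight, is sketched below (pure Python; the numbers are illustrative only):

```python
# Backpropagation reduced to one weight: follow the error gradient
# downhill (pure Python; numbers are illustrative only).
w, x, target, lr = 0.0, 2.0, 1.0, 0.1

for step in range(20):
    y = w * x                  # forward pass: prediction
    error = y - target         # how wrong the prediction is
    grad = 2 * error * x       # d(error**2)/dw via the chain rule
    w -= lr * grad             # step against the gradient

print(w * x)                   # ~1.0: the error has been minimized
```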


Author(s):  
Mehmet Ali Şimşek ◽  
Zeynep Orman

Nowadays, the main features of Industry 4.0 are understood to be the ability of machines to communicate with each other and with a system, increased production efficiency, and the development of robots' decision-making mechanisms. These settings call for new analytical algorithms, and many challenging industrial problems in Industry 4.0 can be solved by using deep learning technologies. Deep learning provides algorithms that, owing to their hidden layers, can give better results on large datasets. In this chapter, the deep learning methods used in Industry 4.0 are examined and explained, along with the data sets, metrics, methods, and tools used in previous studies. This study can guide artificial intelligence research with high potential to accelerate the implementation of Industry 4.0, and the authors therefore believe it will be very useful for researchers and practitioners who want to do research on this topic.

