Rethinking Learning: What the Interdisciplinary Science Tells Us

2021 ◽  
pp. 0013189X2110472
Author(s):  
Na’ilah Suad Nasir ◽  
Carol D. Lee ◽  
Roy Pea ◽  
Maxine McKinney de Royston

Theories of learning developed in education and psychology for the past 100 years are woefully inadequate to support the design of schools and classrooms that foster deep learning and equity. Needed is learning theory that can guide us in creating schools and classrooms where deep learning occurs, where learners’ full selves are engaged, and that disrupt existing patterns of inequality and oppression. In this article, we build on recent research in education, neuroscience, psychology, and anthropology to articulate a theory of learning that has the potential to move us toward that goal. We elaborate four key principles of learning: (1) learning is rooted in evolutionary, biological, and neurological systems; (2) learning is integrated with other developmental processes whereby the whole child (emotion, identity, cognition) must be taken into account; (3) learning is shaped in culturally organized practice across people’s lives; and (4) learning is experienced as embodied and coordinated through social interaction. Taken together, these principles help us understand learning in a way that foregrounds the range of community and cultural experiences people have throughout the life course and across the multiple settings of life and accounts for learning as set within systems of injustice.

2015 ◽  
Vol 20 (3) ◽  
pp. 190-203 ◽  
Author(s):  
Ernesto Panadero ◽  
Sanna Järvelä

Abstract. Socially shared regulation of learning (SSRL) has been recognized as a new and growing field within self-regulated learning theory over the past decade. In the present review, we examine the empirical evidence for this phenomenon. A total of 17 articles addressing SSRL were identified, 13 of which presented empirical evidence. Through a narrative review, we conclude that there is enough data to support the existence of SSRL as distinct from other forms of social regulation (e.g., co-regulation). Most SSRL research has focused on characterizing the phenomenon through mixed methods built on qualitative data, mostly video-recorded observations. SSRL also appears to contribute to students’ performance. Finally, the article discusses the need for the field to move forward by exploring the conditions that best promote SSRL, clarifying whether SSRL is always the optimal form of collaboration, and identifying further aspects of group characteristics.


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose origins are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now commonly referred to as deep learning or machine learning. AI is defined as a computing machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories, and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3046
Author(s):  
Shervin Minaee ◽  
Mehdi Minaei ◽  
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique to find the facial regions that are important for detecting different emotions, based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
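The core idea of an attentional network of this kind — weighting spatial regions of a convolutional feature map before classification — can be illustrated with a minimal numpy sketch. This is not the authors' architecture: the feature map and the 1×1 attention weights here are random stand-ins for learned parameters, used only to show how a spatial softmax mask redirects the pooled descriptor toward "important" locations.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical feature map from a convolutional backbone: C channels on an H x W grid.
C, H, W = 16, 7, 7
features = rng.standard_normal((C, H, W))

# Spatial attention: a 1x1 convolution (here a random weight vector standing in
# for a learned one) scores each location, and a softmax turns the scores into
# a mask that sums to 1 over the grid.
w_att = rng.standard_normal(C)
scores = np.tensordot(w_att, features, axes=([0], [0]))   # (H, W) location scores
attention = softmax(scores.ravel()).reshape(H, W)

# Attention-weighted pooling: highly scored regions dominate the pooled
# descriptor that an expression classifier would then consume.
pooled = (features * attention).sum(axis=(1, 2))          # (C,) descriptor
```

Visualizing `attention` as an H×W heat map is essentially the kind of technique the abstract mentions for locating the facial regions driving each emotion.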


2021 ◽  
Vol 7 (5) ◽  
pp. 89
Author(s):  
George K. Sidiropoulos ◽  
Polixeni Kiratsa ◽  
Petros Chatzipetrou ◽  
George A. Papakostas

This paper aims to provide a brief review of the feature extraction methods applied for finger vein recognition. The study is designed in a systematic way in order to shed light on the scientific interest in biometric systems based on finger vein features. The analysis spans a period of 13 years (from 2008 to 2020). The examined feature extraction algorithms are clustered into five categories and are presented in a qualitative manner, focusing mainly on the techniques applied to represent the features of the finger veins that uniquely prove a human’s identity. In addition, the case of non-handcrafted features learned in a deep learning framework is also examined. The literature analysis revealed increased interest in finger vein biometric systems as well as a high diversity of feature extraction methods proposed over the past several years. In the last year, however, this interest shifted to the application of Convolutional Neural Networks, following the general trend of applying deep learning models across disciplines. Finally, and importantly, this work highlights the limitations of the existing feature extraction methods and describes the research actions needed to face the identified challenges.


Author(s):  
Ruofan Liao ◽  
Paravee Maneejuk ◽  
Songsak Sriboonchitta

In the past, in many areas, the best prediction models were linear and nonlinear parametric models. In the last decade, deep learning has been shown to produce more accurate predictions than parametric models in many application areas. Deep learning-based predictions are reasonably accurate, but not perfect. How can we achieve better accuracy? To achieve this objective, we propose to combine neural networks with a parametric model: namely, to train a neural network not on the original data, but on the differences between the actual data and the predictions of the parametric model. Using the example of predicting currency exchange rates, we show that this idea indeed leads to more accurate predictions.
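The combination scheme described here — fit a parametric model, then train a network on its residuals, then add the two predictions — can be sketched in a few lines of numpy. This is a toy illustration, not the paper's experiment: the "exchange rate" is synthetic, the parametric model is a plain linear trend, and the network is a single hidden layer with fixed random weights whose output weights are solved in closed form (an extreme-learning-machine-style stand-in for full gradient training).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "exchange rate": linear trend + nonlinear seasonal wiggle + noise.
t = np.linspace(0.0, 1.0, 200)
y = 1.5 * t + 0.3 * np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(t.size)

# Step 1: fit the parametric (here: linear) model on the original data.
slope, intercept = np.polyfit(t, y, 1)
y_param = slope * t + intercept

# Step 2: train a network on the *residuals*, not on the raw series.
residual = y - y_param
W = rng.standard_normal((1, 50)) * 4.0      # fixed random input-to-hidden weights
b = rng.uniform(-2.0, 2.0, 50)              # fixed random hidden biases
H = np.tanh(t.reshape(-1, 1) @ W + b)       # hidden activations
out_w, *_ = np.linalg.lstsq(H, residual, rcond=None)  # fit output weights only

# Step 3: combined forecast = parametric prediction + learned residual.
y_hat = y_param + H @ out_w

mse_param = np.mean((y - y_param) ** 2)
mse_combined = np.mean((y - y_hat) ** 2)
```

On this synthetic series the combined model's error drops well below the parametric model's, because the network only has to capture the structure the linear fit leaves behind.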


Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training on noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers of the architecture) in the distance between examples of different classes, and as such enforces smooth variation of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised learning vision datasets under various types of perturbations. We also show that it can be combined with existing methods to increase overall robustness.
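The quantity being regularized can be made concrete with a small numpy sketch: build a Gaussian similarity graph over one layer's representations, measure the smoothness of the class-indicator signals via the graph Laplacian, and penalize how much that smoothness changes between consecutive layers. The random matrices below stand in for real layer activations, and the Gaussian-kernel graph and combinatorial Laplacian are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def graph_smoothness(X, labels, sigma=1.0):
    """Smoothness of class-indicator signals on the similarity graph of X.

    X: (n, d) layer representations; labels: (n,) integer classes.
    Returns sum_c u_c^T L u_c, where L is the Laplacian of a Gaussian
    similarity graph and u_c is the indicator vector of class c.  Since
    u^T L u = 0.5 * sum_ij W_ij (u_i - u_j)^2, this totals the similarity
    mass on edges that cross class boundaries.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    W = np.exp(-d2 / (2.0 * sigma**2))                    # Gaussian similarities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                        # combinatorial Laplacian
    return sum(float(u @ L @ u)
               for c in np.unique(labels)
               for u in [(labels == c).astype(float)])

def laplacian_regularizer(layer_reps, labels):
    """Penalize large changes in smoothness across consecutive layers."""
    s = [graph_smoothness(X, labels) for X in layer_reps]
    return sum(abs(s[i + 1] - s[i]) for i in range(len(s) - 1))

rng = np.random.default_rng(0)
labels = np.array([0] * 10 + [1] * 10)
layers = [rng.standard_normal((20, 8)) for _ in range(3)]  # stand-in activations
penalty = laplacian_regularizer(layers, labels)
```

In training, a term like `penalty` would be added to the classification loss so that class boundaries, as seen through each layer's similarity graph, evolve gradually from layer to layer.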


Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 52
Author(s):  
Thomas Lee ◽  
Susan Mckeever ◽  
Jane Courtney

With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. However, while much work in the area focuses on specific tasks in drone navigation, the contribution to the overall goal of autonomy is often not assessed, and a comprehensive overview is needed. In this work, a taxonomy of drone navigation autonomy is established by mapping the vehicular autonomy levels defined by the Society of Automotive Engineers to specific drone tasks, in order to create a clear definition of autonomy when applied to drones. A top-down examination of research work in the area is conducted, focusing on drone navigation tasks, in order to understand the extent of research activity in each area. Autonomy levels are cross-checked against the drone navigation tasks addressed in each work to provide a framework for understanding the trajectory of current research. This work serves as a guide to research in drone autonomy, with a particular focus on Deep Learning-based solutions, indicating key works and areas of opportunity for future development of this area.


2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

Effective productivity estimates of freshly produced crops are essential for efficient farming, commercial planning, and logistical support. In the past ten years, machine learning (ML) algorithms have been widely used for grading and classification of agricultural products in the agriculture sector. However, precise and accurate assessment of the maturity level of tomatoes using ML algorithms is still quite challenging, because these algorithms rely on hand-crafted features. Hence, in this paper we propose a deep learning-based tomato maturity grading system that helps to increase the accuracy and adaptability of maturity grading tasks with less training data. The performance of the proposed system is assessed on real tomato datasets collected from open fields using a Nikon D3500 CCD camera. The proposed approach achieved an average maturity classification accuracy of 99.8%, which is quite promising in comparison to other state-of-the-art methods.


2022 ◽  
pp. 1-27
Author(s):  
Clifford Bohm ◽  
Douglas Kirkpatrick ◽  
Arend Hintze

Abstract Deep learning (primarily using backpropagation) and neuroevolution are the preeminent methods of optimizing artificial neural networks. However, they often create black boxes that are as hard to understand as the natural brains they seek to mimic. Previous work has identified an information-theoretic tool, referred to as R, which allows us to quantify and identify mental representations in artificial cognitive systems. The use of such measures has allowed us to make previous black boxes more transparent. Here we extend R to not only identify where complex computational systems store memory about their environment but also to differentiate between different time points in the past. We show how this extended measure can identify the location of memory related to past experiences in neural networks optimized by deep learning as well as a genetic algorithm.

