UPDATING THE MULTIDIMENSIONAL RELATIONSHIP TO INFORMATION IN SCHOOL ENVIRONMENT

Author(s):  
Maja Gržina Cergolj

The 21st century has brought many changes in how information is obtained and shared. The cyber era requires the ability to manage a range of skills as well as meaningful integration with the past. The present is, in effect, about managing multidimensional communication, and it demands lifelong training and readiness for change. Transliteracy, the ability to write, read and interact across a range of platforms, tools and media, allows the peculiarities of the cyber era to be fused with a sense of belonging to the past. It represents an upgrading of literacy and operates at the level of individual attention. It is anchored in deep learning, regardless of the sense through which the information flows. Multidimensionality provides a holistic view of information and a critical elimination of irrelevant issues. It is also a means by which teachers can achieve better results in the classroom, provided they channel attention appropriately. It does not mean an absolute transition into the information society, but rather the management of yesterday, today and tomorrow. It means collaboration, management and education for the literacy of the future.

Key words: cyber era, deep learning, lifelong learning, transliteracy.

AILA Review
2013
Vol 26
pp. 42-56
Author(s):
Li Wei
Zhu Hua

The nature of diaspora is changing in the 21st century. Yet many of the communication issues remain the same. At their heart is multilingual and intercultural communication across time and space. There is much that applied linguists can contribute to the understanding of diaspora in the era of globalization. This article discusses some of the core issues of communication between the diaspora and the homeland, the past and the present, the individual and the community, and the sense of belonging and the ascribed category, with a detailed analysis of empirical data collected through linguistic ethnography in the Chinese diaspora in Britain and elsewhere. It also highlights the significance of dynamic multilingualism in everyday communication.


2017
Vol 4 (3)
pp. 19
Author(s):  
Hiranya Nath

This article briefly discusses various definitions and concepts of the so-called information society. The term information society has been proposed to refer to the post-industrial society in which information plays a pivotal role. The definitions proposed over the years highlight five underlying characterisations of an information society: technological, economic, sociological, spatial, and cultural. This article discusses these characterisations. While the emergence of an information society may be just a figment of one's imagination, the concept could serve as a good organising principle for describing and analysing the changes of the past 50 years and of the future in the 21st century.


Author(s):  
James J. Coleman

At a time when the Union between Scotland and England is once again under the spotlight, Remembering the Past in Nineteenth-Century Scotland examines the way in which Scotland's national heroes were once remembered as champions of both Scottish and British patriotism. Whereas 19th-century Scotland is popularly depicted as a mire of sentimental Jacobitism and kow-towing unionism, this book shows how Scotland's national heroes were once the embodiment of a consistent, expressive and robust view of Scottish nationality. Whether celebrating the legacy of William Wallace and Robert Bruce, the reformer John Knox, or the Covenanters, 19th-century Scots rooted their national heroes in a Presbyterian and unionist view of Scotland's past. Examined through the prism of commemoration, this book uncovers collective memories of Scotland's past entirely opposed to 21st-century assumptions of medieval proto-nationalism and Calvinist misery.

- Provides detailed studies of 19th-century commemoration of Scotland's national heroes
- Uncovers an all but forgotten interpretation of these 'great Scots'
- Shines a new light on the mindset of 19th-century Scottish national identity as being comfortably Scottish and British
- Overturns the prevailing view of Victorian Scottishness as parochial, sentimental tartanry


2017
Vol 7 (2)
pp. 7-25
Author(s):  
Karolina Diallo

Pupil with Obsessive-Compulsive Disorder. Over the past twenty years, childhood OCD has received more attention than any other anxiety disorder that occurs in childhood. The growing interest and research in this area have led to an increasing number of OCD diagnoses in children and adolescents, which concerns specialists and teachers alike. Depending on the severity of symptoms, OCD can have a detrimental effect on a child's school performance, at worst making it almost impossible for the child to concentrate on school and the duties associated with it. This article is devoted to obsessive-compulsive disorder and its specifics in children, focusing on the impact of the disorder on the behaviour, experience and performance of the child in the school environment. It stresses the importance of the teacher in whose class a pupil with this diagnosis is placed, and points out the need to strengthen teachers' competence to identify children with OCD symptoms, to take the disorder into account, to adapt their teaching accordingly, and to introduce measures that help such children reduce anxiety and maintain (or improve) their school performance within, and in accordance with, school regulations and the curriculum.


2020
Vol 114
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed in meaning since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements of 1931, is now often referred to as deep learning or machine learning. AI is defined as a computing machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective through big data capturing the present and the past, while still inevitably carrying human biases into models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories, and can analyze historical and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence for prediction, autonomous intelligence for decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Sensors
2021
Vol 21 (9)
pp. 3046
Author(s):
Shervin Minaee
Mehdi Minaei
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it remains challenging due to high intra-class variation. Traditional approaches rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these methods perform reasonably well on datasets of images captured under controlled conditions, but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face, and we achieve significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique to find the important facial regions for detecting different emotions, based on the classifier's output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
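To make the attention idea concrete, here is a minimal PyTorch sketch, offered as an illustration only: it is not the authors' exact architecture, and the layer sizes, the 48x48 grayscale input, and the 7-class output (as in FER-2013) are assumptions. A learned spatial mask re-weights the feature maps so that the classifier attends to salient facial regions.

```python
# Minimal sketch of an attentional convolutional network (illustrative,
# not the authors' exact model). Assumes 48x48 grayscale faces and
# 7 emotion classes, as in FER-2013.
import torch
import torch.nn as nn

class AttentionalCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Feature extractor: two small conv blocks.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Attention branch: a 1-channel spatial mask in [0, 1].
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)        # (B, 64, 12, 12)
        mask = self.attention(f)    # (B, 1, 12, 12), broadcast over channels
        f = f * mask                # emphasize important facial regions
        f = f.mean(dim=(2, 3))      # global average pooling -> (B, 64)
        return self.classifier(f)

# Example: logits = AttentionalCNN()(torch.randn(8, 1, 48, 48))
```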


2021
Vol 54 (1)
pp. 21-33
Author(s):
Julie Berg
Clifford Shearing

The 40th Anniversary Edition of Taylor, Walton and Young's New Criminology, published in 2013, opened with these words: 'The New Criminology was written at a particular time and place, it was a product of 1968 and its aftermath; a world turned upside down'. We are at a similar moment today. Several developments have turned, and are turning, our 21st-century world upside down. Among the most profound are the emergence of a new earth, which the 'Anthropocene' references, and of 'cyberspace', a term first used in the 1960s; James Lovelock has recently termed this emerging world, which includes both human and artificial intelligences, a 'Novacene'. We live today on an earth that is proving to be very different from the Holocene earth, our home for the past 12,000 years. To appreciate the Novacene, one need only think of our 'smart' phones. This world constitutes a novel domain of existence that Castells has conceived of as a terrain of 'material arrangements that allow for simultaneity of social practices without territorial contiguity' – a world of sprawling material infrastructures that has enabled a 'space of flows' through which massive amounts of information travel. Like the Anthropocene, the Novacene has brought with it novel 'harmscapes', for example attacks on energy systems. In this paper, we consider how criminology has responded to the harmscapes brought on by these new worlds. We identify emerging 'lines of flight', as these challenges are met by criminological thinkers who are developing the conceptual trajectories that are shaping 21st-century criminologies.


2021
Vol 7 (5)
pp. 89
Author(s):
George K. Sidiropoulos
Polixeni Kiratsa
Petros Chatzipetrou
George A. Papakostas

This paper provides a brief review of the feature extraction methods applied to finger vein recognition. The study is designed in a systematic way in order to shed light on the scientific interest in biometric systems based on finger vein features. The analysis spans a period of 13 years (from 2008 to 2020). The examined feature extraction algorithms are clustered into five categories and presented in a qualitative manner, focusing mainly on the techniques applied to represent the features of the finger veins that uniquely establish a person's identity. In addition, the case of non-handcrafted features learned within a deep learning framework is also examined. The literature analysis revealed increasing interest in finger vein biometric systems, as well as a high diversity of feature extraction methods proposed over the years examined. In the final year of that period, however, interest shifted to the application of Convolutional Neural Networks, following the general trend of applying deep learning models across a range of disciplines. Finally, and importantly, this work highlights the limitations of existing feature extraction methods and describes the research actions needed to address the identified challenges.
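As a hedged illustration of the non-handcrafted route the review examines, the sketch below uses a small CNN as a learned feature extractor whose L2-normalized embedding serves as the vein template; the architecture and the 128-dimensional embedding are assumptions for illustration, not a method drawn from the surveyed papers.

```python
# Illustrative sketch only: a small CNN that maps a finger-vein image to an
# embedding used for identity comparison. The architecture and the 128-D
# embedding size are assumptions, not taken from the surveyed literature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VeinEmbedder(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x).flatten(1)          # (B, 32)
        # L2-normalized embeddings let two images be compared by cosine
        # similarity to decide whether they show the same person's veins.
        return F.normalize(self.proj(z), dim=1)  # (B, 128)

# Example: model = VeinEmbedder()
#          sim = (model(img_a) * model(img_b)).sum(dim=1)  # cosine similarity
```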


Author(s):  
Ruofan Liao
Paravee Maneejuk
Songsak Sriboonchitta

In the past, in many areas, the best prediction models were linear and nonlinear parametric models. In the last decade, in many application areas, deep learning has been shown to lead to more accurate predictions than these parametric models. Deep learning-based predictions are reasonably accurate, but not perfect. How can we achieve better accuracy? To achieve this objective, we propose to combine neural networks with a parametric model: namely, to train neural networks not on the original data, but on the differences between the actual data and the predictions of the parametric model. Using the example of predicting currency exchange rates, we show that this idea indeed leads to more accurate predictions.
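A minimal sketch of this residual-combination idea, under stated assumptions: a synthetic series stands in for the exchange-rate data, scikit-learn's LinearRegression plays the role of the parametric model, and a small MLP is trained only on the residuals the parametric model leaves behind.

```python
# Sketch of the hybrid idea: parametric model + neural network on residuals.
# The synthetic series and model choices are illustrative assumptions,
# not the authors' exact exchange-rate setup.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
y = 0.01 * t + 0.5 * np.sin(t / 5.0) + rng.normal(0, 0.05, t.size)  # toy series
X = ((t - t.mean()) / t.std()).reshape(-1, 1)    # standardized regressor

linear = LinearRegression().fit(X, y)            # parametric part (trend)
residuals = y - linear.predict(X)                # what the linear model misses

# The neural network learns only the residual (nonlinear) structure.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X, residuals)

y_hat = linear.predict(X) + mlp.predict(X)       # combined prediction
print("RMSE, linear only:", np.sqrt(np.mean((y - linear.predict(X)) ** 2)))
print("RMSE, combined:  ", np.sqrt(np.mean((y - y_hat) ** 2)))
```

The design point is that the parametric model absorbs the structure it captures well, leaving the network a simpler, lower-variance learning problem.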


Author(s):  
Carlos Lassance
Vincent Gripon
Antonio Ortega

For the past few years, deep learning (DL) robustness (i.e. the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training on noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers of the architecture) in the distance between examples of different classes, and as such enforces smooth variation of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised-learning vision datasets for various types of perturbations. We also show that it can be combined with existing methods to further increase overall robustness.
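A minimal PyTorch sketch of such a regularizer, assuming per-layer representations are collected during the forward pass; the cosine-similarity graph and the penalty on changes in the label signal's Laplacian smoothness between consecutive layers follow the idea described above, though the paper's exact graph construction may differ.

```python
# Sketch of a Laplacian-based regularizer over per-layer representations.
# Graph construction (cosine similarity) is an assumption for illustration.
import torch
import torch.nn.functional as F

def label_smoothness(feats: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    """Smoothness tr(Y^T L Y) of the one-hot label signal Y on a
    similarity graph built from one layer's representations."""
    x = F.normalize(feats.flatten(1), dim=1)   # (B, D), unit rows
    W = torch.relu(x @ x.t())                  # cosine-similarity weights
    W = W * (1.0 - torch.eye(W.size(0), device=W.device))  # no self-loops
    L = torch.diag(W.sum(dim=1)) - W           # combinatorial Laplacian
    return torch.trace(Y.t() @ L @ Y)

def laplacian_regularizer(layer_feats, labels, num_classes: int):
    """Penalize large changes in smoothness across consecutive layers,
    encouraging class boundaries to vary smoothly through the network."""
    Y = F.one_hot(labels, num_classes).float()
    s = [label_smoothness(f, Y) for f in layer_feats]
    return sum(torch.abs(s[i + 1] - s[i]) for i in range(len(s) - 1))

# Example: total_loss = task_loss + lam * laplacian_regularizer(feats, y, 10)
```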

