Guest Editor’s Introduction: Software Mathematical Models for Human Understanding

Author(s):  
Iaakov Exman

The unrelenting growth in the size of software systems and their data has made software comprehensibility an increasingly difficult problem. Nevertheless, a tacit consensus that human understanding of software is essential for most software-related activities has stimulated software developers to embed comprehensibility in their systems’ design. On the other hand, recent empirical successes of deep learning neural networks in several application areas seem to challenge this tacit consensus: is software comprehensibility a necessity, or simply superfluous? This introductory paper to the 2020 special issue on Theoretical Software Engineering offers reasons justifying our standpoint on this controversy. It also points out specific techniques, relevant to this issue’s papers, that enable human understanding of software systems.

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Elisabeth Gibert-Sotelo ◽  
Isabel Pujol Payet

Abstract The interest in morphology and its interaction with the other grammatical components has increased in the last twenty years, with new approaches coming onto the scene to provide more accurate analyses of the processes involved in morphological construal. This special issue is a valuable contribution to this field of study. It gathers a selection of five papers from the Morphology and Syntax workshop (University of Girona, July 2017) which, on the basis of Romance and Latin phenomena, discuss word structure and its decomposition into hierarchies of features. Even though the papers share a compositional view of lexical items, they adopt different formal theoretical approaches to the lexicon-syntax interface, thus showing the benefit of bearing in mind the possibilities that each framework provides. This introductory paper serves as a guide for the readers of this special collection and offers an overview of the topics dealt with in each contribution.


Author(s):  
G. Touya ◽  
F. Brisebard ◽  
F. Quinton ◽  
A. Courtial

Abstract. Visually impaired people cannot use classical maps but can learn to use tactile relief maps. These tactile maps are crucial at school, enabling students to learn geography and history just as their sighted peers do. They are produced manually by professional transcriptors in a long and costly process. A platform able to generate tactile maps from maps scanned from geography textbooks could be extremely useful to these transcriptors by speeding up their production. As a first step towards such a platform, this paper proposes a method to infer the scale and content of a map from its image. We used convolutional neural networks trained with a few hundred maps from French geography textbooks; the results are promising both for inferring labels about the content of the map (e.g. ”there are roads, cities and administrative boundaries”) and for inferring the extent of the map (e.g. a map of France or of Europe).
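
As an illustration of the approach, the following is a minimal sketch of a multi-label CNN classifier of the kind described: each content label (roads, cities, boundaries) is an independent yes/no prediction, hence the sigmoid output and binary cross-entropy loss. The label set, input size, and layer sizes are assumptions for illustration, not the paper’s actual architecture.

```python
# Hypothetical sketch of a multi-label map-content classifier; label names,
# input size, and architecture are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

CONTENT_LABELS = ["roads", "cities", "admin_boundaries"]  # assumed label set

def build_map_classifier(input_shape=(256, 256, 3), num_labels=len(CONTENT_LABELS)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        # Sigmoid, not softmax: each content label is an independent yes/no.
        layers.Dense(num_labels, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["binary_accuracy"])
    return model

model = build_map_classifier()
model.summary()
```

A separate softmax head (or a second model of the same shape) would handle the mutually exclusive map-extent labels such as “France” versus “Europe”.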


2021 ◽  
Vol 19 (3) ◽  
pp. 262-268
Author(s):  
Marcus Morse ◽  
Sean Blenkinsop ◽  
Bob Jickling

This introductory paper begins by summarizing the premises of this special issue on “Wilding Educational Policy.” That is, first, currently normalized educational practices are not adequate for these times of extraordinary social and ecological upheaval. Second, an important way forward will be to problematize modernist tendencies to control discourse and practice in education in ways that tend to “domesticate” educational possibilities. We then describe how the papers in this collection are framed around two emergent thematic arcs. One arc is directly aimed at initiating conversations with and amongst policy-makers. The other illustrates how authors have expanded their understanding of the premises of this issue and how “wilding” can be interpreted in different cultural settings. These papers all add to a growing body of literature that builds on experiments and musings in “wild pedagogies.”


Author(s):  
Vani Rajasekar ◽  
K Venu ◽  
Soumya Ranjan Jena ◽  
R. Janani Varthini ◽  
S. Ishwarya

Agriculture is a vital part of every country’s economy, and India is regarded as an agro-based nation. One of the main goals of agriculture is to yield healthy crops free of disease. Cotton is a significant crop in India in terms of income, and India is the world’s largest producer of cotton. Cotton crops suffer when leaves fall off early or become afflicted with diseases. Farmers and planting experts have faced numerous concerns and ongoing agricultural obstacles for millennia, including many cotton diseases. Because severe cotton disease can result in no harvest at all, a rapid, efficient, inexpensive and reliable approach for detecting cotton diseases is widely needed in the agricultural information field. A deep learning method is used to address the issue, since such methods perform exceptionally well on image processing and classification problems. The network combines the benefits of a ResNet pre-trained on ImageNet with an Xception component, and this technique outperforms other state-of-the-art techniques. Every convolution layer within the dense block is small, so each convolution kernel remains responsible for learning the finest details. Deep convolutional neural networks for the detection of plant leaf diseases typically start from a model pre-trained on large general-purpose datasets and then fine-tune it on task-specific data. The experimental results show that ResNet-50 obtains a training accuracy of 0.95 and a validation accuracy of 0.98, with a training loss of 0.33 and a validation loss of 0.5.
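
The following is a hedged sketch of the transfer-learning step described above: a ResNet-50 backbone pre-trained on ImageNet with a new classification head, fine-tuned on task-specific images. The class count, image size, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal transfer-learning sketch: ImageNet-pre-trained ResNet-50 backbone
# plus a new head. NUM_CLASSES and all hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 4  # hypothetical: e.g. healthy plus three cotton leaf diseases

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained backbone; unfreeze later to fine-tune

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone first and fine-tuning only the head is the usual way to adapt a model pre-trained on a large general-purpose dataset to a small task-specific one.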


Symmetry ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 564 ◽  
Author(s):  
Thanh Vo ◽  
Trang Nguyen ◽  
C. Le

Race recognition (RR), which has many applications in surveillance systems, image/video understanding, analysis, etc., is a difficult problem to solve completely. To contribute towards solving that problem, this article investigates the use of a deep learning model. An efficient Race Recognition Framework (RRF) is proposed that includes information collector (IC), face detection and preprocessing (FD&P), and RR modules. For the RR module, this study proposes two independent models. The first is RR using a deep convolutional neural network (CNN) (the RR-CNN model). The second (the RR-VGG model) is a fine-tuning model for RR based on VGG, the well-known trained model for object recognition. To examine the performance of the proposed framework, we perform an experiment on our dataset, named VNFaces, composed specifically of images collected from Facebook pages of Vietnamese people, comparing the accuracy of RR-CNN and RR-VGG. The experimental results show that for the VNFaces dataset, the RR-VGG model with augmented input images yields the best accuracy at 88.87%, while RR-CNN, an independent and lightweight model, yields 88.64% accuracy. Extension experiments prove that the proposed models can be applied to other race recognition datasets, such as Japanese, Chinese, or Brazilian faces, with over 90% accuracy; the fine-tuned RR-VGG model achieved the best accuracy and is recommended for most scenarios.
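
A minimal sketch of a fine-tuning model in the spirit of RR-VGG follows: a VGG16 backbone pre-trained on ImageNet with its last convolutional block unfrozen, light input augmentation, and a new softmax head. The class count and layer sizes are assumptions; the paper’s exact configuration may differ.

```python
# Hedged sketch of a VGG16 fine-tuning model in the spirit of RR-VGG.
# NUM_RACES, augmentation, and head sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_RACES = 4  # hypothetical class count

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
# Freeze early blocks; leave the last convolutional block trainable.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.RandomFlip("horizontal"),   # simple input augmentation (TF 2.6+)
    layers.RandomRotation(0.05),
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_RACES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```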


Author(s):  
Vishal Shah ◽  
Neha Sajnani

In recent years, machine learning has come to play a vital role in our everyday lives: it can help us find a route somewhere, discover things we were not aware of, or schedule appointments in seconds. On the other side of the coin, mobile phones are evolving and competing in the same field. Taking an optimistic view, by applying machine learning on our mobile devices we can make our lives better and even move society forward. Image classification is one of the most common and trending topics in machine learning. Among the different types of deep learning models, convolutional neural networks (CNNs), which are built from multiple processing layers that learn representations of data at many levels of abstraction, have shown high performance on image classification and are among the best AI models of recent years. Here, we have trained a simple CNN, run experiments on the Fashion-MNIST and Flower Recognition datasets, and analyzed techniques for integrating the trained model into the Android platform.
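
As a concrete illustration, here is a minimal sketch of such a pipeline: a simple CNN trained on Fashion-MNIST, then converted to TensorFlow Lite, the usual route for bundling a trained model into an Android app. The architecture and epoch count are illustrative, not the authors’ exact setup.

```python
# Sketch: simple CNN on Fashion-MNIST, exported to TensorFlow Lite for Android.
# Architecture and training settings are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel axis, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # 10 Fashion-MNIST classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Convert to a .tflite file that an Android app can bundle and run on-device.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("fashion_mnist.tflite", "wb") as f:
    f.write(tflite_model)
```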


REGION ◽  
2019 ◽  
Vol 6 (2) ◽  
pp. E1-E6 ◽  
Author(s):  
David Castells ◽  
Paula Herrera

In this introductory paper, we present (i) some basic figures about the rise of cities in the developing world, and (ii) the four papers of this special issue. This paper and the other four in the issue are intended to bring to the forefront the reality of cities of the developing world in the 21st century, hoping to motivate further and much-needed research.


Author(s):  
Wencan Zhong ◽  
Vijayalakshmi G. V. Mahesh ◽  
Alex Noel Joseph Raj ◽  
Nersisson Ruban

Finding faces in cluttered scenes is a challenging task in automatic face recognition systems, as facial images are subject to changes in illumination, facial expression, orientation, and occlusion. Moreover, in cluttered scenes faces are not completely visible, and detecting them is essential, since it is significant in surveillance applications for studying the mood of a crowd. This chapter utilizes deep learning methods to understand cluttered scenes, find the faces, and discriminate between partial and full faces. The work shows that using MTCNN to detect the faces and Zernike-moments-based kernels in a CNN to classify faces as partial or full delivers notable performance compared with other techniques. Given the limitations of recognizing emotions on partial faces, only the full faces are preserved; further, the KDEF dataset is processed by MTCNN to detect only faces and classify them into four emotions. PatternNet is utilized to train and test the modified dataset to improve the accuracy of the results.
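
The face-detection stage can be sketched with the open-source `mtcnn` package (pip install mtcnn); the chapter’s own MTCNN pipeline and the downstream Zernike-moment classifier may differ, and the image filename here is hypothetical.

```python
# Hedged sketch of the MTCNN face-detection stage using the `mtcnn` package;
# the chapter's own pipeline may differ. "crowd_scene.jpg" is hypothetical.
import cv2
from mtcnn import MTCNN

detector = MTCNN()
# mtcnn expects RGB; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("crowd_scene.jpg"), cv2.COLOR_BGR2RGB)

faces = detector.detect_faces(image)  # list of {'box', 'confidence', 'keypoints'}
crops = []
for face in faces:
    x, y, w, h = face["box"]
    x, y = max(0, x), max(0, y)  # boxes can slightly overshoot the image edge
    crops.append(image[y:y + h, x:x + w])  # cropped face for the partial/full classifier

print(f"Detected {len(crops)} face(s)")
```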


Author(s):  
Bhavana D. ◽  
K. Chaitanya Krishna ◽  
Tejaswini K. ◽  
N. Venkata Vikas ◽  
A. N. V. Sahithya

The task of an image caption generator is mainly to extract the features and activities of an image and to generate human-readable captions that describe the objects in the image. Describing the contents of an image requires knowledge of both natural language processing and computer vision. The features can be extracted using convolutional neural networks, making use of transfer learning to implement the Xception model. Xception stands for “extreme inception”; it has a feature-extraction base with 36 convolution layers and shows accurate results compared with other CNNs. Recurrent neural networks are used for describing the image and generating accurate sentences. The feature vector extracted by the CNN is fed to an LSTM. The Flickr8k dataset, in which the data is properly labeled, is used to train the network. Given an input image, the model is able to generate accurate captions that closely describe the activities in the image. Further, the authors use BLEU scores to validate the model.
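
A minimal sketch of the CNN-encoder / LSTM-decoder wiring described above follows: a pre-extracted Xception feature vector is merged with an embedded partial caption to predict the next word (the common “merge” captioning architecture). Vocabulary size, caption length, and layer sizes are assumptions, not the authors’ exact values.

```python
# Sketch of a merge-style captioning model: CNN features + partial caption
# -> next word. Vocabulary size and lengths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 8000   # hypothetical vocabulary size for Flickr8k captions
MAX_LEN = 34        # hypothetical maximum caption length
FEATURE_DIM = 2048  # Xception's global-average-pooled feature size

# Image branch: pre-extracted CNN feature vector.
img_in = layers.Input(shape=(FEATURE_DIM,))
img_vec = layers.Dense(256, activation="relu")(layers.Dropout(0.5)(img_in))

# Text branch: the caption generated so far.
txt_in = layers.Input(shape=(MAX_LEN,))
txt_emb = layers.Embedding(VOCAB_SIZE, 256, mask_zero=True)(txt_in)
txt_vec = layers.LSTM(256)(layers.Dropout(0.5)(txt_emb))

# Merge both branches and predict the next word of the caption.
merged = layers.add([img_vec, txt_vec])
hidden = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

model = models.Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```

At inference time the decoder is run word by word, feeding each predicted word back into the text branch; generated captions can then be scored against reference captions with BLEU.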

