Computer code comprehension shares neural resources with formal logical inference in the fronto-parietal network

eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Yun-Fei Liu ◽  
Judy Kim ◽  
Colin Wilson ◽  
Marina Bedny

Despite the importance of programming to modern society, the cognitive and neural bases of code comprehension are largely unknown. Programming languages might ‘recycle’ neurocognitive mechanisms originally developed for natural languages. Alternatively, comprehension of code could depend on fronto-parietal networks shared with other culturally invented symbol systems, such as formal logic and symbolic math (e.g., algebra). Expert programmers (average 11 years of programming experience) performed code comprehension and memory control tasks while undergoing fMRI. The same participants also performed formal logic, symbolic math, executive control, and language localizer tasks. A left-lateralized fronto-parietal network was recruited for code comprehension. Patterns of activity within this network distinguished between ‘for’ loops and ‘if’ conditionals. In terms of the underlying neural basis, code comprehension overlapped extensively with formal logic and, to a lesser degree, with math. Overlap with executive processes and language was low, but the laterality of language and code covaried across individuals. Cultural symbol systems, including code, depend on a distinctive fronto-parietal cortical network.
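
For readers unfamiliar with the two constructs contrasted in the decoding analysis, the snippet below is a generic Python illustration of a ‘for’ loop versus an ‘if’ conditional; it is not taken from the study's actual stimuli.

```python
# Generic illustration (not the study's stimuli) of the two control-flow
# constructs contrasted in the decoding analysis.

def sum_first_n(numbers, n):
    """'for' loop: iterate over the first n elements and accumulate a sum."""
    total = 0
    for value in numbers[:n]:
        total += value
    return total

def sign_label(x):
    """'if' conditional: branch on the value of x."""
    if x > 0:
        return "positive"
    elif x < 0:
        return "negative"
    return "zero"

print(sum_first_n([3, 1, 4, 1, 5], 3))  # 8
print(sign_label(-2))                   # negative
```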


Author(s):  
Xiaoqing Wu ◽  
Marjan Mernik ◽  
Barrett R. Bryant ◽  
Jeff Gray

Unlike natural languages, programming languages are strictly stylized entities created to facilitate human communication with computers. To make programming languages recognizable by computers, one of the key challenges is to describe and implement language syntax and semantics such that the program can be translated into machine-readable code. This process is normally considered the front-end of a compiler, which is concerned mainly with the programming language rather than the target machine. This article addresses the most important aspects of building a compiler front-end, namely syntax and semantic analysis, including related theories, technologies, and tools, as well as existing problems and future trends. As the main focus, formal syntax and semantic specifications are discussed in detail. The article provides the reader with a high-level overview of the language implementation process, as well as some commonly used terms and development practices.
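
As a rough illustration of what the syntax-analysis part of a compiler front-end does (a minimal sketch, not code from the article), the Python snippet below tokenizes and parses simple arithmetic expressions with a recursive-descent parser; the grammar and function names are assumptions chosen for brevity.

```python
# Minimal front-end sketch (illustrative only): lexical analysis plus a
# recursive-descent parser that turns "1 + 2 * 3" into an abstract syntax tree.
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(text):
    """Lexical analysis: split the input string into number and operator tokens."""
    tokens = []
    for number, other in TOKEN.findall(text.strip()):
        tokens.append(("NUM", int(number)) if number else ("OP", other))
    return tokens

def parse_expr(tokens, pos=0):
    """expr := term (('+' | '-') term)*  -- lowest-precedence level."""
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in (("OP", "+"), ("OP", "-")):
        op = tokens[pos][1]
        right, pos = parse_term(tokens, pos + 1)
        node = (op, node, right)
    return node, pos

def parse_term(tokens, pos):
    """term := atom (('*' | '/') atom)*  -- binds tighter than '+' and '-'."""
    node, pos = parse_atom(tokens, pos)
    while pos < len(tokens) and tokens[pos] in (("OP", "*"), ("OP", "/")):
        op = tokens[pos][1]
        right, pos = parse_atom(tokens, pos + 1)
        node = (op, node, right)
    return node, pos

def parse_atom(tokens, pos):
    """atom := NUM  -- report a syntax error for anything else."""
    kind, value = tokens[pos]
    if kind != "NUM":
        raise SyntaxError(f"expected a number at position {pos}, got {value!r}")
    return value, pos + 1

tree, _ = parse_expr(tokenize("1 + 2 * 3"))
print(tree)  # ('+', 1, ('*', 2, 3))
```

A full front-end would then run semantic analysis (for example, name resolution and type checking) over the resulting tree before handing it to the back-end.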


1999 ◽  
Vol 22 (4) ◽  
pp. 577-660 ◽  
Author(s):  
Lawrence W. Barsalou

Prior to the twentieth century, theories of knowledge were inherently perceptual. Since then, developments in logic, statistics, and programming languages have inspired amodal theories that rest on principles fundamentally different from those underlying perception. In addition, perceptual approaches have become widely viewed as untenable because they are assumed to implement recording systems, not conceptual systems. A perceptual theory of knowledge is developed here in the context of current cognitive science and neuroscience. During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components – not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations. Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems. Implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.


Systems ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 8 ◽  
Author(s):  
Gene M. Alarcon ◽  
Charles Walter ◽  
Anthony M. Gibson ◽  
Rose F. Gamble ◽  
August Capiola ◽  
...  

Automation and autonomous systems are quickly becoming a more engrained aspect of modern society. The need for effective, secure computer code in a timely manner has led to the creation of automated code repair techniques that resolve issues quickly. However, the research to date has largely ignored the human factors aspects of automated code repair. The current study explored trust perceptions, reuse intentions, and trust intentions in code repair with human-generated patches versus automated code repair patches. In addition, the presence or absence of comments in the code header was manipulated to determine its effect. Participants were 51 programmers with at least 3 years’ experience and knowledge of the C programming language. Results indicated that only repair source (human vs. automated code repair) had a significant influence on trust perceptions and trust intentions. Specifically, participants consistently reported higher levels of perceived trustworthiness, intentions to reuse, and trust intentions for human referents compared to automated code repair. No significant effects were found for comments in the headers.


2018 ◽  
Vol 67 (5) ◽  
pp. 1
Author(s):  
Oleksandr M. Romanukha

This article deals with updating the principles of creating e-textbooks via a graphical interface. Close attention must be paid to the transformation of modern society, in which a new generation of people uses computers, smartphones, and tablets not only as working tools but also as means of discovering the world. The graphical interface is therefore considered a code for understanding the information environment. It is emphasised that the spread of information technologies and of methods for transmitting and processing information has become an integral element of human thinking and perception of the world. Getting most of their information through the Internet, modern people perceive, process, and memorize it according to the principles of the interface and of programming languages. In this regard, the graphical interface is seen as the foundation of e-textbook visualization. The article presents a model description of the e-textbook “History of Ukraine” visualized as a cube. Each tier of the cube describes the cultural stratum of an epoch and shows the general dynamics of historical development. Each plane of the cube analyses the content of a particular problem. Studying each part, students successively open the horizontal strata of the cube and see the topics of the epoch represented by the graphical interface. Every topic contains a visualized scheme with hyperlinked dates and surnames rather than traditional hyperlinked text. The advantage of such an e-textbook structure is that it raises students’ cognitive activity through new principles of visualizing the educational material. The e-textbook interface is intuitive; it can be updated and used to gain insight into selected topics and questions. It provides means of activating the resources of the human higher nervous system, taking into account the individual features of the students and of the topics they are studying. Attention is drawn to the fact that scientific progress has been made possible largely thanks to the improvement of semiotics, that is, the development of our language, especially such branches as the language of symbolic logic, rather than to improvements in brain function.


2021 ◽  
Vol 10 (34) ◽  
Author(s):  
I.V. Abramova ◽  
T.V. Richter

This article is devoted to the formation of professional competencies in future programmers. The relevance of the research lies in the fact that information technologies are used in all spheres of life in modern society; specialists who can develop and code algorithms for working with information are therefore in great demand. Algorithms form the basis of any information protection process and of labor efficiency calculations, so it becomes important to form in future IT specialists the competencies related to the ability to program using various programming languages and methods, drawing on the main data types and data structures from the everyday practice of programmers. The following competencies serve as indicators of the effectiveness of the methods for forming professional competencies: knowledge of modern trends in the development of tools and software; theoretical knowledge and practical skills that allow one to build an algorithm, analyze its behavior with different input data, and implement it using modern programming languages; and the ability to use high-level programming languages, professional programming systems, and tools for solving applied professional problems in the information sphere of an enterprise. The article considers both traditional methods of teaching programming and methods developed by the authors.


2020 ◽  
Vol 2020 (8) ◽  
pp. 309-1-309-6
Author(s):  
Xunyu Pan ◽  
Colin Crowe ◽  
Toby Myers ◽  
Emily Jetton

Mobile devices typically support input from virtual keyboards or pen-based technologies, allowing handwriting to be a potentially viable text input solution for programming on touchscreen devices. The major problem, however, is that handwriting recognition systems are built to take advantage of the rules of natural languages rather than programming languages. In addition, mobile devices are inherently restricted by the limitations of screen size and the inconvenience of a virtual keyboard. In this work, we create a novel handwriting-to-code transformation system on a mobile platform to recognize and analyze source code written directly on a whiteboard or a piece of paper. First, the system recognizes the handwritten source code and compiles it into an executable program. Second, a friendly graphical user interface (GUI) is provided to visualize how manipulating different sections of code impacts the program output. Finally, the coding system supports an automatic error detection and correction mechanism to help address common syntax and spelling errors during whiteboard coding. The mobile application provides a flexible and user-friendly solution for real-time handwriting-based programming for learners in environments where keyboard or touchscreen input is not preferred.
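
As a rough sketch of the kind of pipeline the abstract describes (recognition, correction, then compilation and execution), the Python outline below strings the stages together; every function name here is a hypothetical placeholder rather than the authors' actual implementation, and the recognition stage is stubbed out.

```python
# Hypothetical pipeline sketch (not the authors' code): recognize handwritten
# source code from an image, apply a correction pass, then compile and run it.
import subprocess
import sys
import tempfile

def recognize_handwriting(image_path: str) -> str:
    """Placeholder for the handwriting-recognition stage (e.g., an OCR model
    tuned for programming-language tokens rather than natural-language words)."""
    raise NotImplementedError("plug in a handwriting/OCR model here")

def correct_common_errors(source: str) -> str:
    """Placeholder for automatic error correction, e.g., fixing confusable
    glyphs such as 'O' vs '0'; the single rule below is purely illustrative."""
    return source.replace("—", "-")

def compile_and_run(source: str) -> str:
    """Write the recognized code to a temporary file and execute it."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    # Skip the recognition stub and feed in already-recognized text for the demo.
    code = correct_common_errors('print("hello from the whiteboard")')
    print(compile_and_run(code))
```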


Author(s):  
Mark Richard

Scope is a notion used by logicians and linguists in describing artificial and natural languages. It is best introduced in terms of the languages of formal logic. Consider a particular occurrence of an operator in a sentence – say, that of ‘→’ in (1) below, or that of the universal quantifier ‘∀’ in (2) below.

(1) A → (B & C)
(2) ∀x(Bxy → ∃yAxy)

Speaking intuitively, the scope of the operator is that part of the sentence which it governs. The scope of ‘→’ in (1) is the whole sentence; this renders the whole sentence a conditional. The scope of ‘&’, on the other hand, is just ‘(B & C)’. In (2), the scope of the quantifier ‘∀’ is the whole sentence, which allows it to bind every occurrence of x. The scope of ‘∃’ is only ‘∃yAxy’. Since ‘Bxy’ is outside its scope, the ‘y’ in ‘Bxy’ is left unbound.
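
As a programming-language analogy (an illustration added here, not part of the entry), the scope of an operator corresponds to the subtree it heads in a parse tree. In the Python expression below, `or` governs only "(B or C)" while `and` governs the whole expression, mirroring the scopes of ‘&’ and ‘→’ in sentence (1).

```python
# Analogy: operator scope as parse-tree subtrees (requires Python 3.9+ for
# ast.dump's indent parameter).
import ast

tree = ast.parse("A and (B or C)", mode="eval")
print(ast.dump(tree.body, indent=2))
# Prints a BoolOp(op=And(), ...) whose second operand is the nested
# BoolOp(op=Or(), ...) covering only B and C.
```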

