Tunable Neural Encoding of a Symbolic Robotic Manipulation Algorithm

2021 ◽  
Vol 15 ◽  
Author(s):  
Garrett E. Katz ◽  
Akshay ◽  
Gregory P. Davis ◽  
Rodolphe J. Gentili ◽  
James A. Reggia

We present a neurocomputational controller for robotic manipulation based on the recently developed “neural virtual machine” (NVM). The NVM is a purely neural recurrent architecture that emulates a Turing-complete, purely symbolic virtual machine. We program the NVM with a symbolic algorithm that solves blocks-world restacking problems and execute it in a robotic simulation environment. Our results show that the NVM-based controller can faithfully replicate the execution traces and performance levels of a traditional non-neural program executing the same restacking procedure. Moreover, after programming the NVM, the neurocomputational encodings of symbolic block stacking knowledge can be fine-tuned to further improve performance by applying reinforcement learning to the underlying neural architecture.
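To make the symbolic side of this concrete, the following is a minimal, self-contained sketch of one blocks-world restacking procedure (unstack everything onto the table, then rebuild the goal towers bottom-up). It is an illustrative algorithm in the same spirit as the paper's restacking procedure, not the actual program loaded into the NVM; all names and the state representation are assumptions.

```python
def restack(initial, goal):
    """Restack blocks from `initial` to `goal`.

    Both arguments map each block to what it rests on: 'table' or
    another block. Returns (moves, final_state), where each move is a
    (block, destination) pair.
    """
    state = dict(initial)
    moves = []

    def move(block, dest):
        state[block] = dest
        moves.append((block, dest))

    # Phase 1: clear every stack by moving blocks onto the table, top first.
    def send_to_table(block):
        for above in [b for b, under in state.items() if under == block]:
            send_to_table(above)
        if state[block] != 'table':
            move(block, 'table')

    for block in list(state):
        send_to_table(block)

    # Phase 2: rebuild each goal tower bottom-up.
    placed = set()

    def place(block):
        if block in placed:
            return
        dest = goal[block]
        if dest != 'table':
            place(dest)  # make sure the support is in position first
        if state[block] != dest:
            move(block, dest)
        placed.add(block)

    for block in goal:
        place(block)
    return moves, state
```

For example, restacking `{'A': 'B', 'B': 'table', 'C': 'A'}` into the tower A-on-B-on-C takes four moves with this naive strategy; smarter planners avoid the double handling of blocks that are already well placed.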

2015 ◽  
Vol 24 (03) ◽  
pp. 1541001 ◽  
Author(s):  
Johannes Wettinger ◽  
Uwe Breitenbücher ◽  
Frank Leymann

Leading paradigms to develop, deploy, and operate applications, such as continuous delivery, configuration management, and the merging of development and operations (DevOps), are the foundation for various techniques and tools that implement automated deployment. To make such applications available to users and customers, these approaches are typically used in conjunction with Cloud computing to automatically provision and manage underlying resources such as storage and virtual servers. A major class of these automation approaches follows the idea of converging toward a desired state of a resource (e.g. a middleware component deployed on a virtual machine). This is achieved by repeatedly executing idempotent scripts until the desired state is reached. Because this convergence-based approach has major drawbacks, we discuss an alternative deployment automation approach based on compensation and fine-grained snapshots using container virtualization. We perform an evaluation comparing both approaches in terms of difficulties at design time and performance at runtime. Moreover, we discuss concepts, strategies, and implementations to effectively combine different deployment automation approaches.
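A minimal sketch of the converge-to-desired-state idea behind configuration management tools (e.g. Chef or Puppet): each resource declares an idempotent check and an idempotent apply step, and the engine re-runs pending steps until every resource reports the desired state. The `Resource` class, `converge` loop, and the simulated "package installed" resource below are illustrative assumptions, not any real tool's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Resource:
    name: str
    is_converged: Callable[[], bool]  # already in the desired state?
    apply: Callable[[], None]         # idempotent: effectively a no-op once converged

def converge(resources, max_passes=5):
    """Re-run pending resources until all report converged (or give up)."""
    for _ in range(max_passes):
        pending = [r for r in resources if not r.is_converged()]
        if not pending:
            return True
        for r in pending:
            r.apply()
    return all(r.is_converged() for r in resources)

# Simulated resource: "package installed" state tracked in a plain dict.
machine = {'nginx_installed': False}
pkg = Resource(
    name='install-nginx',
    is_converged=lambda: machine['nginx_installed'],
    apply=lambda: machine.update(nginx_installed=True),
)
```

Running `converge([pkg])` a second time is safe precisely because the apply step is idempotent; the drawback the article targets is that undoing a partially converged deployment is hard, which motivates the compensation- and snapshot-based alternative.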


2014 ◽  
Vol 11 (1) ◽  
pp. 47-68 ◽  
Author(s):  
Patricia Conde ◽  
Francisco Ortin

Java 7 introduced the new invokedynamic opcode in the Java Virtual Machine. This new instruction allows the user to define method linkage at runtime. Once the link is established, the virtual machine performs its usual optimizations, providing better runtime performance than reflection. However, this feature has not been offered at the abstraction level of the Java programming language. Since the functionality of the new opcode is not provided as a library, the existing languages on the Java platform can only use it at the assembly level. For this reason, we have developed the JINDY library, which offers invokedynamic to any programming language on the Java platform. JINDY supports three modes of use, establishing a trade-off between runtime performance and flexibility. We present a runtime performance and memory consumption evaluation, analyzing the efficiency of JINDY compared to reflection, the MethodHandle class in Java 7, and the Dynalink library. The memory and performance costs relative to the invokedynamic opcode are also measured.


2020 ◽  
Author(s):  
Andrey De Aguiar Salvi ◽  
Rodrigo Coelho Barros

Recent research on Convolutional Neural Networks (CNNs) focuses on creating models with fewer parameters and a smaller storage size while preserving the model’s ability to perform its task, enabling the best CNNs to automate tasks on limited devices with constraints on processing power, memory, or energy consumption. The literature offers many different approaches: removing parameters (pruning), reducing floating-point precision (quantization), training smaller models that mimic larger ones (distillation), neural architecture search (NAS), etc. With all those possibilities, it is challenging to say which approach provides a better trade-off between model reduction and performance, due to the differences between the approaches, their respective models, the benchmark datasets, and variations in training details. Therefore, this article contributes to the literature by comparing three state-of-the-art model compression approaches for reducing a well-known convolutional object detector, namely YOLOv3. Our experimental analysis shows that, by pruning parameters, it is possible to create a reduced version of YOLOv3 with 90% fewer parameters that still outperforms the original model. We also create models that require only 0.43% of the original model’s inference effort.
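As a generic illustration of the "removing parameters" family, the sketch below zeroes out the smallest-magnitude fraction of a weight list. This shows magnitude pruning in the abstract; it is not the YOLOv3 pruning pipeline evaluated in the article, and the function name and tie-breaking rule are assumptions.

```python
def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the `sparsity` fraction of
    smallest-magnitude entries set to zero (ties broken by position)."""
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # Magnitude of the k-th smallest weight: everything at or below it
    # is a pruning candidate.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, out = 0, []
    for w in weights:
        if abs(w) <= threshold and pruned < k:
            out.append(0.0)
            pruned += 1
        else:
            out.append(w)
    return out
```

In a real network this mask would be applied per layer (or globally) to the convolution kernels, usually followed by fine-tuning to recover accuracy; the article's 90%-fewer-parameters result corresponds to a sparsity of 0.9 in these terms.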


2018 ◽  
Vol 22 (5) ◽  
pp. 1123-1140 ◽  
Author(s):  
KAZUYA SAITO ◽  
HUI SUN ◽  
ADAM TIERNEY

The current study examines the role of cognitive and perceptual individual differences (i.e., aptitude) in second language (L2) pronunciation learning when L2 learners’ varied experiential backgrounds are controlled for. A total of 48 Chinese learners of English in the UK were assessed for their sensitivity to segmental and suprasegmental aspects of speech, in both explicit and implicit modes, via behavioural (language/music aptitude tests) and neurophysiological (electroencephalography) measures. Subsequently, the participants’ aptitude profiles were compared to the segmental and suprasegmental dimensions of their L2 pronunciation proficiency, analyzed through rater judgements and acoustic measurements. According to the results, the participants’ segmental attainment was associated not only with explicit aptitude (phonemic coding) but also with implicit aptitude (enhanced neural encoding of spectral peaks). Whereas the participants’ suprasegmental attainment was linked to explicit aptitude (rhythmic imagery) to some degree, it was primarily influenced by the quality and quantity of their most recent L2 learning experience.


2021 ◽  
Author(s):  
Alison I. Weber ◽  
Thomas L. Daniel ◽  
Bingni W. Brunton

Animals rely on sensory feedback to generate accurate, reliable movements. In many flying insects, strain-sensitive neurons on the wings provide rapid feedback that enables stable flight control. While the impacts of wing structure on aerodynamic performance have been widely studied, the impacts of wing structure on sensing remain unexplored. In this paper, we show how the structural properties of the wing and encoding by mechanosensory neurons interact to jointly determine optimal sensing strategies and performance. Specifically, we examine how neural sensors can be placed effectively over a flapping wing to detect body rotation about different axes, using a computational wing model with varying flexural stiffness inspired by the hawkmoth Manduca sexta. A small set of mechanosensors, conveying strain information at key locations with a single action potential per wingbeat, permits accurate detection of body rotation. Optimal sensor locations are concentrated at either the wing base or the wing tip, and they transition sharply as a function of both wing stiffness and neural threshold. Moreover, the sensing strategy and performance are robust to both external disturbances and sensor loss. Typically, only five sensors are needed to achieve near-peak accuracy, with a single sensor often providing accuracy well above chance. Our results show that small-amplitude, dynamic signals can be extracted efficiently with spatially and temporally sparse sensors in the context of flight. The demonstrated interaction of wing structure and neural encoding properties points to the importance of their joint evolution.
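A toy sketch of the sensor-placement question: given binary spike readings (one spike or none per wingbeat) at candidate wing locations, greedily choose the sensors that best distinguish two rotation conditions. The synthetic data and pair-counting score below are assumptions for illustration; the paper uses a structural wing model and neural encoding of strain, not this scheme.

```python
def pairs_separated(sensors, class_a, class_b):
    """Count cross-class trial pairs whose spike patterns differ on `sensors`."""
    return sum(
        any(a[s] != b[s] for s in sensors)
        for a in class_a
        for b in class_b
    )

def greedy_place(candidates, class_a, class_b, n_sensors):
    """Pick `n_sensors` locations, each maximizing pairs separated so far."""
    chosen = []
    for _ in range(n_sensors):
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: pairs_separated(chosen + [c], class_a, class_b),
        )
        chosen.append(best)
    return chosen
```

With trials in which only one location differs systematically between the two rotation conditions, a single greedy pick already selects that location, mirroring the paper's finding that very few well-placed sensors can carry most of the discriminative information.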

