Temporal State Machines: Using Temporal Memory to Stitch Time-based Graph Computations

2021 ◽  
Vol 17 (3) ◽  
pp. 1-27
Author(s):  
Advait Madhavan ◽  
Matthew W. Daniels ◽  
Mark D. Stiles

Race logic, an arrival-time-coded logic family, has demonstrated energy and performance improvements for applications ranging from dynamic programming to machine learning. However, the various ad hoc mappings of algorithms into hardware rely on researcher ingenuity and result in custom architectures that are difficult to systematize. We propose to associate race logic with the mathematical field of tropical algebra, enabling a more methodical approach toward building temporal circuits. This association between the mathematical primitives of tropical algebra and generalized race logic computations guides the design of temporally coded tropical circuits. It also serves as a framework for expressing high-level timing-based algorithms. This abstraction, when combined with temporal memory, allows for the systematic exploration of race logic–based temporal architectures by making it possible to partition feed-forward computations into stages and organize them into a state machine. We leverage analog memristor-based temporal memories to design such a state machine that operates purely on time-coded wavefronts. We implement a version of Dijkstra’s algorithm to evaluate this temporal state machine. This demonstration shows the promise of expanding the expressibility of temporal computing to enable it to deliver significant energy and throughput advantages.
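To make the association concrete: in the tropical (min-plus) semiring, "addition" is min, corresponding to the first wavefront to arrive, and "multiplication" is ordinary addition, corresponding to accumulated delay. The following minimal sketch is a plain software analogue, not the paper's memristor hardware; it shows how repeated min-plus matrix products yield the same shortest-path distances that Dijkstra's algorithm computes.

```python
import math

def min_plus_product(A, B):
    """Min-plus (tropical) matrix product: C[i][j] = min_k (A[i][k] + B[k][j]).
    Tropical addition is min, tropical multiplication is ordinary +."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

INF = math.inf
# Adjacency matrix of a small directed graph; INF marks missing edges,
# 0 on the diagonal (the tropical multiplicative identity for paths).
W = [[0,   2,   INF],
     [INF, 0,   3],
     [7,   INF, 0]]

# Repeated tropical squaring converges to all-pairs shortest paths,
# the same quantity Dijkstra's algorithm computes per source node.
D = W
for _ in range(2):  # two squarings are ample for a 3-node graph
    D = min_plus_product(D, D)

print(D[0][2])  # shortest 0→2 distance: 2 + 3 = 5
```

Each squaring doubles the maximum path length considered, loosely mirroring how the paper partitions a feed-forward computation into stages that a state machine iterates.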

Water ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 1443 ◽  
Author(s):  
Amir Nafi ◽  
Jonathan Brans

This paper deals with the development of a decision-aiding model for predicting, in an ex-ante way, the effects of a mix of actions on an asset and on its operation. The objective is then to define a compromise policy between costs and performance improvements. We investigate the use of multiple regression analysis (MRA) and an artificial neural network (ANN) to establish causal relationships between the network efficiency rate and a set of explanatory variables on the one hand, and potential water loss management actions such as leak detection, maintenance, and asset renewal on the other. The originality of our approach lies in developing a two-step ex-ante model for predicting the efficiency rate that involves low- and high-level explanatory variables in a context where data are unavailable at the scale of the water utility. The first step exploits a national French database, «SISPEA» (Système d'Information sur les Services Publics d'Eau et d'Assainissement), to calibrate a general prediction model that establishes a correlation between efficiency (output) and other performance indicators (inputs). The second step involves the utility manager in building a causal model between endogenous and exogenous variables of a specific water network (low level) and the performance indicators considered as inputs to the previous step (high level). Uncertainty is taken into account by Monte Carlo simulations. An application of our decision model to a water utility in the southeast of France is provided as a case study.
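The Monte Carlo treatment of uncertainty can be sketched as follows. All coefficients, variable names, and distributions below are invented for illustration; they stand in for the calibrated regression model and the utility-specific inputs.

```python
import random
import statistics

# Hypothetical calibrated linear model: predicted network efficiency as a
# function of two illustrative explanatory variables. Coefficients are
# invented, not taken from the paper.
def predict_efficiency(leak_detection_rate, renewal_rate):
    return 0.60 + 0.25 * leak_detection_rate + 0.10 * renewal_rate

random.seed(42)
samples = []
for _ in range(10_000):
    # Uncertain inputs drawn from assumed Gaussian distributions
    leak = random.gauss(0.5, 0.05)
    renewal = random.gauss(0.3, 0.02)
    samples.append(predict_efficiency(leak, renewal))

# The sample spread quantifies how input uncertainty propagates to the
# predicted efficiency rate.
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
```

The same loop works unchanged if the linear model is replaced by a trained ANN, which is the appeal of Monte Carlo propagation here.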


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 49-54 ◽  
Author(s):  
E. Todd Ryan ◽  
Andrew J. McKerrow ◽  
Jihperng Leu ◽  
Paul S. Ho

Continuing improvement in device density and performance has significantly affected the dimensions and complexity of the wiring structure for on-chip interconnects. These enhancements have led to a reduction in the wiring pitch and an increase in the number of wiring levels to fulfill demands for density and performance improvements. As device dimensions shrink to less than 0.25 μm, the propagation delay, crosstalk noise, and power dissipation due to resistance-capacitance (RC) coupling become significant. Accordingly, the interconnect delay now constitutes a major fraction of the total delay limiting the overall chip performance. Equally important is the processing complexity due to an increase in the number of wiring levels. This inevitably drives cost up by lowering the manufacturing yield due to an increase in defects and processing complexity. To address these problems, new materials for use as metal lines and interlayer dielectrics (ILDs) and alternative architectures have surfaced to replace the current Al(Cu)/SiO2 interconnect technology. These alternative architectures will require the introduction of low-dielectric-constant (low-k) materials as the interlayer dielectrics and/or low-resistivity conductors such as copper. The electrical and thermomechanical properties of SiO2 are ideal for ILD applications, and a change to a material with different properties has important process-integration implications. To facilitate the choice of an alternative ILD, it is necessary to establish general criteria for evaluating the thin-film properties of candidate low-k materials, which can later be correlated with process-integration problems.
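The RC-coupling argument can be illustrated with back-of-the-envelope arithmetic: for fixed geometry, interconnect delay scales with the product of line resistivity and ILD dielectric constant, so moving from Al/SiO2 to Cu plus a low-k dielectric reduces delay. The material constants below are approximate textbook values, not figures from this article.

```python
# RC delay ∝ resistivity × dielectric constant (geometry factor cancels
# when comparing material stacks at the same dimensions).
rho_al, rho_cu = 3.0, 1.7   # resistivity in μΩ·cm (approximate; Al(Cu) alloy vs Cu)
k_sio2, k_lowk = 3.9, 2.7   # relative dielectric constants (SiO2 vs an example low-k)

baseline = rho_al * k_sio2  # Al(Cu)/SiO2 stack
improved = rho_cu * k_lowk  # Cu/low-k stack

print(improved / baseline)  # ≈ 0.39, i.e. roughly 2.5× lower RC delay
```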


2020 ◽  
Vol 12 (2) ◽  
pp. 19-50 ◽  
Author(s):  
Muhammad Siddique ◽  
Shandana Shoaib ◽  
Zahoor Jan

A key aspect of work processes in service sector firms is the interconnection between tasks and performance. Relational coordination can play an important role in addressing the issues of coordinating organizational activities, given the high level of interdependence complexity in service sector firms. Research has primarily supported the view that well-devised high performance work systems (HPWS) can intensify organizational performance. There is a growing debate, however, with regard to understanding the "mechanism" linking HPWS and performance outcomes. Using relational coordination theory, this study examines a model of the effects of subsets of HPWS, such as motivation-, skill-, and opportunity-enhancing HR practices, on relational coordination among employees working in reciprocally interdependent job settings. Data were gathered from multiple sources, including managers and employees at the individual, functional, and unit levels, to capture their understanding of HPWS and relational coordination (RC) in 218 bank branches in Pakistan. Data were analysed via structural equation modelling; the results suggest that HPWS predicted RC among officers at the unit level. The findings of the study contribute to both theory and practice.


Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robotic interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation (Low-Level Automation (LLA) and High-Level Automation (HLA)). Results indicated a significant difference in hit rate between the low and high levels of control when a permanent error occurred. In the LLA group, the type of error had a significant effect on the hit rate. In general, the high level of automation performed better than the low level of automation, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automatic implementation to perform the task more effectively and more accurately.


Author(s):  
Mark O Sullivan ◽  
Carl T Woods ◽  
James Vaughan ◽  
Keith Davids

As it is appreciated that learning is a non-linear process – implying that coaching methodologies in sport should be accommodative – it is reasonable to suggest that player development pathways should also account for this non-linearity. A constraints-led approach (CLA), predicated on the theory of ecological dynamics, has been suggested as a viable framework for capturing the non-linearity of learning, development and performance in sport. The CLA articulates how skills emerge through the interaction of different constraints (task-environment-performer). However, despite its well-established theoretical roots, there are challenges to implementing it in practice. Accordingly, to help practitioners navigate such challenges, this paper proposes a user-friendly framework that demonstrates the benefits of a CLA. Specifically, to conceptualize the non-linear and individualized nature of learning, and how it can inform player development, we apply Adolph’s notion of learning IN development to explain the fundamental ideas of a CLA. We then exemplify a learning IN development framework, based on a CLA, brought to life in a high-level youth football organization. We contend that this framework can provide a novel approach for presenting the key ideas of a CLA and its powerful pedagogic concepts to practitioners at all levels, informing coach education programs, player development frameworks and learning environment designs in sport.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1342
Author(s):  
Borja Nogales ◽  
Miguel Silva ◽  
Ivan Vidal ◽  
Miguel Luís ◽  
Francisco Valera ◽  
...  

5G communications have become an enabler for the creation of new and more complex networking scenarios, bringing together different vertical ecosystems. Such behavior has been fostered by the network function virtualization (NFV) concept, whose orchestration and virtualization capabilities make it possible to supply network resources dynamically, according to need. Nevertheless, the integration and performance of heterogeneous network environments, each one supported by a different provider and with specific characteristics and requirements, in a single NFV framework is not straightforward. In this work we propose an NFV-based framework capable of supporting the flexible, cost-effective deployment of vertical services through the integration of two distinct mobile environments and their networks: small-sized unmanned aerial vehicles (SUAVs), supporting a flying ad hoc network (FANET), and vehicles, promoting a vehicular ad hoc network (VANET). In this context, a use case involving the public safety vertical is used as an illustrative example to showcase the potential of this framework. This work also includes the technical implementation details of the proposed framework, allowing us to analyse and discuss the delays in the network service deployment process. The results show that deployment times can be significantly reduced through a distributed VNF configuration function based on the publish–subscribe model.
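The role of the publish–subscribe model in the configuration step can be sketched with a minimal in-memory broker. Class, topic, and instance names below are illustrative, not taken from the framework itself; the point is that a single publish fans a configuration out to every subscribed VNF instance, instead of the orchestrator configuring each instance sequentially.

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish–subscribe broker (illustrative sketch)."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a subscriber callback for a topic.
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Deliver one message to every subscriber of the topic.
        for cb in self._subs[topic]:
            cb(message)

broker = Broker()
configured = []

# Two hypothetical VNF instances (one per SUAV) subscribe to a FANET
# configuration topic.
broker.subscribe("fanet/config", lambda cfg: configured.append(("suav-1", cfg)))
broker.subscribe("fanet/config", lambda cfg: configured.append(("suav-2", cfg)))

# One publish configures both instances at once.
broker.publish("fanet/config", {"routing": "olsr"})
```

In a real deployment the broker would be a network service (e.g. an MQTT-style bus) rather than an in-process object, but the fan-out pattern, and hence the delay reduction relative to one-by-one configuration, is the same.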


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Peter Baumann ◽  
Dimitar Misev ◽  
Vlad Merticariu ◽  
Bang Pham Huu

Multi-dimensional arrays (also known as raster data or gridded data) play a key role in many, if not all, science and engineering domains, where they typically represent spatio-temporal sensor, image, simulation-output, or statistics "datacubes". As classic database technology does not support arrays adequately, such data today are maintained mostly in silo solutions, with architectures that tend to erode and not keep up with the increasing requirements on performance and service quality. Array Database systems attempt to close this gap by providing declarative query support for flexible ad hoc analytics on large n-D arrays, similar to what SQL offers on set-oriented data, XQuery on hierarchical data, and SPARQL and Cypher on graph data. Today, Petascale Array Database installations exist, employing massive parallelism and distributed processing. Hence, questions arise about the technology and standards available, usability, and overall maturity. Several papers have compared models and formalisms, and benchmarks have been undertaken as well, typically comparing two systems against each other. While each of these represents valuable research, to the best of our knowledge there is no comprehensive survey combining model, query language, architecture, practical usability, and performance aspects. The size of this comparison also differentiates our study: 19 systems are compared, four of them benchmarked, to an extent and depth clearly exceeding previous papers in the field; for example, the subsetting tests were designed in a way that systems cannot be tuned specifically to these queries. It is hoped that this gives a representative overview to all who want to immerse themselves in the field, as well as clear guidance to those who need to choose the best-suited datacube tool for their application. This article presents results of the Research Data Alliance (RDA) Array Database Assessment Working Group (ADA:WG), a subgroup of the Big Data Interest Group. It has elicited the state of the art in Array Databases, technically supported by IEEE GRSS and CODATA Germany, to answer the question: how can data scientists and engineers benefit from Array Database technology? As it turns out, Array Databases can offer significant advantages in terms of flexibility, functionality, extensibility, performance, and scalability; in total, the database approach of offering analysis-ready "datacubes" heralds a new level of service quality. Investigation shows that there is a lively ecosystem of technology with increasing uptake, and proven array analytics standards are in place. Consequently, such approaches have to be considered a serious option for datacube services in science, engineering, and beyond. Tools, though, vary greatly in functionality and performance.
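As a toy illustration of what an Array DBMS exposes declaratively, the two core datacube operations, subsetting ("trim") and cell-wise aggregation, can be written imperatively over a nested-list 3-D array. Axis names and data are invented for illustration; a real system would accept the equivalent as a single declarative query and optimize it.

```python
# A small 3-D "datacube" (time × lat × lon) as nested lists, with
# synthetic cell values t + y + x.
cube = [[[t + y + x for x in range(4)] for y in range(3)] for t in range(2)]

def trim(cube, t_range, y_range, x_range):
    """Subsetting ("trim") on all three axes, the core array operation."""
    t0, t1 = t_range
    y0, y1 = y_range
    x0, x1 = x_range
    return [[[cube[t][y][x] for x in range(x0, x1)]
             for y in range(y0, y1)]
            for t in range(t0, t1)]

def avg_cells(subcube):
    """Cell-wise aggregate (mean) over a whole (sub)cube."""
    cells = [v for plane in subcube for row in plane for v in row]
    return sum(cells) / len(cells)

sub = trim(cube, (0, 2), (1, 3), (0, 2))
print(avg_cells(sub))  # → 2.5
```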


Automation ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 48-61
Author(s):  
Bhavyansh Mishra ◽  
Robert Griffin ◽  
Hakki Erhan Sevil

Visual simultaneous localization and mapping (VSLAM) is an essential technique used in areas such as robotics and augmented reality for pose estimation and 3D mapping. Research on VSLAM using both monocular and stereo cameras has grown significantly over the last two decades. There is, therefore, a need for a comprehensive review of the evolving architecture of such algorithms in the literature. Although VSLAM algorithm pipelines share similar mathematical backbones, their implementations are individualized, and the ad hoc nature of the interfacing between different modules of VSLAM pipelines complicates code reusability and maintenance. This paper presents a software model for the core components of VSLAM implementations and the interfaces that govern data flow between them, while also attempting to preserve the elements that offer performance improvements over the evolution of VSLAM architectures. The framework presented in this paper employs principles from model-driven engineering (MDE), which are used extensively in the development of large and complicated software systems. The presented VSLAM framework will assist researchers in improving the performance of individual VSLAM modules without having to spend time on integrating those modules into complete VSLAM pipelines.
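The interface-driven decomposition that an MDE approach encourages can be sketched as follows: each VSLAM stage implements a narrow interface, so one module, say the tracking front end, can be swapped without touching the rest of the pipeline. Class and method names here are illustrative assumptions, not the paper's actual API.

```python
from abc import ABC, abstractmethod
from typing import Any

class FrontEnd(ABC):
    """Per-frame tracking stage of the pipeline."""

    @abstractmethod
    def track(self, frame: Any) -> dict:
        """Estimate a relative pose and landmarks from one camera frame."""

class BackEnd(ABC):
    """Map/trajectory optimization stage."""

    @abstractmethod
    def optimize(self, measurement: dict) -> dict:
        """Refine the estimate produced by the front end."""

class Pipeline:
    """Wires modules together purely through their interfaces."""

    def __init__(self, front: FrontEnd, back: BackEnd):
        self.front, self.back = front, back

    def process(self, frame):
        return self.back.optimize(self.front.track(frame))

# Trivial stand-in implementations, just to show the modules composing.
class IdentityFrontEnd(FrontEnd):
    def track(self, frame):
        return {"pose": frame, "landmarks": []}

class PassThroughBackEnd(BackEnd):
    def optimize(self, measurement):
        return measurement

result = Pipeline(IdentityFrontEnd(), PassThroughBackEnd()).process((0.0, 0.0))
```

Replacing `IdentityFrontEnd` with a real feature tracker requires no change to `Pipeline` or the back end, which is the reuse property the paper's software model targets.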


Author(s):  
Xiaomo Jiang ◽  
Craig Foster

Gas turbine simple- or combined-cycle plants are built and operated with higher availability, reliability, and performance in order to provide the customer with sufficient operating revenues and reduced fuel costs while enhancing dispatch competitiveness. A tremendous amount of operational data is collected from the everyday operation of a power plant. How to turn this data into knowledge, and further into solutions, by developing advanced state-of-the-art analytics has become an increasingly important but challenging issue. This paper presents an integrated system and methodology for this purpose, automating multi-level, multi-paradigm, multi-facet performance monitoring and anomaly detection for heavy-duty gas turbines. The system provides an intelligent platform to drive site-specific performance improvements, mitigate outage risk, rationalize operational patterns, and enhance maintenance schedules and service offerings by taking appropriate proactive actions. In addition, the paper presents the components of the system, including data sensing, hardware and operational anomaly detection, proactive actions informed by company expertise, site-specific degradation assessment, and water wash effectiveness monitoring and analytics. As demonstrated in two examples, this remote performance monitoring aims to improve equipment efficiency by converting data into knowledge and solutions, in order to drive value for customers, including lower operating fuel costs and increased power sales and life-cycle value.
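One simple paradigm such a multi-paradigm monitoring system might include is statistical anomaly detection on individual sensor channels, sketched below with a z-score test. The sensor series, threshold, and channel name are invented for illustration and are not taken from the paper.

```python
import statistics

def detect_anomalies(readings, threshold=2.5):
    """Return indices of readings whose z-score against the series mean
    exceeds the threshold. A single extreme outlier inflates the sample
    stdev, so a modest threshold is used in this illustration."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if stdev > 0 and abs(r - mean) / stdev > threshold]

# Invented exhaust-gas-temperature-like series (°C) with one spike at index 5.
egt = [520.1, 519.8, 520.4, 520.0, 519.9, 560.0, 520.2, 519.7, 520.3, 520.1]
print(detect_anomalies(egt))  # → [5]
```

A production system would use rolling baselines, per-unit degradation models, and domain rules on top of such a check; this only illustrates the lowest layer.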

