Ag2Se Nanowire Network as an Effective In-Materio Reservoir Computing Device

Author(s):  
Takumi Kotooka ◽  
Sam Lilak ◽  
Adam Stieg ◽  
James Gimzewski ◽  
Naoyuki Sugiyama ◽  
...  

Abstract: Modern applications of artificial intelligence (AI) are generally algorithmic in nature and implemented using either general-purpose or application-specific hardware systems that have high power requirements. In the present study, physical (in-materio) reservoir computing (RC) implemented in hardware was explored as an alternative to software-based AI. The device, made up of a random, highly interconnected network of nonlinear Ag2Se nanojunctions, demonstrated the requisite characteristics of an in-materio reservoir, including but not limited to nonlinear switching, memory, and higher harmonic generation. As a hardware reservoir, the devices successfully performed waveform generation tasks, where tasks conducted at elevated network temperatures were found to be more stable than those conducted at room temperature. Finally, a comparison of voice classification, with and without the network device, showed that classification performance increased in the presence of the network device.
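As a rough software analogue of the reservoir-computing scheme described above, the sketch below drives a fixed random nonlinear network with a sine input and trains only a linear readout (via ridge regression) to reproduce a target waveform. The physical Ag2Se device is not modeled here; the network size, spectral radius, and the sine-to-square task are illustrative assumptions.

```python
import numpy as np

# Minimal software analogue of an in-materio reservoir: a fixed random
# nonlinear network provides high-dimensional states, and only a linear
# readout is trained (here with ridge regression). The paper's reservoir
# is a physical Ag2Se nanowire network; this sketch only illustrates the
# readout/training scheme, not the device itself.

rng = np.random.default_rng(0)
N = 200                                  # number of reservoir nodes (assumed)
W_in = rng.uniform(-0.5, 0.5, size=N)    # input weights
W = rng.normal(0, 1, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius < 1

T = 2000
t = np.arange(T)
u = np.sin(2 * np.pi * t / 50)           # input: sine wave
y_target = np.sign(u)                    # waveform generation: sine -> square

# Drive the reservoir and collect states.
x = np.zeros(N)
states = np.zeros((T, N))
for k in range(T):
    x = np.tanh(W @ x + W_in * u[k])     # nonlinear node update
    states[k] = x

# Train the linear readout with ridge regression (discard a washout period).
washout, lam = 200, 1e-6
X, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y)

y_pred = states @ W_out
nmse = np.mean((y_pred[washout:] - Y) ** 2) / np.var(Y)
print(f"waveform generation NMSE: {nmse:.4f}")
```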

2021 ◽  
Vol 60 (SC) ◽  
pp. SCCF02
Author(s):  
Hadiyawarman ◽  
Yuki Usami ◽  
Takumi Kotooka ◽  
Saman Azhari ◽  
Masanori Eguchi ◽  
...  

Author(s):  
Mário Pereira Vestias

High-performance reconfigurable computing systems integrate reconfigurable technology into the computing architecture to improve performance. Besides performance, reconfigurable hardware devices also achieve lower power consumption than general-purpose processors. Even better performance and lower power consumption could be achieved with application-specific integrated circuit (ASIC) technology; however, ASICs are not reconfigurable, which makes them application-specific. Reconfigurable logic becomes a major advantage when hardware flexibility allows the same hardware module to speed up whatever application is running. The first and most common devices used for reconfigurable computing are fine-grained FPGAs, which offer great hardware flexibility. To reduce the performance and area overhead associated with reconfigurability, coarse-grained reconfigurable solutions have been proposed as a way to achieve better performance and lower power consumption. In this chapter, the authors provide a description of reconfigurable hardware for high-performance computing.


Author(s):  
Michael Leventhal ◽  
Eric Lemoine

The XML chip is now more than six years old. The diffusion of this technology has been very limited, due, on the one hand, to the long period of evolutionary development needed to build hardware capable of accelerating a significant portion of the XML computing workload and, on the other hand, to the fact that the chip was invented by the start-up Tarari in a commercial context that required, for business reasons, a minimum of public disclosure of its design features. It remains, nevertheless, a significant landmark that the XML chip has been sold and continuously improved for the last six years. From the perspective of general computing history, the XML chip is an uncommon example of a successful workload-specific symbolic computing device. With respect to the specific interests of the XML community, the XML chip is a remarkable validation of one of its core founding principles: normalizing on a data format, whatever its imperfections, would eventually enable developers to create tools to process it efficiently. This paper was prepared for the International Symposium on Processing XML Efficiently: Overcoming Limits on Space, Time, or Bandwidth, a day of discussion among, predominantly, software developers working in the area of efficient XML processing. The Symposium is being held as a workshop within Balisage, a conference of specialists in markup theory. Given the interests of the audience, this paper does not delve into the design features and principles of the chip itself; rather, it presents a dialectic on the motivation for developing an XML chip in view of related and potentially competing developments: scaling as commonly characterized by Moore's Law, parallelization through increasing the number of computing cores on general-purpose processors (multicore von Neumann architecture), and optimization of software.


2005 ◽  
Vol 17 (1) ◽  
pp. 3-10 ◽  
Author(s):  
Makoto Oya

Modeling is the key to software design, from large information systems to embedded software. Without well-considered software models, the resulting implementation becomes inconsistent or drifts from the original requirements. A model is created using a modeling language. UML is a standardized general-purpose modeling language widely used in enterprise systems design. Because it is a very large language, UML is not always appropriate for designing small software. Designers also often want to describe models differently based on the immediate need, preferring a simple, application-specific, yet flexible notation over the rigidity of UML. We propose a metamodeling language, called sMML, for defining custom modeling languages, which enables designers to define a suitable modeling language on demand and then write actual models using it. sMML is a metamodeling language small enough to define a variety of modeling languages, self-contained and independent of other modeling languages, and aligned with UML. After completely defining sMML, we present experimental results of applying sMML, taking a simple modeling language and UML as examples, which demonstrate that sMML is useful for flexible modeling and capable of defining a wide range of modeling languages.
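Since sMML's notation is not reproduced here, the following hypothetical Python sketch only illustrates the underlying metamodeling idea: a metamodel declares which node and edge kinds a modeling language allows, a concrete application-specific language is an instance of that metamodel, and models are checked for conformance against it. All class and kind names are invented for illustration and are not sMML constructs.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of metamodeling (not sMML's actual notation):
# a metamodel declares the node and edge kinds a modeling language allows,
# and concrete models are validated against that definition.

@dataclass
class MetaModel:
    node_kinds: set[str]
    edge_kinds: dict[str, tuple[str, str]]   # edge kind -> (source kind, target kind)

@dataclass
class Model:
    nodes: dict[str, str] = field(default_factory=dict)              # node name -> kind
    edges: list[tuple[str, str, str]] = field(default_factory=list)  # (kind, src, dst)

    def conforms_to(self, mm: MetaModel) -> bool:
        for kind in self.nodes.values():
            if kind not in mm.node_kinds:
                return False
        for kind, src, dst in self.edges:
            if kind not in mm.edge_kinds:
                return False
            src_kind, dst_kind = mm.edge_kinds[kind]
            if self.nodes.get(src) != src_kind or self.nodes.get(dst) != dst_kind:
                return False
        return True

# Define a tiny application-specific modeling language ...
statechart = MetaModel(
    node_kinds={"State", "Initial"},
    edge_kinds={"transition": ("State", "State"), "start": ("Initial", "State")},
)

# ... and write a model in it.
m = Model()
m.nodes = {"init": "Initial", "idle": "State", "busy": "State"}
m.edges = [("start", "init", "idle"), ("transition", "idle", "busy")]
print(m.conforms_to(statechart))   # True
```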


2017 ◽  
Vol 2017 ◽  
pp. 1-12
Author(s):  
Chunlei Chen ◽  
Li He ◽  
Huixiang Zhang ◽  
Hao Zheng ◽  
Lei Wang

Incremental clustering algorithms play a vital role in applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General-Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when powered by a GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem establishes the relation between clustering accuracy and evolving granularity and analyzes the upper and lower bounds of different-to-same mis-affiliation; fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity; smaller work-depth means better parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity, while parallelism is positively related to it; these contradictory relations cause the dilemma. Finally, we validated the relations with a demonstration algorithm, and the experimental results verified the theoretical conclusions.
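The theorems themselves are not reproduced in the abstract, but the trade-off can be illustrated with a small sketch: points arrive in chunks whose size plays the role of the evolving granularity; assignments within a chunk are independent (and thus GPGPU-friendly), while centroids are updated only once per chunk, so coarser granularity buys parallelism at the cost of assignments lagging behind the evolving clusters. The code below is an illustrative mini-batch-style update, not the authors' demonstration algorithm.

```python
import numpy as np

# Incremental clustering step with tunable evolving granularity: within a
# chunk, assignments are computed in parallel against frozen centroids, and
# centroids are only updated after the whole chunk is processed. Names and
# data are illustrative, not taken from the paper.

def incremental_kmeans(stream, centroids, granularity):
    centroids = centroids.copy()
    counts = np.ones(len(centroids))
    labels = []
    for start in range(0, len(stream), granularity):
        chunk = stream[start:start + granularity]
        # Assignment: embarrassingly parallel over the chunk (GPU-friendly).
        d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        labels.extend(assign.tolist())
        # Update: centroids evolve only once per chunk (coarser = less often).
        for j in range(len(centroids)):
            pts = chunk[assign == j]
            if len(pts):
                counts[j] += len(pts)
                centroids[j] += (pts.sum(axis=0) - len(pts) * centroids[j]) / counts[j]
    return np.array(labels), centroids

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(c, 0.3, size=(500, 2)) for c in ((0, 0), (3, 3))])
rng.shuffle(stream)
init = rng.normal(1.5, 1.0, size=(2, 2))
for g in (1, 64, 512):
    labels, cents = incremental_kmeans(stream, init, g)
    print(f"granularity={g:4d}  centroids≈{np.round(cents, 2).tolist()}")
```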


Author(s):  
Marcelo Brandalero ◽  
Antonio Carlos Beck

Power consumption, earlier a design constraint only in embedded systems, has become the major driver for architectural optimizations in all domains, from the cloud to the edge. Application-specific accelerators provide a low-power processing solution by efficiently matching the hardware to the application; however, since in many domains the hardware must efficiently execute a broad range of fast-evolving applications, unpredictable at design time and each with distinct resource requirements, alternative approaches are required. In addition, the same hardware must adapt its computational power at run time to the system status and workload size. To address these issues, this thesis presents a general-purpose reconfigurable accelerator that can be coupled to a heterogeneous set of cores and supports Dynamic Voltage and Frequency Scaling (DVFS), synergistically combining these techniques for a better match between applications and hardware than current designs provide. The resulting architecture, MuTARe, provides a coarse-grained, regular, reconfigurable structure suitable for automatic acceleration of deployed code through dynamic binary translation. Beyond that, the structure of MuTARe is further leveraged to apply two emerging computing paradigms that can boost power efficiency: Near-Threshold Voltage (NTV) computing (while still supporting transparent acceleration) and Approximate Computing (AxC). Compared to a traditional heterogeneous system with DVFS support, the base MuTARe architecture can automatically improve execution time by up to 1.3×, adapt to the same task deadline with 1.6× lower energy consumption, or adapt to the same low energy budget with 2.3× better performance. In NTV mode, MuTARe can transparently save a further 30% of energy in memory-intensive workloads by operating the combinational datapath at half the memory frequency. In AxC mode, MuTARe can improve power savings by up to a further 50% by leveraging approximate functional units for arithmetic computations.
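The DVFS and near-threshold results above rest on the usual first-order energy model, which a short sketch makes concrete: dynamic power scales roughly with C·V²·f, so energy per task falls as voltage and frequency are lowered even though execution takes longer, until leakage accumulated over the longer runtime starts to dominate. The constants and operating points below are illustrative assumptions, not figures from the MuTARe evaluation.

```python
# Back-of-the-envelope DVFS/NTV energy model (illustrative constants only):
# dynamic power ~ C * V^2 * f, leakage grows with V and accumulates over
# the wall-clock time the task needs.

def task_energy(cycles, v, f_ghz, c_nf=1.0, leak_w_per_v=0.2):
    t_s = cycles / (f_ghz * 1e9)              # execution time (s)
    p_dyn = c_nf * 1e-9 * v**2 * f_ghz * 1e9  # dynamic power (W)
    p_leak = leak_w_per_v * v                 # crude leakage model (W)
    return (p_dyn + p_leak) * t_s             # energy (J)

# Assume frequency scales roughly linearly with voltage in this regime.
points = [("nominal DVFS", 1.0, 2.0), ("low DVFS", 0.8, 1.4), ("near-threshold", 0.55, 0.6)]
for name, v, f in points:
    e = task_energy(cycles=2e9, v=v, f_ghz=f)
    print(f"{name:15s} V={v:.2f} f={f:.1f} GHz  energy≈{e:.2f} J")
```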


Author(s):  
Markus Held ◽  
Wolfgang Küchlin ◽  
Wolfgang Blochinger

Web-based problem-solving environments provide sharing, execution, and monitoring of scientific workflows. When they depend on general-purpose workflow development systems, the workflow notations are often far too powerful and complex, especially in biology, where programming skills are rare. On the other hand, application-specific workflow systems may use special-purpose languages and execution engines, and thus suffer from a lack of standards, portability, documentation, stability of investment, and so on. In both cases, the need to support yet another application on the desktop places a burden on the system administration of a research lab. In previous research, the authors developed the web-based workflow systems Calvin and Hobbes, which enable biologists and computer scientists to approach these problems in collaboration. Both systems use a server-centric, Web 2.0-based approach. Calvin is tailored to molecular biology applications, with a simple graphical workflow language and easy access to existing BioMoby web services. Calvin workflows are compiled to industry-standard BPEL workflows, which can be edited and refined in collaboration between researchers and computer scientists using the Hobbes tool. Together, Calvin and Hobbes form our workflow platform MoBiFlow, whose principles, design, and use cases are described in this paper.


Author(s):  
Sam Gianelli ◽  
Edward Richter ◽  
Diego Jimenez ◽  
Hugo Valdez ◽  
Tosiron Adegbija ◽  
...  

2021 ◽  
Vol 13 (9) ◽  
pp. 4873
Author(s):  
Amanda Otley ◽  
Michelle Morris ◽  
Andy Newing ◽  
Mark Birkin

This work introduces improvements to the traditional variable selection procedures employed in the development of geodemographic classifications. It proposes shifting from the traditional approach of generating general-purpose, one-size-fits-all geodemographic classifications to application-specific classifications. This proposal addresses recent scepticism towards the utility of general-purpose classifications by employing supervised machine learning techniques to identify contextually relevant input variables from which to develop geodemographic classifications with increased discriminatory power. A framework introducing such techniques into the variable selection phase of geodemographic classification development is presented via a practical use case focused on generating a geodemographic classification with an increased capacity for discriminating the propensity for library use in the UK city of Leeds. Two local classifications are generated for the city: a general-purpose classification and an application-specific classification that incorporates supervised feature selection methods in the choice of input variables. The discriminatory power of each classification is evaluated and compared, and the results demonstrate that the application-specific approach generates a more contextually relevant classification and can thus underpin increasingly targeted public-policy decision making, particularly in the context of urban planning.
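A minimal sketch of the pipeline contrasted above, under assumed data and parameters: scikit-learn's mutual-information feature selection and k-means stand in for the study's actual variable set and methods. The general-purpose classification clusters on all candidate variables, while the application-specific one ranks variables against the target behaviour (a synthetic stand-in for library-use propensity), keeps the top-k, and only then builds the clusters.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.preprocessing import StandardScaler

# Illustrative comparison of general-purpose vs application-specific
# geodemographic clustering. Data, variable counts, and cluster numbers are
# invented for the sketch, not those used in the Leeds study.

rng = np.random.default_rng(42)
n_areas, n_vars = 1000, 40
X = rng.normal(size=(n_areas, n_vars))          # candidate small-area variables
library_use = X[:, 0] * 0.8 + X[:, 3] * 0.5 + rng.normal(0, 0.3, n_areas)

X_std = StandardScaler().fit_transform(X)

# General-purpose classification: cluster on every variable.
general = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_std)

# Application-specific classification: supervised feature selection first.
selector = SelectKBest(mutual_info_regression, k=10).fit(X_std, library_use)
X_sel = selector.transform(X_std)
specific = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_sel)

# Compare discriminatory power: spread of mean library use across clusters.
def between_cluster_spread(labels):
    return np.var([library_use[labels == c].mean() for c in np.unique(labels)])

print("general-purpose spread :", round(between_cluster_spread(general), 3))
print("application-specific   :", round(between_cluster_spread(specific), 3))
```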

