Neuro-Inspired Signal Processing in Ferromagnetic Nanofibers

Biomimetics ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 32
Author(s):  
Tomasz Blachowicz ◽  
Jacek Grzybowski ◽  
Pawel Steblinski ◽  
Andrea Ehrmann

Computers nowadays keep data storage and data processing in separate components, making data transfer between these units a bottleneck for computing speed. So-called cognitive (or neuromorphic) computing approaches therefore try to combine both tasks, as the human brain does, to make computing faster and less energy-consuming. One possible route to new hardware for neuromorphic computing is nanofiber networks, which can be prepared by diverse methods, from lithography to electrospinning. Here, we show results of micromagnetic simulations of three coupled semicircular fibers in which domain walls are excited by rotating magnetic fields (the inputs), leading to different output signals that can be used for stochastic data processing, mimicking biological synaptic activity and thus making such fibers suitable as artificial synapses in artificial neural networks.
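To make the signal chain concrete, here is a minimal phenomenological sketch in Python of a domain wall dragged around by a rotating field, with thermal noise making the output events stochastic. The overdamped model and all parameter values are the editor's illustrative assumptions, not the authors' micromagnetic setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy parameters; not values from the paper's simulations.
omega_field = 2 * np.pi * 1e9      # rotating-field angular frequency (rad/s)
coupling = 5e9                     # wall-field coupling strength (1/s)
noise = 1e5                        # thermal-noise amplitude (rad/sqrt(s))
dt, steps = 1e-12, 200_000         # time step (s), number of steps

theta = 0.0                        # angular position of the domain wall
events = []                        # output events (the "synaptic" signal)
for n in range(steps):
    psi = omega_field * n * dt     # instantaneous field angle (the input)
    # Overdamped dynamics: the wall is dragged toward the field angle,
    # while noise makes the response stochastic.
    theta += dt * coupling * np.sin(psi - theta) \
             + np.sqrt(dt) * noise * rng.standard_normal()
    if theta >= 2 * np.pi:         # one full revolution -> one output event
        events.append(n * dt)
        theta -= 2 * np.pi

print(f"{len(events)} output events in {steps * dt * 1e9:.0f} ns")
```

Varying the field frequency or amplitude changes the event rate, which is the kind of input-dependent, probabilistic output a stochastic synapse needs.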

2015 ◽  
Vol 9s1 ◽  
pp. BBI.S28988 ◽  
Author(s):  
Frank A. Feltus ◽  
Joseph R. Breen ◽  
Juan Deng ◽  
Ryan S. Izard ◽  
Christopher A. Konger ◽  
...  

In the last decade, high-throughput DNA sequencing has become a disruptive technology and pushed the life sciences into a distributed ecosystem of sequence data producers and consumers. Given the power of genomics and declining sequencing costs, biology is an emerging “Big Data” discipline that will soon enter the exabyte data range when all subdisciplines are combined. These datasets must be transferred across commercial and research networks in creative ways since sending data without thought can have serious consequences on data processing time frames. Thus, it is imperative that biologists, bioinformaticians, and information technology engineers recalibrate data processing paradigms to fit this emerging reality. This review attempts to provide a snapshot of Big Data transfer across networks, which is often overlooked by many biologists. Specifically, we discuss four key areas: 1) data transfer networks, protocols, and applications; 2) data transfer security including encryption, access, firewalls, and the Science DMZ; 3) data flow control with software-defined networking; and 4) data storage, staging, archiving and access. A primary intention of this article is to orient the biologist in key aspects of the data transfer process in order to frame their genomics-oriented needs to enterprise IT professionals.
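To put such transfer times in perspective, a back-of-the-envelope sketch; the bandwidth and efficiency figures are illustrative assumptions, not values from the review.

```python
# Rough transfer-time estimates for genomics-scale datasets.
def transfer_days(size_bytes: float, link_bps: float, efficiency: float = 0.8) -> float:
    """Days to move size_bytes over a link of link_bps, assuming only
    an `efficiency` fraction of nominal bandwidth is achieved end to end."""
    seconds = size_bytes * 8 / (link_bps * efficiency)
    return seconds / 86_400

petabyte = 1e15
for gbps in (1, 10, 100):
    print(f"1 PB over {gbps:>3} Gb/s: {transfer_days(petabyte, gbps * 1e9):7.1f} days")
```

At 80% efficiency, 1 PB takes on the order of 115 days over 1 Gb/s but about a day over 100 Gb/s, which is why protocol choice and dedicated paths such as a Science DMZ matter.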


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Youngjin Kim ◽  
Chul Hyeon Park ◽  
Jun Seop An ◽  
Seung-Hye Choi ◽  
Tae Whan Kim

Artificial synaptic devices based on natural organic materials are becoming highly desirable for extending applications to wearable and implantable devices due to their biocompatibility, flexibility, light weight, and scalability. Herein, we propose a zein material, extracted from natural maize, as the active layer in an artificial synapse. The synaptic device exhibited notable digital data storage and analog data processing capabilities. Remarkably, the zein-based synaptic device achieved recognition accuracy of up to 87% and exhibited clear digit-classification results in the learning and inference test. Moreover, the recognition accuracy of the zein-based artificial synapse was maintained within 2%, even under mechanical stress. We believe that this work will be an important asset toward the realization of wearable and implantable devices utilizing artificial synapses.
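The analog weight updates such devices provide are commonly modeled with a nonlinear potentiation/depression rule; the following Python sketch uses illustrative parameters, not measured zein-device data.

```python
# Generic nonlinear potentiation/depression model often used to describe
# artificial-synapse conductance; bounds and nonlinearity are illustrative.
G_min, G_max, nl = 1.0, 10.0, 3.0   # conductance bounds, nonlinearity factor

def potentiate(G: float, pulses: int) -> float:
    for _ in range(pulses):
        G += (G_max - G) / nl        # update shrinks as G saturates
    return G

def depress(G: float, pulses: int) -> float:
    for _ in range(pulses):
        G -= (G - G_min) / nl
    return G

G = G_min
G = potentiate(G, 20)                # LTP: 20 "write" pulses
print(f"after potentiation: {G:.2f}")
G = depress(G, 20)                   # LTD: 20 "erase" pulses
print(f"after depression:  {G:.2f}")
```

In a learning and inference test, conductance values like `G` serve directly as the weights of a small neural network, so the linearity and repeatability of these updates set the achievable recognition accuracy.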


Molecules ◽  
2020 ◽  
Vol 25 (11) ◽  
pp. 2550 ◽  
Author(s):  
Tomasz Blachowicz ◽  
Andrea Ehrmann

Neuromorphic computing is assumed to be significantly more energy-efficient than conventional computers and is expected to outperform them in several applications, such as data classification, since it overcomes the so-called von Neumann bottleneck. Artificial synapses and neurons can be implemented on conventional hardware with new software, but they can also be built from diverse spintronic devices and other elements to avoid the disadvantages of current hardware architectures altogether. Here, we report on diverse approaches, published in recent years, to implementing neuromorphic functionality in novel hardware using magnetic elements. Magnetic elements play an important role in neuromorphic computing. While other approaches, such as optical and conductive elements, are also under investigation by many groups, magnetic nanostructures and magnetic materials in general offer large advantages, especially in terms of data storage, and they can also be used for data transport, e.g., by the propagation of skyrmions or domain walls. This review underlines the possible applications of magnetic materials and nanostructures in neuromorphic systems.
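As a conceptual illustration of data transport by domain-wall propagation (a racetrack-style shift register), consider the toy Python sketch below; it is the editor's schematic picture of the mechanism, not a physical model from the review.

```python
from collections import deque

# Toy racetrack-style shift register: bits are stored as magnetization
# domains and transported by shifting domain walls along the track.
class DomainWallTrack:
    def __init__(self, length: int):
        self.cells = deque([0] * length, maxlen=length)

    def inject(self, bit: int) -> None:
        """Nucleate a domain at the input end of the track."""
        self.cells.appendleft(bit)

    def shift(self) -> int:
        """One current pulse moves every wall one cell; the bit at the
        far end reaches the read head and is emitted."""
        out = self.cells[-1]
        self.inject(0)
        return out

track = DomainWallTrack(8)
for b in (1, 0, 1, 1):
    track.inject(b)
print([track.shift() for _ in range(8)])  # injected bits emerge in order
```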


Nanomaterials ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 349
Author(s):  
Devika Sudsom ◽  
Andrea Ehrmann

Combining clusters of one magnetic material with a matrix of another magnetic material is very interesting for basic research because new, potentially technologically applicable magnetic properties or magnetization reversal processes may be found. Here we report on different arrays combining iron and nickel, for example by surrounding circular nanodots of one material with a matrix of the other or by combining iron and nickel nanodots in air. Micromagnetic simulations were performed using OOMMF (the Object Oriented MicroMagnetic Framework). Our results show that magnetization reversal processes are strongly influenced by neighboring nanodots and by the magnetic matrix surrounding the nanodots, which becomes macroscopically visible as several steps along the slopes of the hysteresis loops. Such material combinations allow quaternary memory systems to be prepared and are thus highly relevant for applications in data storage and processing.
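A quaternary memory cell stores one of four remanence levels, i.e., two bits per cell. A minimal read-out sketch follows; the level values are the editor's illustrative assumptions, not the simulated remanences.

```python
# Sketch: reading a quaternary memory cell from its remanent magnetization.
# In the simulations, four such levels appear as distinct steps in the
# hysteresis loop; the normalized values below are illustrative.
LEVELS = [-1.0, -0.33, 0.33, 1.0]          # four remanence states

def read_symbol(m_remanent: float) -> int:
    """Map a measured remanence to the nearest of the four states (2 bits)."""
    return min(range(4), key=lambda i: abs(LEVELS[i] - m_remanent))

for m in (-0.9, -0.3, 0.4, 0.95):
    print(f"M_r = {m:+.2f} -> symbol {read_symbol(m)} ({read_symbol(m):02b})")
```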


2002 ◽  
Vol 41 (Part 1, No. 3B) ◽  
pp. 1804-1807 ◽  
Author(s):  
Gakuji Hashimoto ◽  
Hiroki Shima ◽  
Kenji Yamamoto ◽  
Tsutomu Maruyama ◽  
Takashi Nakao ◽  
...  

2018 ◽  
Vol 8 (11) ◽  
pp. 2216
Author(s):  
Jiahui Jin ◽  
Qi An ◽  
Wei Zhou ◽  
Jiakai Tang ◽  
Runqun Xiong

Network bandwidth is a scarce resource in big data environments, so data locality is a fundamental problem for data-parallel frameworks such as Hadoop and Spark. This problem is exacerbated in multicore server-based clusters, where multiple tasks running on the same server compete for the server’s network bandwidth. Existing approaches solve this problem by scheduling computational tasks near the input data and considering the server’s free time, data placements, and data transfer costs. However, such approaches usually set identical values for data transfer costs, even though a multicore server’s data transfer cost increases with the number of data-remote tasks. Ultimately, this makes their minimization of data-processing time ineffective. As a solution, we propose DynDL (Dynamic Data Locality), a novel data-locality-aware task-scheduling model that handles dynamic data transfer costs for multicore servers. DynDL offers greater flexibility than existing approaches by using a set of non-decreasing functions to evaluate dynamic data transfer costs. We also propose online and offline algorithms (based on DynDL) that minimize data-processing time and adaptively adjust data locality. Although scheduling under DynDL is NP-complete, we prove that the offline algorithm runs in quadratic time and generates optimal results for specific uses of DynDL. Using a series of simulations and real-world executions, we show that our algorithms reduce data-processing time by 30% compared with algorithms that do not consider dynamic data transfer costs. Moreover, they can adaptively adjust data localities based on the server’s free time, data placement, and network bandwidth, and can schedule tens of thousands of tasks within seconds or less.
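The core idea, that a server's data transfer cost grows with the number of data-remote tasks it already hosts, can be sketched with a simple greedy online placement rule. The sketch below is an illustration of the cost model under assumed values, not the paper's exact DynDL algorithm.

```python
# Greedy online placement with a non-decreasing dynamic transfer cost:
# each extra remote task on a server makes its shared NIC slower for all.
def transfer_cost(remote_tasks: int) -> float:
    """Non-decreasing cost function; the coefficients are illustrative."""
    return 1.0 + 0.5 * remote_tasks

def place_task(data_server: str, servers: dict) -> str:
    best, best_cost = None, float("inf")
    for name, state in servers.items():
        local = (name == data_server)
        cost = state["busy_until"] + (0.0 if local
                                      else transfer_cost(state["remote"]))
        if cost < best_cost:
            best, best_cost = name, cost
    state = servers[best]
    state["busy_until"] = best_cost + 1.0     # 1.0 = unit processing time
    if best != data_server:
        state["remote"] += 1                  # one more data-remote task
    return best

servers = {s: {"busy_until": 0.0, "remote": 0} for s in ("s1", "s2", "s3")}
tasks = ["s1", "s1", "s1", "s2", "s1"]        # server holding each task's data
print([place_task(d, servers) for d in tasks])
```

Because `transfer_cost` is non-decreasing, the scheduler stops piling remote tasks onto one server once the congestion penalty outweighs the benefit of an earlier start, which is the trade-off fixed-cost models miss.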


2021 ◽  
Vol 23 (06) ◽  
pp. 784-793
Author(s):  
Kiran Guruprasad Shetty P S ◽  
Dr. Ravish Aradhya H V
Power estimation is a prominent concern for microcontrollers, which aim to be as power-efficient as possible. A new method is proposed for estimating power based on instruction execution in AURIX, an automotive microcontroller. The main aim of this method is to estimate power at the program (software) or instruction level as instructions are processed by the microprocessor, which is more accurate than previous methodologies. The estimation is based on the sets of instructions that AURIX uses for data transfer/storage to memory, data processing, and data execution across various applications. Most previous methodologies are inaccurate because of the abstraction levels at which they operate.
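Instruction-level estimation of this kind typically sums per-instruction-class base energies plus inter-instruction overheads. A minimal sketch follows; the cost table and overhead value are illustrative assumptions, not measured AURIX data.

```python
# Instruction-level energy estimate: base cost per instruction class plus
# an overhead when consecutive instructions belong to different classes.
BASE_COST_NJ = {"load": 1.8, "store": 2.0, "alu": 1.0, "mul": 1.4}
SWITCH_OVERHEAD_NJ = 0.3     # illustrative inter-instruction overhead

def program_energy(trace: list[str]) -> float:
    total, prev = 0.0, None
    for instr in trace:
        total += BASE_COST_NJ[instr]
        if prev is not None and prev != instr:
            total += SWITCH_OVERHEAD_NJ
        prev = instr
    return total

trace = ["load", "alu", "alu", "mul", "store"]
print(f"estimated energy: {program_energy(trace):.1f} nJ")
# Average power then follows as energy / execution time at the core clock.
```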


Author(s):  
Ivan Mozghovyi ◽  
Anatoliy Sergiyenko ◽  
Roman Yershov

Meeting the increasing requirements for data transfer and storage is one of the crucial questions today. Several methods of high-speed data transmission exist, but each meets only limited requirements tied to its narrowly focused target. Data compression addresses both high-speed transfer and low-volume data storage. This paper is devoted to the compression of GIF images using a modified LZW algorithm with a tree-based dictionary. This has led to a decrease in lookup time and an increase in compression speed, and in turn enables the development of a method for constructing a hardware compression accelerator in future research.
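A tree-based (trie) dictionary makes the longest-match lookup one child step per input byte instead of repeated whole-string hashing. Below is a minimal Python sketch of LZW compression over such a trie; it illustrates the idea and is not the authors' implementation.

```python
# Minimal LZW compressor with a trie dictionary. Each node is
# [code, children], and walking one child per input byte finds the
# longest dictionary match.
def lzw_compress(data: bytes) -> list[int]:
    root = {b: [b, {}] for b in range(256)}  # single-byte entries, codes 0-255
    next_code, out = 256, []
    i, n = 0, len(data)
    while i < n:
        node = root[data[i]]
        i += 1
        while i < n and data[i] in node[1]:  # extend the match byte by byte
            node = node[1][data[i]]
            i += 1
        out.append(node[0])                  # emit code of the longest match
        if i < n:                            # new entry: match + next byte
            node[1][data[i]] = [next_code, {}]
            next_code += 1
    return out

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```

On the classic test string the repeated phrases collapse into single codes (256 and up), which is exactly the redundancy LZW exploits in GIF image data.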


2018 ◽  
Vol 1 (2) ◽  
pp. 14-24
Author(s):  
Dame Christine Sagala ◽  
Ali Sadikin ◽  
Beni Irawan

A data processing system is a necessary means of turning data into useful information. Such a system integrates data storage, insertion, modification, scheduling, and reporting, so that it can help departments exchange information and make decisions quickly. GKPI Pal Merah Jambi currently still uses Microsoft Office Word and disseminates information such as worship schedules, church activities, and other worship routines on paper and wall postings. Printing worship schedules and reports requires substantial operational funds, and data collection and storage still have deficiencies: records are kept in books, processing large amounts of data is difficult, and data is stored in only one place and remains passive. Based on these problems, the authors conducted research titled "Designing a Web-Based Data Processing System for the GKPI Pal Merah Church in Jambi". The purpose of this study is to design and produce a data processing system for the church; using this system can facilitate data processing at the GKPI Pal Merah Jambi Church. The study uses the waterfall development method, which provides a systematic, sequential approach to requirements analysis, design, implementation and unit testing, system testing, and maintenance. The application is built for the web with a MySQL DBMS database, the PHP programming language, and Laravel.


Author(s):  
Abou_el_ela Abdou Hussein

Day by day, advanced web technologies have led to tremendous growth in the volume of data generated daily. This mountain of large, spread-out data sets leads to the phenomenon called big data: collections of massive, heterogeneous, unstructured, and complex data sets. The big data life cycle can be represented as collecting (capturing), storing, distributing, manipulating, interpreting, analyzing, investigating, and visualizing big data. Traditional techniques such as relational database management systems (RDBMS) cannot handle big data because of their inherent limitations, so advances in computing architecture are required to handle both the data storage requisites and the heavy processing needed to analyze huge volumes and varieties of data economically. Among the many technologies for manipulating big data is Hadoop, an open-source distributed data processing framework that is one of the most prominent and well-known solutions to the problem of handling big data. Apache Hadoop is based on the Google File System and the MapReduce programming paradigm. In this paper we survey all big data characteristics, starting from the first three V's, which researchers have extended over time to more than fifty-six V's, and we compare the literature to reach the best representation and a precise clarification of all big data V characteristics. We highlight the challenges facing big data processing and how to overcome them using Hadoop, and we discuss its use in processing big data sets as a solution for resolving various problems in a distributed, cloud-based environment. This paper focuses mainly on the different components of Hadoop, such as Hive, Pig, and HBase. We also give a full description of Hadoop's pros and cons, and of improvements that address Hadoop's problems through a proposed cost-efficient scheduler algorithm for heterogeneous Hadoop systems.
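The MapReduce paradigm that Hadoop implements can be sketched with a classic Hadoop Streaming word count; the two scripts below are a minimal illustration, and the jar path in the comment is an assumption that varies by installation.

```python
#!/usr/bin/env python3
# mapper.py -- minimal Hadoop Streaming mapper: emit (word, 1) per word.
# Run with, e.g. (jar path varies by installation):
#   hadoop jar hadoop-streaming.jar -input in -output out \
#          -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- sums counts per word. Hadoop sorts mapper output by key,
# so all lines for one word arrive consecutively.
import sys

current, count = None, 0
for line in sys.stdin:
    word, n = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print(f"{current}\t{count}")
        current, count = word, 0
    count += int(n)
if current is not None:
    print(f"{current}\t{count}")
```

Because the map and reduce steps only see local lines, the framework can shard the input across the distributed file system and run both phases in parallel, which is what lets Hadoop scale past RDBMS limits.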

