Application of Neuromorphic Olfactory Approach for High-Accuracy Classification of Malts

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 440
Author(s):  
Anup Vanarse ◽  
Adam Osseiran ◽  
Alexander Rassau ◽  
Peter van der Made

Current developments in artificial olfactory systems, also known as electronic nose (e-nose) systems, have benefited from advanced machine learning techniques that have significantly improved the conditioning and processing of multivariate feature-rich sensor data. These advancements are complemented by the application of bioinspired algorithms and architectures based on findings from neurophysiological studies focusing on the biological olfactory pathway. The application of spiking neural networks (SNNs), and concepts from neuromorphic engineering in general, is one of the key factors that have led to the design and development of efficient bioinspired e-nose systems. However, only a limited number of studies have focused on deploying these models on a natively event-driven hardware platform that exploits the benefits of neuromorphic implementation, such as ultra-low-power consumption and real-time processing, for simplified integration in a portable e-nose system. In this paper, we extend our previously reported neuromorphic encoding and classification approach to a real-world dataset that consists of sensor responses from a commercial e-nose system when exposed to eight different types of malts. We show that the proposed SNN-based classifier was able to deliver 97% accurate classification results at a maximum latency of 0.4 ms per inference with a power consumption of less than 1 mW when deployed on neuromorphic hardware. One of the key advantages of the proposed neuromorphic architecture is that the entire functionality, including pre-processing, event encoding, and classification, can be mapped on the neuromorphic system-on-a-chip (NSoC) to develop power-efficient and highly accurate real-time e-nose systems.
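To make the encode-and-classify flow concrete, the sketch below shows a toy latency (time-to-first-spike) encoding of multichannel e-nose readings feeding a small integrate-and-fire readout, in Python/NumPy. It is illustrative only: the paper's actual encoding scheme, trained weights, and the malt dataset are not reproduced, and all names and parameters here are hypothetical.

```python
# Illustrative sketch only: a latency (time-to-first-spike) encoding of e-nose
# sensor channels followed by a toy integrate-and-fire readout layer. Not the
# paper's encoder or classifier; weights and parameters are hypothetical.
import numpy as np

def latency_encode(sensor_response, t_max=100):
    """Map each normalized sensor channel to a spike time: stronger response -> earlier spike."""
    r = (sensor_response - sensor_response.min()) / (np.ptp(sensor_response) + 1e-9)
    return np.round((1.0 - r) * t_max).astype(int)  # spike times in time steps

def lif_readout(spike_times, weights, threshold=1.0, t_max=100, leak=0.99):
    """One leaky integrate-and-fire neuron per class; the first neuron to fire wins."""
    v = np.zeros(weights.shape[0])
    for t in range(t_max + 1):
        v *= leak
        v += weights[:, spike_times == t].sum(axis=1)  # add weights of channels spiking now
        fired = np.where(v >= threshold)[0]
        if fired.size:
            return int(fired[0]), t  # predicted class and decision latency (time steps)
    return int(np.argmax(v)), t_max

# Toy usage: 8 sensor channels, 8 malt classes, random weights stand in for trained ones.
rng = np.random.default_rng(0)
response = rng.random(8)                 # one e-nose measurement (normalized)
weights = rng.random((8, 8)) * 0.5       # hypothetical trained weights
label, latency = lif_readout(latency_encode(response), weights)
print(f"predicted class {label} after {latency} time steps")
```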

2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Supun Kamburugamuve ◽  
Leif Christiansen ◽  
Geoffrey Fox

We describe IoTCloud, a platform to connect smart devices to cloud services for real-time data processing and control. A device connected to IoTCloud can communicate with real-time data analysis frameworks deployed in the cloud via messaging. The platform design is scalable in connecting devices as well as transferring and processing data. With IoTCloud, a user can develop real-time data processing algorithms in an abstract framework without concern for the underlying details of how the data is distributed and transferred. For this platform, we primarily consider real-time robotics applications such as autonomous robot navigation, where there are strict requirements on processing latency and demand for scalable processing. To demonstrate the effectiveness of the system, a robotic application is developed on top of the framework. The system and the robotics application characteristics are measured to show that data processing in central servers is feasible for real-time sensor applications.
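As a rough illustration of the decoupling such a platform provides, the sketch below (plain Python, with a queue.Queue standing in for the messaging layer) shows a device publishing sensor readings and a cloud-side worker consuming and processing them without knowing how the data was transferred. The payload fields and device names are hypothetical, not IoTCloud APIs.

```python
# Minimal sketch of device-to-cloud decoupling via messaging. A queue.Queue stands
# in for the real message broker; message fields and device names are hypothetical.
import json, queue, threading, time

broker = queue.Queue()  # stand-in for the platform's messaging layer

def device(device_id, n_readings=5):
    """Simulated robot/sensor publishing range readings."""
    for i in range(n_readings):
        msg = {"device": device_id, "seq": i, "range_m": 1.0 + 0.1 * i, "ts": time.time()}
        broker.put(json.dumps(msg))
        time.sleep(0.01)
    broker.put(None)  # end-of-stream marker

def cloud_worker():
    """Cloud-side real-time processing: consumes messages and reacts to them."""
    while True:
        raw = broker.get()
        if raw is None:
            break
        msg = json.loads(raw)
        latency_ms = (time.time() - msg["ts"]) * 1000
        print(f"{msg['device']} #{msg['seq']}: range={msg['range_m']:.2f} m "
              f"(processing latency {latency_ms:.1f} ms)")

t = threading.Thread(target=device, args=("turtlebot-1",))
t.start()
cloud_worker()
t.join()
```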


2021 ◽  
Vol 33 (1) ◽  
Author(s):  
Donald Munro ◽  
Andre Calitz ◽  
Dieter Vogts

A software architecture codifies the design choices of software developers, which defines a modular organizational spine for the design of a software artefact. Different architectures may be specified for different types of artefacts; a real-time interactive artefact, for example, would have markedly different requirements to those of a batch-based transactional system. The use of software architecture becomes increasingly important as the complexity of artefacts increases. Augmented Reality blends the real world, observed through a computer interface, with a computer-generated virtual world. With the advent of powerful mobile devices, Mobile Augmented Reality (MAR) applications have become increasingly feasible; however, the increased power has led to increased complexity. Most MAR research has been directed towards technologies rather than design, resulting in a dearth of architecture and design literature for MAR. This research is targeted at addressing this void. The main requirement that a MAR architecture must meet is identified as the efficient real-time processing of data streams such as video frames and sensor data. A set of highly parallelised architectural patterns that meet this requirement is documented within the context of MAR. The contribution of this research is a software architecture, codified as architectural patterns, for MAR.
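One way to picture the highly parallelised patterns described here is a pipes-and-filters arrangement in which feature detection and rendering run as concurrent stages joined by bounded queues. The Python sketch below is a hypothetical illustration of that pattern, not code from the article; stage names and the placeholder workloads are invented.

```python
# Hypothetical pipes-and-filters sketch: concurrent pipeline stages joined by
# bounded queues, processing a stream of frames. Stage workloads are placeholders.
import queue, threading

def stage(in_q, out_q, work):
    while True:
        item = in_q.get()
        if item is None:              # propagate shutdown marker downstream
            if out_q is not None:
                out_q.put(None)
            break
        result = work(item)
        if out_q is not None:
            out_q.put(result)

frames, features = queue.Queue(maxsize=4), queue.Queue(maxsize=4)

detector = threading.Thread(target=stage, args=(frames, features, lambda f: (f, f"features-of-{f}")))
renderer = threading.Thread(target=stage, args=(features, None, lambda x: print("render", x)))
detector.start(); renderer.start()

for frame_id in range(5):             # stand-in for camera frames
    frames.put(f"frame-{frame_id}")
frames.put(None)                      # shut the pipeline down
detector.join(); renderer.join()
```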


Author(s):  
William Shackleford ◽  
Keith Stouffer

The Next Generation Inspection System (NGIS) project is a testbed that consists of a Cordax Coordinate Measuring Machine (CMM), advanced sensors, and the National Institute of Standards and Technology (NIST) Real-Time Control System (RCS) open architecture controller. The RCS controller permits real-time processing of sensor data for feedback control of the inspection probe. The open architecture controller permits external access to internal data, such as the current position of the probe. A remote access web site was developed to access this data to drive a Virtual Reality Modeling Language (VRML) model of the Cordax CMM. The remote access web site contains a client-controlled pan/tilt/zoom camera which sends video to the client, as well as the VRML 3D model of the CMM that is controlled by the NGIS controller located at NIST. This remote access web site allows a client to monitor a remote inspection with a PC and an internet connection.
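A minimal sketch of the monitoring idea, under the assumption that the controller exposes probe positions through some query interface, is shown below; read_probe_position() and broadcast_to_clients() are hypothetical placeholders, not NIST or RCS APIs.

```python
# Illustrative sketch (not NIST code): a loop that reads the probe position exposed
# by an open-architecture controller and pushes it to remote clients so a 3D model
# can mirror the machine. Both functions below are hypothetical stand-ins.
import time

def read_probe_position():
    """Stand-in for querying the controller's internal probe position."""
    t = time.time() % 10
    return {"x": 10.0 * t, "y": 5.0 * t, "z": 2.0}   # mm, fabricated values for the demo

def broadcast_to_clients(position):
    """Stand-in for streaming the pose update that drives the remote 3D model."""
    print(f"probe at x={position['x']:.1f} y={position['y']:.1f} z={position['z']:.1f} mm")

if __name__ == "__main__":
    for _ in range(3):                 # in practice this would run continuously
        broadcast_to_clients(read_probe_position())
        time.sleep(0.1)                # ~10 Hz pose updates to remote viewers
```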


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3409 ◽  
Author(s):  
Shiyu Wang ◽  
Shengbing Zhang ◽  
Xiaoping Huang ◽  
Jianfeng An ◽  
Libo Chang

The expansion and improvement of synthetic aperture radar (SAR) technology have greatly enhanced its practicality. SAR imaging requires real-time processing with limited power consumption for large input images. Designing a specific heterogeneous array processor is an effective approach to meet the power consumption constraints and real-time processing requirements of an application system. In this paper, taking a commonly used algorithm for SAR imaging, the chirp scaling algorithm (CSA), as an example, the characteristics of each calculation stage in the SAR imaging process are analyzed, and the data flow model of SAR imaging is extracted. A heterogeneous array architecture for SAR imaging that effectively supports Fast Fourier Transform/Inverse Fast Fourier Transform (FFT/IFFT) and phase compensation operations is proposed. First, a heterogeneous array architecture consisting of fixed-point PE units and floating-point FPE units, proposed respectively for the FFT/IFFT and phase compensation operations, increases energy efficiency by 50% compared with an architecture using only floating-point units. Second, data cross-placement and simultaneous access strategies are proposed to support the intra-block parallel processing of SAR block imaging, achieving up to 115.2 GOPS throughput. Third, a resource management strategy for the heterogeneous computing array is designed, which supports pipelined processing of FFT/IFFT and phase compensation operations, improving PE utilization by a factor of 1.82 and increasing energy efficiency by a factor of 1.5. Implemented in 65-nm technology, the experimental results show that the processor can achieve an energy efficiency of up to 254 GOPS/W. The imaging fidelity and accuracy of the proposed processor were verified by evaluating the image quality of the actual scene.
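For orientation, the NumPy sketch below mimics the compute pattern of the chirp scaling algorithm that the processor targets: alternating FFT/IFFT stages and elementwise phase-compensation multiplies. The phase screens are random placeholders rather than true CSA kernels; the point is the FFT-plus-complex-multiply structure that motivates mixing fixed-point FFT units with floating-point phase units.

```python
# Schematic sketch of the CSA compute pattern: FFT/IFFT stages along azimuth/range
# interleaved with elementwise phase compensation. Phase screens are placeholders,
# not the true CSA kernels.
import numpy as np

def csa_like_pipeline(raw, phi1, phi2, phi3):
    """raw: complex echo matrix (azimuth x range); phi*: precomputed phase screens."""
    s = np.fft.fft(raw, axis=0)          # azimuth FFT
    s *= np.exp(1j * phi1)               # chirp scaling phase compensation
    s = np.fft.fft(s, axis=1)            # range FFT
    s *= np.exp(1j * phi2)               # range compression / RCMC phase
    s = np.fft.ifft(s, axis=1)           # range IFFT
    s *= np.exp(1j * phi3)               # azimuth compression phase
    return np.fft.ifft(s, axis=0)        # azimuth IFFT -> focused image

az, rg = 256, 512                        # toy scene size
rng = np.random.default_rng(1)
raw = rng.standard_normal((az, rg)) + 1j * rng.standard_normal((az, rg))
phases = [rng.uniform(-np.pi, np.pi, (az, rg)) for _ in range(3)]
image = csa_like_pipeline(raw, *phases)
print(image.shape, image.dtype)
```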


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1372 ◽  
Author(s):  
Manuel Garcia Alvarez ◽  
Javier Morales ◽  
Menno-Jan Kraak

Smart cities are urban environments where Internet of Things (IoT) devices provide a continuous source of data about urban phenomena such as traffic and air pollution. The exploitation of the spatial properties of data enables situation and context awareness. However, the integration and analysis of data from IoT sensing devices remain a crucial challenge for the development of IoT applications in smart cities. Existing approaches provide no or limited ability to perform spatial data analysis, even when spatial information plays a significant role in decision making across many disciplines. This work proposes a generic approach to enabling spatiotemporal capabilities in information services for smart cities. We adopted a multidisciplinary approach to achieving data integration and real-time processing, and developed a reference architecture for the development of event-driven applications. This type of application seamlessly integrates IoT sensing devices, complex event processing, and spatiotemporal analytics through a processing workflow for the detection of geographic events. Through the implementation and testing of a system prototype, built upon an existing sensor network, we demonstrated the feasibility, performance, and scalability of event-driven applications to achieve real-time processing capabilities and detect geographic events.
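As a minimal illustration of a spatially aware event rule of the kind such an architecture supports, the sketch below filters a stream of IoT readings by a distance predicate and a threshold and emits a "geographic event" when both hold. The point of interest, threshold, and sensor feed are hypothetical.

```python
# Minimal sketch of a spatial event rule: emit a geographic event when a reading is
# both near a point of interest and above a pollution threshold. All values are
# hypothetical and stand in for a real IoT feed and CEP engine.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

POI = (52.2215, 6.8937)          # hypothetical point of interest
PM25_LIMIT = 25.0                # µg/m3, illustrative threshold

def detect_geographic_events(readings):
    for r in readings:
        near = haversine_m(r["lat"], r["lon"], *POI) < 500   # within 500 m
        if near and r["pm25"] > PM25_LIMIT:
            yield {"event": "air_quality_alert", "sensor": r["id"], "pm25": r["pm25"]}

stream = [
    {"id": "s1", "lat": 52.2220, "lon": 6.8940, "pm25": 31.2},
    {"id": "s2", "lat": 52.2500, "lon": 6.9500, "pm25": 40.0},  # too far away
    {"id": "s3", "lat": 52.2212, "lon": 6.8930, "pm25": 12.0},  # below threshold
]
for event in detect_geographic_events(stream):
    print(event)
```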


Author(s):  
Oliver Rhodes ◽  
Luca Peres ◽  
Andrew G. D. Rowley ◽  
Andrew Gait ◽  
Luis A. Plana ◽  
...  

Real-time simulation of a large-scale biologically representative spiking neural network is presented, through the use of a heterogeneous parallelization scheme and SpiNNaker neuromorphic hardware. A published cortical microcircuit model is used as a benchmark test case, representing ≈1 mm² of early sensory cortex, containing 77k neurons and 0.3 billion synapses. This is the first hard real-time simulation of this model, with 10 s of biological simulation time executed in 10 s wall-clock time. This surpasses best-published efforts on HPC neural simulators (3× slowdown) and GPUs running optimized spiking neural network (SNN) libraries (2× slowdown). Furthermore, the presented approach indicates that real-time processing can be maintained with increasing SNN size, breaking the communication barrier incurred by traditional computing machinery. Model results are compared to an established HPC simulator baseline to verify simulation correctness, comparing well across a range of statistical measures. Energy to solution and energy per synaptic event are also reported, demonstrating that the relatively low-tech SpiNNaker processors achieve a 10× reduction in energy relative to modern HPC systems, and comparable energy consumption to modern GPUs. Finally, system robustness is demonstrated through multiple 12 h simulations of the cortical microcircuit, each simulating 12 h of biological time, and demonstrating the potential of neuromorphic hardware as a neuroscience research tool for studying complex spiking neural networks over extended time periods. This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.
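The hard real-time claim means every biological timestep must complete within the same wall-clock interval. The toy Python loop below illustrates that constraint and reports a real-time factor and missed deadlines; the step body is a placeholder workload, not a SpiNNaker or cortical-microcircuit simulation.

```python
# Toy illustration of the hard real-time constraint: each biological timestep (1 ms)
# must finish within 1 ms of wall-clock time. The step() body is a placeholder.
import time

DT_BIO = 0.001            # 1 ms biological timestep
N_STEPS = 1000            # 1 s of biological time

def step():
    pass                  # placeholder for neuron/synapse updates on real hardware

start = time.perf_counter()
overruns = 0
for i in range(N_STEPS):
    deadline = start + (i + 1) * DT_BIO
    step()
    if time.perf_counter() > deadline:
        overruns += 1     # this step missed its real-time deadline
    else:
        time.sleep(max(0.0, deadline - time.perf_counter()))

wall = time.perf_counter() - start
print(f"real-time factor: {wall / (N_STEPS * DT_BIO):.2f} (1.0 = hard real time), "
      f"missed deadlines: {overruns}")
```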


2021 ◽  
Vol 7 ◽  
pp. e646
Author(s):  
Haitham Elwahsh ◽  
Engy El-shafeiy ◽  
Saad Alanazi ◽  
Medhat A. Tawfeek

Cardiovascular diseases (CVDs) are the most critical heart diseases, and accurate real-time analytics for heart disease is significant. This paper sought to develop a smart healthcare framework (SHDML) using deep and machine learning techniques based on stochastic gradient descent (SGD) optimization to predict the presence of heart disease. The SHDML framework consists of two stages. The first stage monitors the heartbeat rate of a patient in real time; it was developed using an ATmega32 microcontroller and pulse rate sensors to determine the heartbeat rate per minute. The developed framework broadcasts the acquired sensor data to a Firebase Cloud database every 20 seconds, and a companion smart application displays the sensor data. The second stage is used in medical decision support systems to predict and diagnose heart diseases: deep or machine learning techniques were ported to the smart application to analyze user data and predict CVDs in real time. Two different deep and machine learning techniques were evaluated for their performance, trained and tested on a widely used open-access dataset. The proposed SHDML framework achieved very good performance with an accuracy of 0.99, sensitivity of 0.94, specificity of 0.85, and F1-score of 0.87.
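A hedged sketch of the second (prediction) stage is given below using scikit-learn's SGDClassifier on synthetic tabular data; it is not the paper's SHDML model, and make_classification merely stands in for the open-access heart-disease dataset. The feature count and hyperparameters are illustrative.

```python
# Hedged sketch of an SGD-trained classifier on tabular cardiovascular-style features.
# Synthetic data stands in for the real heart-disease dataset; not the SHDML model.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 13 features loosely mirrors common tabular heart-disease datasets (illustrative only).
X, y = make_classification(n_samples=1000, n_features=13, n_informative=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = make_pipeline(StandardScaler(), SGDClassifier(max_iter=1000, random_state=42))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("accuracy:", round(accuracy_score(y_te, pred), 3), "F1:", round(f1_score(y_te, pred), 3))
```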


Author(s):  
Shiyu Wang ◽  
Shengbing Zhang ◽  
Xiaoping Huang ◽  
Hao Lyu

Spaceborne SAR (synthetic aperture radar) imaging requires real-time processing of enormous amounts of input data with limited power consumption. Designing advanced heterogeneous array processors is an effective way to meet the power constraints and real-time processing requirements of application systems. To design an efficient SAR imaging processor, the on-chip data organization structure and access strategy are of critical importance. Taking the typical SAR imaging algorithm, the chirp scaling algorithm, as the targeted algorithm, this paper analyzes the characteristics of each calculation stage in the SAR imaging process, extracts the data flow model of SAR imaging, and proposes a storage strategy of cross-region cross-placement with synchronized data sorting to keep FFT/IFFT calculations pipelined in parallel. The memory wall problem is alleviated through an on-chip multi-level data buffer structure, ensuring that the imaging calculation pipeline is supplied with sufficient data. Based on this memory organization and access strategy, the SAR imaging pipeline that effectively supports FFT/IFFT and phase compensation operations is optimized. The processor based on this storage strategy achieves a throughput of up to 115.2 GOPS and an energy efficiency of up to 254 GOPS/W when implemented in 65 nm technology. Compared with conventional CPU+GPU acceleration solutions, the performance-to-power ratio is increased by 63.4 times. The proposed architecture not only improves real-time performance but also reduces the design complexity of the SAR imaging system, offering excellent tailoring and scalability to satisfy the practical needs of different SAR imaging platforms.
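The sketch below illustrates the cross-placement idea in miniature: skewing a 2-D echo matrix across memory banks so that both row-wise (range) and column-wise (azimuth) accesses spread over all banks, avoiding conflicts when the FFT direction alternates. The bank count and skew function are hypothetical, not the paper's exact scheme.

```python
# Illustrative skewed bank placement: both a full row and a full column of the
# echo matrix touch all banks, so row-wise and column-wise FFT accesses are
# conflict-free. Bank count and skew function are hypothetical.
N_BANKS = 8

def bank_of(row, col):
    """Skewed placement: consecutive elements of a row AND of a column map to different banks."""
    return (row + col) % N_BANKS

def banks_touched(indices):
    return sorted({bank_of(r, c) for r, c in indices})

row = [(5, c) for c in range(N_BANKS)]      # one range line (row-wise access)
col = [(r, 5) for r in range(N_BANKS)]      # one azimuth line (column-wise access)
print("row access hits banks:", banks_touched(row))   # all 8 banks -> conflict-free burst
print("col access hits banks:", banks_touched(col))   # all 8 banks -> conflict-free burst
```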

