Using AI at the Edge and Incremental Machine Learning to Process Onboard Instrument Data

2021 ◽  
Author(s):  
Nicholas Parkyn

Emerging heterogeneous computing, edge computing, machine learning and AI-at-the-edge technologies drive new approaches and techniques for processing and analysing onboard instrument data in near real time. The author has used edge computing and neural networks, combined with high-performance heterogeneous computing platforms, to accelerate AI workloads. The heterogeneous computing hardware used is readily available, low cost, delivers impressive AI performance and can run multiple neural networks in parallel. Collecting, processing and learning from onboard instrument data in near real time is not a trivial problem because of the data volumes and the complexities of data filtering, data storage and continual learning. Little research has been done on continual machine learning, which aims at a higher level of machine intelligence by giving artificial agents the ability to learn from a non-stationary, never-ending stream of data. The author has applied the concept of continual learning to build a system that continually learns from actual boat performance and refines predictions previously made using static Velocity Prediction Program (VPP) data. The neural networks used are initially trained on the output of traditional VPP software and continue to learn from data collected under real sailing conditions. The author will present the system design, the AI and edge computing techniques used, and the approaches he has researched for incremental training to realise continual learning.
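The abstract does not include code; the following is a minimal sketch of the incremental-training idea it describes (pre-train on static VPP output, then keep refining the same model from streamed onboard measurements), using scikit-learn's partial_fit as a stand-in. The library choice, feature set and all names are illustrative assumptions, not the author's implementation.

```python
# Sketch: pre-train a regressor on VPP tables, then continually refine it from live instrument data.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical offline data: features (true wind speed, true wind angle) -> VPP-predicted boat speed
vpp_X = np.random.rand(5000, 2) * [25.0, 180.0]
vpp_y = 0.4 * vpp_X[:, 0] * np.sin(np.radians(vpp_X[:, 1]))   # stand-in for VPP output

scaler = StandardScaler().fit(vpp_X)
model = SGDRegressor(learning_rate="constant", eta0=0.01)
model.partial_fit(scaler.transform(vpp_X), vpp_y)              # initial training from VPP data

def on_new_instrument_batch(batch_X, batch_y):
    """Called as each batch of real sailing data arrives at the edge device."""
    model.partial_fit(scaler.transform(batch_X), batch_y)      # continual refinement

def predict_boat_speed(wind_speed, wind_angle):
    return model.predict(scaler.transform([[wind_speed, wind_angle]]))[0]
```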

2012 ◽  
Vol 433-440 ◽  
pp. 4565-4570
Author(s):  
Guo Sheng Xu

For the project described in this article, an FPGA-based image capture and processing system is proposed. A low-cost, high-performance FPGA is selected as the main core, and the design of the whole system, both software and hardware, is implemented. The system provides high-speed data collection, high-speed video data compression, real-time network transmission of the video data and real-time storage of the compressed picture data. The processed data are transferred to a PC over USB 2.0 in real time to reconstruct microscopic images of defects. Experimental results show that the algorithm and scheme proposed in this paper are correct and feasible.


Author(s):  
Petar Radanliev ◽  
David De Roure ◽  
Kevin Page ◽  
Max Van Kleek ◽  
Omar Santos ◽  
...  

Multiple governmental agencies and private organisations have made commitments to the colonisation of Mars. Such colonisation requires complex systems and infrastructure that could be very costly to repair or replace in the event of cyber-attacks. This paper surveys deep learning algorithms, IoT cyber security and risk models, and established mathematical formulas to identify the best approach for developing a dynamic and self-adapting system for predictive cyber risk analytics supported by Artificial Intelligence and Machine Learning and real-time intelligence in edge computing. The paper presents a new mathematical approach for integrating concepts of cognition engine design, edge computing, and Artificial Intelligence and Machine Learning to automate anomaly detection. The engine instigates a step change by applying Artificial Intelligence and Machine Learning embedded at the edge of IoT networks to deliver safe and functional real-time intelligence for predictive cyber risk analytics. This will enhance capacities for risk analytics and assist in creating a comprehensive and systematic understanding of the opportunities and threats that arise when edge computing nodes are deployed and when Artificial Intelligence and Machine Learning technologies are migrated to the periphery of the internet and into local IoT networks.
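The paper's cognition engine is not reproduced here; as a hedged illustration of the general idea of automated anomaly detection on an edge node, the sketch below uses scikit-learn's IsolationForest on hypothetical telemetry features. The feature set and thresholds are assumptions for illustration only.

```python
# Illustrative edge-side anomaly detection on IoT telemetry (not the paper's engine).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry window: columns = packet rate, CPU load, temperature
baseline = np.random.normal(loc=[1000.0, 0.4, 45.0], scale=[50.0, 0.05, 1.5], size=(2000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def is_anomalous(sample):
    """Return True if the new telemetry sample is flagged as anomalous."""
    return detector.predict(np.asarray(sample).reshape(1, -1))[0] == -1

print(is_anomalous([1020.0, 0.42, 45.3]))   # typical reading: likely False
print(is_anomalous([9000.0, 0.95, 70.0]))   # far outside the baseline: flagged True
```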


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hold back the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of the neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then recast the Fermi–Dirac distribution as a correction function for normalising the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time on a low-cost hardware architecture. Although global histogram equalization has the lowest computational time among the correction functions employed, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
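As a rough illustration of a Fermi–Dirac-shaped intensity correction (not the paper's exact algorithm), the sketch below passes voxel intensities through 1 / (exp((mu - I) / T) + 1), squashing them into (0, 1). The parameter choices (mu from the median intensity, T from the intensity spread) are assumptions for illustration.

```python
# Hedged sketch of a Fermi–Dirac-style voxel intensity correction.
import numpy as np

def fermi_dirac_correction(volume, temperature_scale=0.1):
    """Normalize a 3-D image volume with a Fermi–Dirac-shaped correction function."""
    nonzero = volume[volume > 0]                                      # ignore background voxels
    mu = np.median(nonzero)                                           # "chemical potential": mid-intensity level
    T = temperature_scale * (nonzero.max() - nonzero.min() + 1e-8)    # "temperature": softness of the cut-off
    corrected = 1.0 / (np.exp((mu - volume) / T) + 1.0)
    corrected[volume == 0] = 0.0                                      # keep background suppressed
    return corrected

# Example on a random stand-in for an MRI-like volume
vol = np.random.gamma(shape=2.0, scale=100.0, size=(16, 16, 16))
out = fermi_dirac_correction(vol)
print(out.min(), out.max())
```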


Author(s):  
Sridharan Chandrasekaran ◽  
G. Suresh Kumar

Rate of Penetration (ROP) is one of the important factors influencing drilling efficiency. Since cost recovery is an important bottom line in the drilling industry, optimizing ROP is essential to minimize drilling operational and capital costs. Traditional empirical models do not adapt to new lithology changes, so their predictive accuracy is low and subjective. With advances in big data technologies, real-time data storage costs have fallen and the availability of real-time data has improved. This study shows that optimization methods combined with data models have immense potential for predicting ROP from real-time measurements on the rig. A machine learning data model is developed using the real-time operational parameters recorded while drilling offset vertical wells. Data pre-processing and feature engineering methods transform the raw data into processed data so that the model learns effectively from the inputs. A multi-layer back-propagation neural network is developed, cross-validated and compared with field measurements and empirical models.
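A minimal sketch of the kind of model described, a multi-layer back-propagation network for ROP prediction with k-fold cross-validation, is shown below. The feature names, data and network size are hypothetical placeholders, not the authors' model.

```python
# Illustrative ROP prediction from drilling parameters with an MLP and 5-fold cross-validation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical offset-well data: weight on bit, RPM, flow rate, torque, depth -> measured ROP
rng = np.random.default_rng(0)
X = rng.uniform([5, 60, 400, 2, 500], [25, 180, 1200, 15, 3500], size=(1000, 5))
y = 0.8 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 2, 1000)   # stand-in for measured ROP

model = make_pipeline(
    StandardScaler(),                                          # scale features before the network
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R^2:", scores.mean())
```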


2022 ◽  
Vol 201 ◽  
pp. 110881
Author(s):  
Xiaoxi Mi ◽  
Lianjuan Tian ◽  
Aitao Tang ◽  
Jing Kang ◽  
Peng Peng ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3144 ◽  
Author(s):  
Sherif Said ◽  
Ilyes Boulkaibet ◽  
Murtaza Sheikh ◽  
Abdullah S. Karar ◽  
Samer Alkork ◽  
...  

In this paper, a customizable wearable 3D-printed bionic arm is designed, fabricated, and optimized for a right-arm amputee. An experimental test was conducted with the user, in which control of the artificial bionic hand was accomplished successfully using surface electromyography (sEMG) signals acquired by a multi-channel wearable armband. The 3D-printed bionic arm was designed at a low cost of 295 USD and is lightweight at 428 g. To facilitate generic control of the bionic arm, sEMG data were collected for a set of gestures (fist, spread fingers, wave-in, wave-out) from a wide range of participants. The collected data were processed, and features related to the gestures were extracted for the purpose of training a classifier. In this study, several classifiers based on neural networks, support vector machines, and decision trees were constructed, trained, and statistically compared. The support vector machine classifier was found to exhibit an 89.93% success rate. Real-time testing of the bionic arm with the optimum classifier is demonstrated.
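The sketch below illustrates the classification stage described above: simple time-domain features extracted from windowed multi-channel sEMG and fed to an SVM. The window length, feature set and data shapes are assumptions for illustration, not the study's exact pipeline.

```python
# Hedged sketch: sEMG gesture classification with hand-crafted features and an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def emg_features(window):
    """window: (samples, channels) raw sEMG; returns RMS and mean absolute value per channel."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])

# Hypothetical dataset: 400 windows of 200 samples x 8 armband channels, 4 gesture labels
rng = np.random.default_rng(1)
windows = rng.normal(size=(400, 200, 8))
labels = rng.integers(0, 4, size=400)        # 0=fist, 1=spread, 2=wave-in, 3=wave-out

X = np.array([emg_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```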


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3021 ◽  
Author(s):  
Zeba Idrees ◽  
Zhuo Zou ◽  
Lirong Zheng

With the swift growth of commerce and transportation in modern civilisation, much attention has been paid to air quality monitoring; however, existing monitoring systems are unable to provide sufficient spatial and temporal resolution of the data in a cost-efficient, real-time way. In this paper we investigate the issues, infrastructure, computational complexity and procedures involved in designing and implementing real-time air quality monitoring systems. To overcome the shortcomings of existing monitoring systems and to decrease the overall cost, this paper devises a novel approach to implementing an air quality monitoring system that employs edge-computing-based Internet of Things (IoT). In the proposed method, sensors gather the air quality data in real time and transmit it to an edge computing device, which performs the necessary processing and analysis. The complete infrastructure and evaluation prototype are built on the Arduino board and the IBM Watson IoT platform. Our model is structured so that it reduces the computational burden on the battery-powered sensing nodes (reduced to 70%) and shifts it to the edge computing device, which has its own local database and can be powered directly since it is deployed indoors. Algorithms are employed to avoid temporary errors in the low-cost sensors and to manage cross-sensitivity problems. Automatic calibration is set up to ensure the accuracy of the sensors' reporting, achieving data accuracy of around 75–80% under different circumstances. In addition, a data transmission strategy is applied to minimise redundant network traffic and power consumption, as illustrated in the sketch below. Our model achieves a power consumption reduction of up to 23% at significantly low cost. Experimental evaluations were performed under different scenarios to validate the system's effectiveness.
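One common form of such a redundancy-suppressing transmission strategy is change-based reporting: the sensing node only forwards a reading when it differs meaningfully from the last transmitted value. The sketch below illustrates that idea; the thresholds, field names and publish() callback are hypothetical, not the paper's implementation.

```python
# Illustrative change-based transmission: forward a field only when it moved past its threshold.
import time

THRESHOLDS = {"pm2_5": 2.0, "co2": 25.0, "temp": 0.3}   # per-sensor change thresholds (assumed units)
_last_sent = {}

def maybe_transmit(reading, publish):
    """reading: dict of sensor values; publish: callable that sends data to the edge node."""
    changed = {
        k: v for k, v in reading.items()
        if abs(v - _last_sent.get(k, float("inf"))) >= THRESHOLDS.get(k, 0.0)
    }
    if changed:
        publish({**changed, "ts": time.time()})   # send only the fields that changed
        _last_sent.update(changed)

# Example: the second call changes nothing beyond the thresholds, so nothing is sent
maybe_transmit({"pm2_5": 12.1, "co2": 600.0, "temp": 22.4}, publish=print)
maybe_transmit({"pm2_5": 12.6, "co2": 605.0, "temp": 22.5}, publish=print)   # dropped
maybe_transmit({"pm2_5": 18.0, "co2": 700.0, "temp": 22.5}, publish=print)
```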


2019 ◽  
Vol 72 (04) ◽  
pp. 917-930
Author(s):  
Fang-Shii Ning ◽  
Xiaolin Meng ◽  
Yi-Ting Wang

Connected and Autonomous Vehicles (CAVs) have been researched extensively for solving traffic issues and for realising the concept of an intelligent transport system. A well-developed positioning system is critical for CAVs to achieve these aims. The system should provide high accuracy, mobility, continuity, flexibility and scalability. However, high-performance equipment is too expensive for the commercial use of CAVs; therefore, the use of a low-cost Global Navigation Satellite System (GNSS) receiver to achieve real-time, high-accuracy and ubiquitous positioning performance will be a future trend. This research used RTKLIB software to develop a low-cost GNSS receiver positioning system and assessed the developed positioning system according to the requirements of CAV applications. Kinematic tests were conducted to evaluate the positioning performance of the low-cost receiver in a CAV driving environment based on the accuracy requirements of CAVs. The results showed that the low-cost receiver satisfied the “Where in Lane” accuracy level (0·5 m) and achieved a similar positioning performance in rural, interurban, urban and motorway areas.
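The assessment itself is not reproduced here; as a hedged illustration of checking a solution against the "Where in Lane" level (0·5 m), the sketch below compares position fixes with a reference trajectory and reports a horizontal error statistic. The flat-earth conversion, synthetic data and 95th-percentile metric are illustrative assumptions, not the study's method.

```python
# Sketch: horizontal error of GNSS fixes against a reference trajectory, checked against 0.5 m.
import numpy as np

EARTH_R = 6378137.0  # metres

def horizontal_errors(lat_fix, lon_fix, lat_ref, lon_ref):
    """Inputs in degrees; returns per-epoch horizontal error in metres (small-area approximation)."""
    lat0 = np.radians(lat_ref.mean())
    dn = np.radians(lat_fix - lat_ref) * EARTH_R
    de = np.radians(lon_fix - lon_ref) * EARTH_R * np.cos(lat0)
    return np.hypot(dn, de)

# Hypothetical epochs: receiver fixes versus a reference trajectory
rng = np.random.default_rng(2)
lat_ref = 52.95 + np.cumsum(rng.normal(0, 1e-6, 600))
lon_ref = -1.15 + np.cumsum(rng.normal(0, 1e-6, 600))
lat_fix = lat_ref + rng.normal(0, 1e-6, 600)
lon_fix = lon_ref + rng.normal(0, 1e-6, 600)

err = horizontal_errors(lat_fix, lon_fix, lat_ref, lon_ref)
print("95th-percentile horizontal error (m):", np.percentile(err, 95))
print("meets 'Where in Lane' (0.5 m)?", np.percentile(err, 95) <= 0.5)
```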

