Compensating Resource Fluctuations by Means of Evolvable Hardware

Author(s):  
Paul Kaufmann ◽  
Kyrre Glette ◽  
Marco Platzner ◽  
Jim Torresen

The evolvable hardware (EHW) paradigm facilitates the construction of autonomous systems that can adapt to environmental changes and to the degradation of computational resources. In this work, the authors extend the EHW principle to architectural adaptation and study the capability of evolvable hardware classifiers to adapt to intentional run-time fluctuations in the available resources, i.e., chip area. To that end, they leverage the Functional Unit Row (FUR) architecture, a coarse-grained reconfigurable classifier, and apply it to two medical benchmarks, the Pima and Thyroid data sets from the UCI Machine Learning Repository. While quick recovery from architectural changes has already been demonstrated for the FUR architecture, the authors also introduce two reconfiguration schemes that help reduce the magnitude of degradation after architectural reconfiguration.
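A minimal sketch of the kind of recover-after-reconfiguration loop described above, assuming a (1+4) evolution strategy over a row-based genome; all names, sizes, and the toy fitness function are illustrative assumptions, not the authors' FUR implementation:

```python
# Sketch: re-adapt a row-based classifier genome after the available
# chip area (number of functional-unit rows) shrinks at run time.
import random

GENES_PER_ROW = 4
ROWS_BEFORE, ROWS_AFTER = 8, 5     # assumed area budget, in FU rows

def random_genome(rows):
    return [[random.randint(0, 15) for _ in range(GENES_PER_ROW)]
            for _ in range(rows)]

def classify(genome, x):
    # Stand-in for evaluating the encoded circuit on a sample.
    return (sum(map(sum, genome)) + x) % 2

def fitness(genome, data):
    return sum(classify(genome, x) == y for x, y in data) / len(data)

def mutate(genome, rate=0.1):
    return [[random.randint(0, 15) if random.random() < rate else g
             for g in row] for row in genome]

def recover(genome, data, generations=200):
    """(1+4) evolution strategy: re-evolve after rows were removed."""
    parent = genome[:ROWS_AFTER]            # truncate to the new budget
    best = fitness(parent, data)
    for _ in range(generations):
        for child in (mutate(parent) for _ in range(4)):
            f = fitness(child, data)
            if f >= best:
                parent, best = child, f
    return parent, best

data = [(x, x % 2) for x in range(20)]
parent, best = recover(random_genome(ROWS_BEFORE), data)
```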

2021 ◽  
Author(s):  
Chady Ghnatios ◽  
George El Haber ◽  
Jean-Louis Duval ◽  
Mustapha Ziane ◽  
Francisco Chinesta

The need to solve industrial problems with faster and less computationally expensive techniques is becoming a requirement to cope with the ongoing digital transformation of most industries. Recently, data has been conquering the domain of engineering for different purposes: (i) defining data-driven models of materials, processes, structures and systems whose physics-based models, when they exist, remain too inaccurate; (ii) enriching existing physics-based models within the so-called hybrid paradigm; and (iii) using advanced machine learning and artificial intelligence techniques for scale bridging (upscaling), that is, for creating models that operate at the coarse-grained scale (cheaper with respect to computational resources) while integrating the fine-scale richness. The present work addresses the last item, aiming at enhancing standard structural models (defined on 2D shell geometries) to account for all the fine-scale details (3D with rich through-the-thickness behaviors). For this purpose, two main strategies are combined: (i) the in-plane-out-of-plane proper generalized decomposition (PGD), serving to provide the fine-scale richness; and (ii) advanced machine learning techniques able to learn the regression relating the input parameters to those high-resolution detailed descriptions.
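A minimal sketch of the second strategy, assuming offline PGD solutions supply pairs of coarse input parameters and through-the-thickness enrichment coefficients; the synthetic data, dimensions, and choice of a random forest regressor are illustrative assumptions:

```python
# Sketch: regress from coarse shell-model parameters to through-the-
# thickness enrichment coefficients. In the paper these training pairs
# would come from in-plane-out-of-plane PGD solutions; here they are
# synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Assumed setup: 3 shell-level inputs (e.g. load, thickness, material
# ratio) -> 10 coefficients of through-thickness modes.
X = rng.uniform(size=(500, 3))
Y = np.stack([np.sin(X @ rng.uniform(size=3)) * (k + 1)
              for k in range(10)], axis=1)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, Y)

# Online phase: a cheap 2D shell computation supplies new parameters,
# and the learned regression returns the 3D fine-scale enrichment.
x_new = rng.uniform(size=(1, 3))
enrichment = model.predict(x_new)   # shape (1, 10)
```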


2020 ◽  
Author(s):  
Dianbo Liu

BACKGROUND Applications of machine learning (ML) in health care can have a great impact on people's lives. At the same time, medical data sets are usually large, requiring a significant amount of computational resources. While this may not hinder the wide adoption of ML tools in developed nations, computational resources can be limited in developing nations and on mobile devices. This can prevent many people from benefiting from advances in ML applications for healthcare. OBJECTIVE In this paper we explore three methods to increase the computational efficiency of either a recurrent neural network (RNN) or a feedforward (deep) neural network (DNN) without compromising accuracy. We use in-patient mortality prediction on an intensive care data set as our case study. METHODS We reduced the size of the RNN and DNN by pruning "unused" neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the total number of recurrent layers, lowering the total number of parameters in the network. Finally, we applied quantization to the DNN, forcing the weights to be 8 bits instead of 32 bits. RESULTS We found that all methods increased implementation efficiency, including training speed, memory footprint and inference speed, without reducing the accuracy of mortality prediction. CONCLUSIONS These improvements allow the implementation of sophisticated NN algorithms on devices with limited computational resources.
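A minimal sketch of two of the three methods, pruning of low-magnitude weights and 8-bit quantization, shown here with PyTorch on a toy feedforward network; the layer sizes and pruning amount are illustrative assumptions, not the authors' mortality-prediction models:

```python
# Sketch: magnitude pruning plus 8-bit dynamic quantization of a small DNN.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# 1) Prune 30% of the smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# 2) Quantize weights from 32-bit floats to 8-bit integers for inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print(quantized(x))   # smaller model, faster CPU inference
```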


2021 ◽  
Vol 34 (2) ◽  
pp. 541-549 ◽  
Author(s):  
Leihong Wu ◽  
Ruili Huang ◽  
Igor V. Tetko ◽  
Zhonghua Xia ◽  
Joshua Xu ◽  
...  

2021 ◽  
Vol 13 (13) ◽  
pp. 2433
Author(s):  
Shu Yang ◽  
Fengchao Peng ◽  
Sibylle von Löwis ◽  
Guðrún Nína Petersen ◽  
David Christian Finger

Doppler lidars are used worldwide for wind monitoring and, recently, also for the detection of aerosols. Automatic algorithms that classify the signals retrieved from lidar measurements are very useful for end-users. In this study, we explore the value of machine learning for classifying backscattered signals from Doppler lidars using data from Iceland. We combined supervised and unsupervised machine learning algorithms with conventional lidar data processing methods and trained two models to filter out noise and classify Doppler lidar observations into different classes, including clouds, aerosols and rain. The results reveal high accuracy for noise identification and for aerosol and cloud classification; precipitation detection, however, is underestimated. The method was tested on data sets from two instruments under different weather conditions, including three dust storms during the summer of 2019. Our results reveal that this method can provide efficient, accurate and real-time classification of lidar measurements. Accordingly, we conclude that machine learning can open new opportunities for lidar data end-users, such as aviation safety operators, to monitor dust in the vicinity of airports.
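A minimal sketch of the two-stage idea, assuming an unsupervised noise screen followed by a supervised classifier; the feature set and synthetic data are illustrative stand-ins for the Icelandic lidar observations:

```python
# Sketch: unsupervised noise filtering, then supervised classification
# of lidar range gates into cloud / aerosol / rain.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))        # [snr, log backscatter, velocity]
y = rng.integers(0, 3, size=2000)     # 0=cloud, 1=aerosol, 2=rain

# Stage 1: unsupervised noise screening -- cluster on SNR and keep the
# cluster with the higher mean SNR as "signal".
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X[:, :1])
means = [X[km.labels_ == k, 0].mean() for k in (0, 1)]
mask = km.labels_ == int(np.argmax(means))

# Stage 2: supervised classification of the remaining gates.
clf = RandomForestClassifier(n_estimators=100, random_state=1)
clf.fit(X[mask], y[mask])
labels = clf.predict(X[mask])
```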


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4805
Author(s):  
Saad Abbasi ◽  
Mahmoud Famouri ◽  
Mohammad Javad Shafiee ◽  
Alexander Wong

Human operators often diagnose industrial machinery via anomalous sounds. Given recent advances in machine learning, automated acoustic anomaly detection can lead to reliable maintenance of machinery. However, deep learning-driven anomaly detection methods often require an extensive amount of computational resources, prohibiting their deployment in factories. Here we explore a machine-driven design exploration strategy to create OutlierNets, a family of highly compact deep convolutional autoencoder network architectures featuring as few as 686 parameters, model sizes as small as 2.7 KB, and as few as 2.8 million FLOPs, with a detection accuracy matching or exceeding published architectures with as many as 4 million parameters. The architectures were deployed on an Intel Core i5 as well as an ARM Cortex A72 to assess performance on hardware that is likely to be used in industry. Experimental results on the models' latency show that the OutlierNet architectures can achieve as much as 30x lower latency than published networks.
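A minimal sketch of the underlying detection scheme, a compact convolutional autoencoder whose reconstruction error serves as the anomaly score; the layer sizes are illustrative, not the machine-designed OutlierNet architectures:

```python
# Sketch: flag a machine sound as anomalous when its spectrogram cannot
# be reconstructed well by an autoencoder trained on normal sounds.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 4, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(4, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(8, 4, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(4, 1, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyAE()
spec = torch.randn(1, 1, 32, 32)          # stand-in mel spectrogram patch
recon = model(spec)
error = torch.mean((spec - recon) ** 2)   # anomaly score
is_anomalous = error.item() > 1.0         # threshold tuned on normal data
```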


2021 ◽  
pp. 1-36
Author(s):  
Henry Prakken ◽  
Rosa Ratsma

This paper proposes a formal top-level model for explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally on three data sets. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the approach is motivated by legal decision making, it also applies to other kinds of decision making, such as commercial decisions about loan applications or employee hiring, as long as the outcome is binary and the input conforms to this paper's factor or dimension format. The model is top-level in that it can be extended with more refined accounts of similarities and differences between cases. It is shown to overcome several limitations of similar argumentation-based explanation models, which only have binary features and do not represent the tendency of features towards particular outcomes. The results of the experimental evaluation studies indicate that the model may be feasible in practice, but that further development and experimentation are needed to confirm its usefulness as an explanation model. The main challenges here are selecting from a large number of possible explanations, reducing the number of features in the explanations, and adding more meaningful information to them. It also remains to be investigated how suitable our approach is for explaining non-linear models.
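A minimal sketch of the factor-based comparison at the core of such explanations, assuming a binary loan-approval outcome; the factors and cases are invented for illustration and omit the paper's dimensions and tendency refinements:

```python
# Sketch: explain a predicted outcome by citing a precedent case and
# listing relevant similarities and differences in terms of factors.
PRO = {"stable_income", "low_debt", "long_employment"}   # favor approval
CON = {"missed_payments", "short_credit_history"}        # favor rejection

def explain(focus, precedent, outcome):
    """Cite a precedent with the given outcome and compare factor sets."""
    pro_outcome = PRO if outcome == "approve" else CON
    con_outcome = CON if outcome == "approve" else PRO
    return {
        "cited_outcome": outcome,
        # shared factors that argue for following the precedent
        "relevant_similarities": focus & precedent & pro_outcome,
        # pro-outcome factors the precedent had but the focus case lacks
        "precedent_only": (precedent - focus) & pro_outcome,
        # con-outcome factors the focus case adds over the precedent
        "focus_only": (focus - precedent) & con_outcome,
    }

focus = {"stable_income", "short_credit_history"}
precedent = {"stable_income", "low_debt"}
print(explain(focus, precedent, "approve"))
```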


2021 ◽  
Vol 11 (5) ◽  
pp. 2177
Author(s):  
Zuo Xiang ◽  
Patrick Seeling ◽  
Frank H. P. Fitzek

With the growing number of computer vision and object detection application scenarios, those requiring ultra-low service latency have become increasingly prominent, e.g., autonomous and connected vehicles or smart city applications. Incorporating machine learning through the application of trained models in these scenarios can pose a computational challenge. The softwarization of networks provides opportunities to incorporate computing into the network, increasing flexibility by distributing workloads through offloading from client and edge nodes over in-network nodes to servers. In this article, we present an example of splitting the inference component of the YOLOv2 trained machine learning model between client, network, and server-side processing to reduce the overall service latency. Assuming a client has 20% of the server's computational resources, we observe a more than 12-fold reduction in service latency with our service split compared to on-client processing, and a speed increase of more than 25% compared to performing everything on the server. Our approach is not only applicable to object detection but can also be applied to a broad variety of machine learning-based applications and services.
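A minimal sketch of split inference, assuming a small stand-in CNN rather than YOLOv2; the split point and shapes are illustrative assumptions:

```python
# Sketch: run the first layers of a trained network on the client (or an
# in-network node) and the rest on the server, shipping only the
# intermediate activation across the network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 56 * 56, 20),
)
SPLIT = 3                      # chosen so the client does a small share

client_head = model[:SPLIT]    # deployed on the client / edge node
server_tail = model[SPLIT:]    # deployed on the server

frame = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    activation = client_head(frame)      # computed on the client
    # ... activation would be serialized and sent over the network here ...
    detections = server_tail(activation) # computed on the server
```

The split point trades client compute against payload size: cutting after an early pooling layer keeps the client's share of the work small while the activation shipped to the server is already smaller than the raw frame.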


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chinmay P. Swami ◽  
Nicholas Lenhard ◽  
Jiyeon Kang

Prosthetic arms can significantly increase the upper limb function of individuals with upper limb loss; however, despite the development of various multi-DoF prosthetic arms, the rate of prosthesis abandonment remains high. One of the major challenges is to design a multi-DoF controller with the high precision, robustness, and intuitiveness required for daily use. The present study demonstrates a novel framework for developing a controller that leverages machine learning algorithms and movement synergies to implement natural control of a 2-DoF prosthetic wrist for activities of daily living (ADL). Data were collected while ten individuals performed ADL tasks wearing a wrist brace that emulated the absence of wrist function. Using these data, a neural network classifies the movement, and a random forest regression then computes the desired velocity of the prosthetic wrist. The models were trained and tested on ADLs, and their robustness was assessed using cross-validation and holdout data sets. The proposed framework demonstrated high accuracy (an F1 score of 99% for the classifier and a Pearson's correlation of 0.98 for the regression). Additionally, the interpretable nature of random forest regression was used to verify the targeted movement synergies. The present work provides a novel and effective framework for developing intuitive control of multi-DoF prosthetic devices.
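A minimal sketch of the two-stage controller, assuming scikit-learn stand-ins (an MLP classifier and a random forest regressor) and synthetic data in place of the recorded ADL kinematics:

```python
# Sketch: classify the ongoing movement, then regress a 2-DoF wrist
# velocity command from the same inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))         # e.g. residual-limb kinematics
movement = rng.integers(0, 4, 1000)    # ADL task label
velocity = rng.normal(size=(1000, 2))  # [flexion/extension, deviation]

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=2).fit(X, movement)
reg = RandomForestRegressor(n_estimators=100, random_state=2).fit(X, velocity)

x_t = X[:1]
task = clf.predict(x_t)[0]             # which movement is active
v_cmd = reg.predict(x_t)[0]            # commanded 2-DoF wrist velocity

# The forest's feature importances can be inspected to check which
# inputs drive the command, mirroring the synergy verification step.
print(task, v_cmd, reg.feature_importances_)
```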


2020 ◽  
pp. 1-17
Author(s):  
Francisco Javier Balea-Fernandez ◽  
Beatriz Martinez-Vega ◽  
Samuel Ortega ◽  
Himar Fabelo ◽  
Raquel Leon ◽  
...  

Background: Sociodemographic data indicate a progressive increase in life expectancy and in the prevalence of Alzheimer's disease (AD). AD has emerged as one of the greatest public health problems. Its etiology is twofold: non-modifiable factors on the one hand, and modifiable factors on the other. Objective: This study aims to develop a processing framework based on machine learning (ML) and optimization algorithms to study sociodemographic, clinical, and analytical variables, selecting the best combination among them for an accurate discrimination between controls and subjects with major neurocognitive disorder (MNCD). Methods: This research is based on an observational-analytical design. Two research groups were established: an MNCD group (n = 46) and a control group (n = 38). ML and optimization algorithms were employed to automatically diagnose MNCD. Results: Twelve out of 37 variables were identified in the validation set as the most relevant for MNCD diagnosis. A sensitivity of 100% and a specificity of 71% were achieved using a Random Forest classifier. Conclusion: ML is a potential tool for the automatic prediction of MNCD that can be applied to relatively small preclinical and clinical data sets. These results can be interpreted as supporting the influence of the environment on the development of AD.
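A minimal sketch of such a pipeline, assuming recursive feature elimination as the variable-selection step (the paper's own optimization algorithms are not detailed here) and synthetic data in place of the clinical cohort:

```python
# Sketch: select a small subset of variables, train a Random Forest,
# and report sensitivity and specificity on held-out subjects.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=84, n_features=37, n_informative=12,
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=3)

# Select 12 of 37 variables, mirroring the reported selection.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=3),
               n_features_to_select=12).fit(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=100, random_state=3)
clf.fit(selector.transform(X_tr), y_tr)
tn, fp, fn, tp = confusion_matrix(
    y_te, clf.predict(selector.transform(X_te))).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```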

