DEVELOPMENT OF MACHINE LEARNING ALGORITHMS FOR MEMRISTOR MODEL PARAMETERS DEFINITION

Author(s):  
Evgeniy Shamin ◽  
Evgeniy Gornev ◽  
Dmitriy Zhevnenko ◽  
Fedor Meshchaninov ◽  
Vladislav Kozhevnikov

This work is dedicated to the development of algorithms that predict memristor model parameters from features of its current-voltage characteristic with the help of machine learning. An algorithm for extracting current-voltage characteristic features is described, and an attempt is made to examine their relationship with the parameters of a modified Yakopcic model.
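As a rough illustration of what such feature extraction could look like, the sketch below computes a few generic descriptors of a pinched hysteresis loop (loop area, current extrema); the feature set and the synthetic loop are assumptions for demonstration, not the authors' actual choices:

```python
import numpy as np

def iv_features(v, i):
    """Extract simple descriptive features from one I-V sweep.
    The features here are illustrative, not the authors' exact set."""
    # Enclosed hysteresis-loop area via the trapezoidal rule on the closed contour.
    loop_area = abs(np.sum((i[1:] + i[:-1]) * np.diff(v)) / 2)
    return {
        "loop_area": float(loop_area),
        "i_max": float(i.max()),
        "i_min": float(i.min()),
        "v_at_imax": float(v[np.argmax(i)]),
    }

# Synthetic pinched hysteresis loop for demonstration.
t = np.linspace(0, 2 * np.pi, 400)
v = np.sin(t)
i = v * (1.0 + 0.5 * np.sin(2 * t))  # memristor-like pinched loop
feats = iv_features(v, i)
```

These scalar descriptors would then serve as inputs to the machine learning model that maps them to model parameters.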

Author(s):  
Evgeniy Shamin ◽  
Dmitriy Zhevnenko ◽  
Fedor Meshchaninov ◽  
Vladislav Kozhevnikov ◽  
Evgeniy Gornev

The focus of this work is an algorithm for extracting memristor model parameters from experimentally obtained current-voltage characteristics. The problem of finding an initial guess for this algorithm based on current-voltage characteristic features is stated and solved by means of machine learning algorithms.
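A minimal sketch of the initial-guess idea, under heavy simplification: a sinh-type I-V model stands in for the far richer modified Yakopcic model, and a nearest-neighbour lookup over precomputed feature-parameter pairs stands in for the trained predictor; a least-squares refinement then starts from that guess. All model forms and features here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(v, a, b):
    # Toy stand-in I-V model; the modified Yakopcic model is far richer.
    return a * np.sinh(b * v)

rng = np.random.default_rng(0)
v = np.linspace(-1, 1, 100)

# Tiny "training set": sampled parameters mapped to simple curve features.
params = rng.uniform([0.5, 0.5], [2.0, 3.0], size=(200, 2))
curves = [model(v, a, b) for a, b in params]
feats = np.array([[c.max(), np.gradient(c, v)[50]] for c in curves])

def initial_guess(curve):
    # Nearest-neighbour lookup: the simplest possible "learned" guess.
    f = np.array([curve.max(), np.gradient(curve, v)[50]])
    return params[np.argmin(np.linalg.norm(feats - f, axis=1))]

true_a, true_b = 1.3, 2.1
noisy = model(v, true_a, true_b) + rng.normal(0, 0.01, v.size)
p0 = initial_guess(noisy)
popt, _ = curve_fit(model, v, noisy, p0=p0)  # refine the ML-style guess
```

A good starting point matters because local optimizers like `curve_fit` can otherwise stall in poor local minima for strongly nonlinear memristor models.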


Author(s):  
A. Ereshchenko

The goal of this work is to explore the possibility of using machine learning algorithms for modeling the current-voltage characteristic of a memristor, using the modeling of HfO2- and TiO2-based memristors as an example. The possibility of combining several mathematical models based on the predictions of the individual models is investigated. The results obtained with the combined model are compared with the predictions of the individual models and with experimental data.
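One simple way to combine several models' predictions, sketched here as an assumption (the abstract does not specify the combination rule), is inverse-error weighting of the individual outputs:

```python
import numpy as np

def blend(preds, errors):
    """Combine component-model predictions, weighting each model by the
    inverse of its validation error (an illustrative choice)."""
    w = 1.0 / np.asarray(errors, dtype=float)
    w /= w.sum()                       # normalize weights to sum to 1
    return sum(wi * p for wi, p in zip(w, preds))

p1 = np.array([1.0, 2.0, 3.0])         # hypothetical model A predictions
p2 = np.array([1.2, 1.8, 3.4])         # hypothetical model B predictions
combined = blend([p1, p2], errors=[0.1, 0.3])   # A is trusted 3x more
```

With errors 0.1 and 0.3, model A receives weight 0.75 and model B weight 0.25, so the blend leans toward the historically more accurate model.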


2004 ◽  
Vol 27 (2) ◽  
pp. 61-67
Author(s):  
S. Dib ◽  
C. Salame ◽  
N. Toufik ◽  
A. Khoury ◽  
F. Pélanchon ◽  
...  

A new method for the extraction of junction parameters from a description of the current–voltage characteristic is developed. A simulation is performed and high accuracy is obtained in determining the single-exponential model parameters. The method is easy to implement in a control process for device characterization. An application, carried out to observe the degradation of the emitter–base junction of a bipolar transistor during an aging experiment, shows that the evolution of the single-exponential model parameters over time provides a means of quantifying degradation.
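For the single-exponential junction model I = Is(exp(V/(nVt)) − 1), the parameters can be extracted from a straight-line fit of ln I versus V in the region V ≫ Vt; the sketch below demonstrates this standard approach on synthetic data (it is a generic illustration, not necessarily the paper's exact procedure):

```python
import numpy as np

VT = 0.02585  # thermal voltage at ~300 K, in volts

def extract(v, i):
    """Recover (Is, n) of I = Is*(exp(V/(n*VT)) - 1) from the linear
    region of ln(I) vs V, valid for V >> VT."""
    slope, intercept = np.polyfit(v, np.log(i), 1)
    n = 1.0 / (slope * VT)    # ideality factor from the slope
    i_s = np.exp(intercept)   # saturation current from the intercept
    return i_s, n

# Synthetic forward-bias characteristic with known parameters.
v = np.linspace(0.4, 0.7, 50)
i_true = 1e-12 * (np.exp(v / (1.8 * VT)) - 1)
i_s, n = extract(v, i_true)
```

Because the fit is a linear least-squares problem, it is cheap and stable enough to run inside an automated characterization loop, which is the property the abstract emphasizes.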


Author(s):  
Jakub Gęca

The consequences of failures and unscheduled maintenance are the reasons why engineers have been trying to increase the reliability of industrial equipment for years. In modern solutions, predictive maintenance is a frequently used method: it makes it possible to forecast failures and warn of their likelihood. This paper presents a summary of the machine learning algorithms that can be used in predictive maintenance and a comparison of their performance. The analysis was made on the basis of a data set from the Microsoft Azure AI Gallery. The paper presents a comprehensive approach to the issue, including feature engineering, preprocessing, dimensionality reduction techniques, and tuning of model parameters in order to obtain the highest possible performance. The research showed that, in the analysed case, the best algorithm achieved 99.92% accuracy on over 122 thousand test data records. In conclusion, predictive maintenance based on machine learning represents the future of machine reliability in industry.
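A condensed sketch of such a pipeline (scaling, PCA for dimensionality reduction, and grid-searched model tuning) on synthetic imbalanced data standing in for the Azure telemetry set; the components and parameter grid are illustrative, not the paper's exact configuration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for telemetry features; failures are the rare class.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),               # preprocessing
    ("pca", PCA(n_components=10)),             # dimensionality reduction
    ("clf", RandomForestClassifier(random_state=0)),
])
# Tune hyperparameters by cross-validated grid search.
search = GridSearchCV(pipe, {"clf__n_estimators": [50, 100]}, cv=3)
search.fit(X_tr, y_tr)
acc = search.score(X_te, y_te)                 # held-out accuracy
```

Note that on imbalanced failure data, accuracy alone can be misleading (a majority-class predictor already scores high), so precision/recall on the failure class is usually reported alongside it.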


2020 ◽  
Author(s):  
Alex J. C. Witsil

Volcanoes are dangerous and complex with processes coupled to both the subsurface and atmosphere. Effective monitoring of volcanic behavior during and in between periods of crisis requires a diverse suite of instruments and processing routines. Acoustic microphones and video cameras are typical in long-term deployments and provide important constraints on surficial and observational activity yet are underutilized relative to their seismic counterpart. This dissertation increases the utility of infrasound and video datasets through novel applications of computer vision and machine learning algorithms, which help constrain source dynamics and track shifts in activity. Data analyzed come from infrasound and camera installations at Stromboli Volcano, Italy and Villarrica Volcano, Chile and are diverse in terms of the recorded activity. At Villarrica, a computer vision algorithm quantifies video data into a set of characteristic features that are used in a multiparametric analysis with seismic and infrasound data to constrain activity during a period of crisis in 2015. Video features are also input into a machine learning algorithm that classifies data into five modes of activity, which helps track behavior over weekly and monthly time scales. At Stromboli, infrasound signals radiating from the multiple active vents are synthesized into characteristic features and then clustered via an unsupervised learning algorithm. Time histories of cluster activity at each vent reveal concurrent shifts in behavior that suggest a linked plumbing system between the vents. The algorithms presented are general and modular and can be implemented at monitoring agencies that already collect acoustic and video data.
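The unsupervised clustering step can be sketched as follows, with hypothetical per-event feature vectors standing in for the synthesized infrasound features (the real pipeline derives these from waveforms, and the cluster count would be chosen from the data):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-event infrasound features (e.g. peak pressure, duration,
# dominant frequency), drawn from two synthetic "vent" populations.
rng = np.random.default_rng(1)
vent_a = rng.normal([1.0, 2.0, 5.0], 0.1, size=(100, 3))
vent_b = rng.normal([3.0, 0.5, 8.0], 0.1, size=(100, 3))
X = np.vstack([vent_a, vent_b])

# Unsupervised grouping of events; no vent labels are used.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Plotting the time history of how often each cluster fires per vent is then what reveals concurrent behavioral shifts across vents.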


2021 ◽  
Vol 21 (8) ◽  
pp. 2379-2405
Author(s):  
Luigi Cesarini ◽  
Rui Figueiredo ◽  
Beatrice Monteleone ◽  
Mario L. V. Martina

Abstract. Weather index insurance is an innovative tool in risk transfer for disasters induced by natural hazards. This paper proposes a methodology that uses machine learning algorithms for the identification of extreme flood and drought events, aimed at reducing the basis risk connected to this kind of insurance mechanism. The model types selected for this study were the neural network and the support vector machine, widely adopted for classification problems, which were built by exploring thousands of possible configurations based on combinations of different model parameters. The models were developed and tested in the Dominican Republic context, based on data from multiple sources covering the period between 2000 and 2019. Using rainfall and soil moisture data, the machine learning algorithms provided a strong improvement over logistic regression models, used as a baseline for both hazards. Furthermore, increasing the amount of information provided during training proved beneficial to performance, increasing classification accuracy and confirming the ability of these algorithms to exploit big data and their potential for application within index insurance products.
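A toy version of the model comparison, with synthetic features standing in for the rainfall and soil-moisture predictors; the two configurations shown are illustrative, not the thousands actually explored:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for rainfall/soil-moisture predictors with binary
# extreme-event labels; the real inputs span 2000-2019 in the Dominican Republic.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           class_sep=0.8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Logistic regression as the baseline, an RBF-kernel SVM as one candidate.
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
svm = SVC(kernel="rbf", C=10).fit(X_tr, y_tr).score(X_te, y_te)
```

In an index-insurance setting, each misclassification maps directly to basis risk: a false negative is a loss without a payout, a false positive a payout without a loss.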


2021 ◽  
Author(s):  
Luigi Cesarini ◽  
Rui Figueiredo ◽  
Beatrice Monteleone ◽  
Mario Martina

A steady increase in the frequency and severity of extreme climate events has been observed in recent years, causing losses amounting to billions of dollars. Floods and droughts are responsible for almost half of those losses, severely affecting people's livelihoods in the form of damaged property and goods, and even loss of life. Weather index insurance is an innovative tool in risk transfer for disasters induced by natural hazards. In this type of insurance, payouts are triggered when an index calculated from one or multiple environmental variables exceeds a predefined threshold. Thus, contrary to traditional insurance, it does not require costly and time-consuming post-event loss assessments. Its ease of application makes it an ideal solution for developing countries, where fast payouts in the wake of a catastrophic event would guarantee the survival of an economic sector, for example by providing the monetary resources farmers need to withstand a prolonged period of extreme temperatures. The main obstacle to a wider application of this type of insurance mechanism stems from the so-called basis risk, which arises when a loss event takes place but a payout is not issued, or vice versa.

This study proposes and tests the application of machine learning algorithms for the identification of extreme flood and drought events in the context of weather index insurance, with the aim of reducing basis risk. Neural networks and support vector machines, widely adopted for classification problems, are employed, exploring thousands of possible configurations based on combinations of different model parameters. The models were developed and tested in the Dominican Republic context, leveraging datasets from multiple sources with low latency and covering the period between 2000 and 2019. Using rainfall (GSMaP, CMORPH, CHIRPS, CCS, PERSIANN and IMERG) and soil moisture (ERA5) data, the machine learning algorithms provided a strong improvement over logistic regression models, used as a baseline for both hazards. Furthermore, increasing the amount of information provided during model training proved beneficial to performance, improving classification accuracy and confirming the ability of these algorithms to exploit big data. Results highlight the potential of machine learning for application within index insurance products.


Air pollution refers to the release of various pollutants into the air that threaten human health and the planet alike, and it is among the most dangerous problems humanity has ever faced. It causes major damage to animals, plants, and people, and if it continues unchecked, serious consequences will follow in the coming years. The major pollutants come from transport and industry, so to address the problem these sectors must be able to predict air quality. Existing approaches have notable disadvantages: one prior project estimates PM2.5 concentration with a photograph-based method, but photography alone is insufficient because it captures only one pollutant concentration, leaving out the other major pollutants and the information needed to control pollution. We therefore propose machine learning techniques delivered through a GUI application. Multiple datasets from different sources are combined into a generalized dataset, and various machine learning algorithms are applied to obtain results with maximum accuracy; by comparing the algorithms, the best-performing one is identified. Our evaluation provides a comprehensive guide to the sensitivity of model parameters with regard to overall performance in predicting air-quality pollutants through accuracy calculation, and additionally discusses and compares the performance of the machine learning algorithms on the dataset via the GUI-based air-quality prediction interface.
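A compressed sketch of the proposed comparison of algorithms on a combined dataset, using synthetic features and a synthetic PM2.5 target as stand-ins for the real multi-source data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: meteorological and traffic features -> PM2.5 level.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))
pm25 = 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] ** 2 + rng.normal(0, 0.2, 500)

models = {
    "linear": LinearRegression(),
    "forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
# Mean cross-validated R^2 per model; the best one would back the GUI.
scores = {name: cross_val_score(m, X, pm25, cv=5).mean() for name, m in models.items()}
best = max(scores, key=scores.get)
```

Cross-validated scoring, rather than a single train/test split, gives a more stable basis for picking the algorithm exposed through the application.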


2021 ◽  
Author(s):  
Andrew Falkowski ◽  
Steven Kauwe ◽  
Taylor Sparks

Traditional, data-driven materials discovery involves screening chemical systems with machine learning algorithms and selecting candidates that excel in a target property. The number of screening candidates grows unboundedly as the fractional resolution of compositions and the number of included elements increase, and the computational infeasibility and the probability of overlooking a successful candidate grow likewise. Our approach shifts the optimization focus from model parameters to the fractions of each element in a composition. Using a pretrained network, CrabNet, and a custom loss function governing a vector of element fractions, compositions can be optimized so that a predicted property is maximized or minimized. Single- and multi-property optimization examples are presented that highlight the capabilities and robustness of this approach to inverse design.
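The fraction-optimization idea can be sketched with a toy differentiable property predictor in place of CrabNet; parameterizing the fractions through a softmax keeps them positive and summing to one, and gradient ascent on the predicted property then drives the composition. All quantities below are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical "best" fractions; the toy predictor peaks exactly there.
target = np.array([0.6, 0.3, 0.1])

def property_pred(x):
    # Toy differentiable stand-in for a trained predictor such as CrabNet.
    return -np.sum((x - target) ** 2)

z = np.zeros(3)                    # unconstrained logits, one per element
lr, eps = 2.0, 1e-6
for _ in range(500):
    grad = np.zeros(3)
    for k in range(3):             # finite-difference gradient w.r.t. the logits
        dz = np.zeros(3)
        dz[k] = eps
        grad[k] = (property_pred(softmax(z + dz))
                   - property_pred(softmax(z - dz))) / (2 * eps)
    z += lr * grad                 # gradient ascent maximizes the property
x_opt = softmax(z)                 # optimized element fractions
```

With a real network the gradients would come from automatic differentiation rather than finite differences; the softmax trick is what lets an unconstrained optimizer search the composition simplex directly instead of enumerating candidates.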


2021 ◽  
Author(s):  
Baki Harish ◽  
Sandeep Chinta ◽  
Chakravarthy Balaji ◽  
Balaji Srinivasan

The Indian subcontinent is prone to tropical cyclones that originate in the Indian Ocean and cause widespread destruction to life and property. Accurate prediction of cyclone track, landfall, wind, and precipitation is critical to minimizing damage. The Weather Research and Forecasting (WRF) model is widely used to predict tropical cyclones. The accuracy of the model's predictions depends on initial conditions, physics schemes, and model parameters. Parameter values are selected empirically by scheme developers through trial and error, which means they are sensitive to climatological conditions and regions. The WRF model has several hundred tunable parameters, and calibrating all of them is infeasible since it would require thousands of simulations. Therefore, sensitivity analysis is critical to screen out the parameters that significantly impact the meteorological variables. The Sobol' sensitivity analysis method is used to identify the sensitive WRF model parameters. As this method requires a considerable number of samples to evaluate sensitivity adequately, machine learning algorithms are used to construct surrogate models trained on a limited number of samples; these can generate the vast number of required pseudo-samples. Five machine learning algorithms, namely Gaussian Process Regression (GPR), Support Vector Machine, Regression Tree, Random Forest, and K-Nearest Neighbor, are considered in this study. Ten-fold cross-validation is used to evaluate the surrogate models constructed with the five algorithms and to identify the most robust among them. Samples generated from this surrogate model are then used by the Sobol' method to evaluate WRF model parameter sensitivity.
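The surrogate-plus-Sobol' workflow can be sketched with a linear toy model standing in for WRF and a least-squares surrogate standing in for the five candidate algorithms; a pick-freeze estimator then yields first-order indices from cheap surrogate evaluations:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # Stand-in for a costly WRF run: two "parameters", linear response.
    return 3 * x[:, 0] + x[:, 1]

# Step 1: a limited number of expensive runs trains a cheap surrogate.
X_train = rng.uniform(size=(50, 2))
y_train = expensive_model(X_train)
design = np.column_stack([X_train, np.ones(50)])
coef, *_ = np.linalg.lstsq(design, y_train, rcond=None)

def surrogate(x):
    return x @ coef[:2] + coef[2]

# Step 2: abundant surrogate evaluations feed a pick-freeze Sobol' estimate.
N = 100_000
A = rng.uniform(size=(N, 2))
B = rng.uniform(size=(N, 2))
yA = surrogate(A)
var, mean = yA.var(), yA.mean()
S = []
for i in range(2):
    BA = B.copy()
    BA[:, i] = A[:, i]          # the two evaluations share only parameter i
    S.append(float((np.mean(yA * surrogate(BA)) - mean ** 2) / var))
# For this toy model the analytic first-order indices are S1 = 0.9, S2 = 0.1.
```

The surrogate absorbs the sampling burden: only the 50 training runs are "expensive", while the 100,000 pseudo-samples needed by the Sobol' estimator come essentially for free.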

