Machine Learning Based Device Simulation Using Multi-variable Non-linear Regression to Assess the Impact of Device Parameter Variability on Threshold Voltage of Double Gate-All-Around (DGAA) MOSFET

Author(s):  
Sandeep Moparthi ◽  
Chandan Yadav ◽  
Gopi Krishna Saramekala ◽  
Pramod Kumar Tiwari


Mathematics ◽
2021 ◽  
Vol 9 (4) ◽  
pp. 299
Author(s):  
Jaime Pinilla ◽  
Miguel Negrín

Interrupted time series analysis is a quasi-experimental design used to evaluate the effectiveness of an intervention. Segmented linear regression models have been the most commonly used models for this analysis. However, they assume a linear trend that may not be appropriate in many situations. In this paper, we show how generalized additive models (GAMs), a non-parametric regression-based method, can accommodate non-linear trends. An analysis with simulated data is carried out to assess the performance of both models. Data were simulated from linear and non-linear (quadratic and cubic) functions. The results of this analysis show that GAMs improve on segmented linear regression models when the trend is non-linear, while also performing well when the trend is linear. A real-life application, the impact of the 2012 Spanish cost-sharing reforms on pharmaceutical prescriptions, is also analyzed. Seasonality and an indicator variable for the stockpiling effect are included as explanatory variables. The segmented linear regression model fits the data well; however, the GAM rejects the hypothesis of a linear trend. The estimated level shift is similar for both models, but the cumulative absolute effect on the number of prescriptions is lower in the GAM.
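As a rough illustration of the two model families compared here, the sketch below fits a segmented OLS model and a GAM to a simulated interrupted time series. The data, intervention time and the use of the pygam package are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: interrupted time series fitted with segmented OLS and a GAM.
# Column names, data and the intervention point are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from pygam import LinearGAM, s, l

rng = np.random.default_rng(0)
n, t0 = 120, 60                        # 120 monthly points, intervention at month 60
t = np.arange(n)
post = (t >= t0).astype(float)         # level-change indicator
t_after = np.clip(t - t0, 0, None)     # time since intervention (slope change)
y = 100 + 0.2 * t - 8 * post - 0.1 * t_after + rng.normal(0, 2, n)
df = pd.DataFrame({"y": y, "t": t, "post": post, "t_after": t_after})

# Segmented linear regression: assumes linear trends before and after the intervention.
ols = smf.ols("y ~ t + post + t_after", data=df).fit()

# GAM: smooth term for the underlying trend, linear terms for level and slope change.
X = df[["t", "post", "t_after"]].to_numpy()
gam = LinearGAM(s(0) + l(1) + l(2)).fit(X, df["y"])

print("OLS level change:", round(ols.params["post"], 2))
print("OLS MAE:", np.mean(np.abs(ols.resid)).round(2),
      "| GAM MAE:", np.mean(np.abs(df["y"] - gam.predict(X))).round(2))
```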


2021 ◽  
Vol 46 (1) ◽  
Author(s):  
C. E. Chigbundu ◽  
K. O. Adebowale

Dyes are complex and sensitive organic chemicals that expose microbial populations, aquatic life and other living organisms to their toxic effects if their presence in water bodies or industrial effluents is not properly handled. This work therefore comparatively studied the adsorption efficiencies of a natural raw kaolinite (NRK) clay adsorbent and a dimethyl sulphoxide (DMSO) facilely intercalated kaolinite clay (DIK) adsorbent for the batch adsorption of Basic Red 2 (BR2) dye. The impact of varying the contact time, temperature and other operating variables on adsorption was also considered. The two adsorbents were characterized using SEM images, FTIR spectra and XRD patterns. Linear and non-linear regression analyses of different isotherm and kinetic models were used to identify the appropriate fits to the experimental data, and error analysis equations were used to measure the goodness-of-fit. The Langmuir isotherm model best described the adsorption as monolayer coverage on homogeneous surfaces, while kinetic studies showed that the Elovich model provided the best fit to the experimental data. The adsorption capacities of the NRK and DIK adsorbents for the uptake of BR2 were 16.30 mg/g and 32.81 mg/g, respectively (linear regression), and 19.30 mg/g and 30.81 mg/g, respectively (non-linear regression). The thermodynamic parameter ∆G showed that BR2 dye adsorption onto the adsorbents was spontaneous. The DIK adsorbent was twice as efficient as NRK for the uptake of BR2 dye.
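A minimal sketch of the isotherm-fitting step described above, on invented equilibrium data: the Langmuir model is fitted both through its linearised form and by direct non-linear least squares, which is why the two approaches can return different capacity estimates.

```python
# Illustrative sketch (made-up equilibrium data): Langmuir isotherm fitted two ways,
# by its linearised form and by direct non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def langmuir(Ce, qmax, KL):
    """Monolayer adsorption on a homogeneous surface: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # mg/L, assumed values
qe = np.array([8.1, 13.2, 19.5, 25.0, 29.0, 31.2])    # mg/g, assumed values

# Linearised Langmuir: Ce/qe = Ce/qmax + 1/(qmax*KL)
slope, intercept, r, *_ = linregress(Ce, Ce / qe)
qmax_lin, KL_lin = 1.0 / slope, slope / intercept

# Non-linear fit of the original isotherm equation
(qmax_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.05])

print(f"linear fit:     qmax={qmax_lin:.1f} mg/g, KL={KL_lin:.3f} L/mg (R^2={r**2:.3f})")
print(f"non-linear fit: qmax={qmax_nl:.1f} mg/g, KL={KL_nl:.3f} L/mg")
```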


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Agni Orfanoudaki ◽  
Amre M Nouh ◽  
Emma Chesley ◽  
Christian Cadisch ◽  
Barry Stein ◽  
...  

Background: Current stroke risk assessment tools presume that the impact of risk factors is linear and cumulative. However, both novel risk factors and their interplay in stroke incidence are difficult to reveal using traditional linear models. Objective: To improve upon the Revised Framingham Stroke Risk Score and design an interactive non-linear Stroke Risk Score (NSRS). Our work aimed at increasing the accuracy of event prediction and uncovering new relationships in an interpretable, user-friendly fashion. Methods: A two-phase approach was used to develop our stroke risk score predictor. First, clinical examinations of the Framingham offspring cohort were used as the training dataset for the predictive model, consisting of 14,196 samples where each clinical examination was considered an independent observation. Optimal Classification Trees (OCT) were used to train a model to predict 10-year stroke risk. Second, this model was validated with 17,527 observations from the Boston Medical Center. The NSRS was developed into an online user-friendly application in the form of a questionnaire (http://www.mit.edu/~agniorf/files/questionnaire_Cohort2.html). Results: The algorithm suggests a key dichotomy between patients with and without a history of cardiovascular disease. While the model agrees with known findings, it also identified 23 unique stroke risk profiles and introduced new non-linear relationships, such as the role of T-wave abnormality on electrocardiography and hematocrit levels in a patient’s risk profile. Our results in both the training and validation populations suggested that the non-linear approach significantly improves upon the existing revised Framingham stroke risk calculator in the c-statistic (training 87.43% (CI 0.85-0.90) vs. 73.74% (CI 0.70-0.76); validation 75.29% (CI 0.74-0.76) vs. 65.93% (CI 0.64-0.67)), even in multi-ethnic populations. Conclusions: We constructed a highly predictive, interpretable and user-friendly stroke risk calculator using novel machine learning, uncovering new risk factors, interactions and unique profiles. The clinical implications include prioritization of risk factor modification and personalized care, improving targeted intervention for stroke prevention.
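Optimal Classification Trees are a licensed method; the hedged sketch below substitutes an ordinary CART tree from scikit-learn on synthetic data to illustrate the general idea of an interpretable, non-linear risk model scored by the c-statistic. All features, coefficients and thresholds are invented placeholders, not the NSRS.

```python
# Stand-in sketch: a CART decision tree (not OCT) as an interpretable non-linear
# risk model on synthetic "stroke" data. Features and outcome are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.integers(40, 90, n),          # age
    rng.integers(0, 2, n),            # history of cardiovascular disease
    rng.integers(0, 2, n),            # T-wave abnormality on ECG
    rng.normal(42, 5, n),             # hematocrit (%)
])
logit = -6 + 0.05 * X[:, 0] + 1.2 * X[:, 1] + 0.8 * X[:, 2] + 0.03 * (X[:, 3] - 42) ** 2
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic 10-year stroke outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=100).fit(X_tr, y_tr)

print("c-statistic:", round(roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]), 3))
print(export_text(tree, feature_names=["age", "cvd_history", "t_wave_abn", "hematocrit"]))
```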


Crystals ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 1143
Author(s):  
Maximilian W. Feil ◽  
Andreas Huerner ◽  
Katja Puschkarsky ◽  
Christian Schleich ◽  
Thomas Aichinger ◽  
...  

Silicon carbide is an emerging material in the field of wide band gap semiconductor devices. Due to its high critical breakdown field and high thermal conductance, silicon carbide MOSFET devices are well suited for high-power applications. However, the concentration of defects with short capture and emission time constants is orders of magnitude higher than in silicon technologies, which introduces threshold voltage dynamics in the volt regime even on very short time scales. Measurements are heavily affected by the timing of readouts and by the gate voltage applied before and during the measurement. As a consequence, device parameter determination is not as reproducible as in silicon technologies, and the resulting challenges for engineers and researchers in measuring device parameters have to be evaluated. In this study, we show how the threshold voltage of planar and trench silicon carbide MOSFET devices from several manufacturers reacts to short gate pulses of different lengths and voltages, and how these pulses influence the outcome of application-relevant pulsed current-voltage characteristics. Measurements are performed via a feedback loop allowing in-situ tracking of the threshold voltage with a measurement delay time of only 1 μs. Device preconditioning, recently suggested to enable reproducible BTI measurements, is investigated in the context of device parameter determination by varying the voltage and the length of the preconditioning pulse.
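As a loosely related sketch of the device-parameter readout discussed above (not the authors' feedback-loop method), the snippet below extracts a constant-current threshold voltage from two synthetic ID-VG sweeps and shows how a bias-history-dependent sweep translates into an apparent Vth shift; currents, voltages and the 1 µA criterion are assumed values.

```python
# Hypothetical sketch: constant-current threshold-voltage extraction from a
# measured ID-VG sweep. Data are invented; only the extraction idea is shown.
import numpy as np

def vth_constant_current(vg, id_, i_crit=1e-6):
    """Gate voltage at which the drain current crosses i_crit (A),
    found by linear interpolation of log10(ID) versus VG."""
    log_id = np.log10(np.clip(id_, 1e-30, None))
    return float(np.interp(np.log10(i_crit), log_id, vg))

vg = np.linspace(0.0, 6.0, 61)                 # gate sweep, V
id_fast = 1e-9 * np.exp((vg - 2.8) / 0.12)     # readout shortly after the gate pulse
id_slow = 1e-9 * np.exp((vg - 3.0) / 0.12)     # readout after the defects have charged

dvth = vth_constant_current(vg, id_slow) - vth_constant_current(vg, id_fast)
print(f"apparent threshold-voltage shift from readout timing: {dvth * 1000:.0f} mV")
```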


2018 ◽  
Vol 47 (2) ◽  
pp. 403-412 ◽  
Author(s):  
Shijie Zhou ◽  
Amir AbdelWahab ◽  
John L. Sapp ◽  
James W. Warren ◽  
B. Milan Horáček

2016 ◽  
Vol 16 (13) ◽  
pp. 8181-8191 ◽  
Author(s):  
Jani Huttunen ◽  
Harri Kokkola ◽  
Tero Mielonen ◽  
Mika Esa Juhani Mononen ◽  
Antti Lipponen ◽  
...  

Abstract. In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge of past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure of aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to extend the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table (LUT) method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to those of the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations, with the lowest correlation coefficient, 0.87, obtained by the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, the neural network and support vector machine showed better correspondence over the whole AOD range. The differences at both ends of the AOD range seem to be caused by differences in aerosol composition. High AODs were in most cases associated with high water vapour content, which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that the machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This also means that machine learning methods could have potential in reproducing AOD from SSR even if the SSA has changed during the observation period.
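A hedged sketch of two of the machine-learning retrievals named above (random forest and support vector regression) on synthetic inputs; the real study trains on pyranometer SSR and AERONET AOD from Thessaloniki, so the variables and relationships here are placeholders.

```python
# Illustrative sketch (synthetic data): regressing AOD from surface solar radiation
# and ancillary inputs with two of the ML methods mentioned in the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 3000
ssr = rng.uniform(200, 1000, n)     # surface solar radiation, W m-2 (assumed)
sza = rng.uniform(20, 70, n)        # solar zenith angle, degrees (assumed)
h2o = rng.uniform(0.5, 4.0, n)      # water vapour column, cm (assumed)
aod = np.clip(1.5 - 0.0012 * ssr + 0.004 * sza + 0.05 * h2o
              + rng.normal(0, 0.05, n), 0.02, None)

X = np.column_stack([ssr, sza, h2o])
X_tr, X_te, y_tr, y_te = train_test_split(X, aod, test_size=0.3, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "correlation with 'observed' AOD:", round(np.corrcoef(y_te, pred)[0, 1], 3))
```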


2021 ◽  
Vol 27 (5) ◽  
pp. 1057-1071
Author(s):  
Martina Cernikova ◽  
Sarka Hyblerova

The article evaluates the impact of tax support for R&D on the volume of R&D outputs generated by companies. The number of patent applications was chosen as the metric of business R&D output. Both linear dependence, using linear regression, and non-linear dependence, using decision trees, were examined within the research. The significance of indirect support in the context of other sources of funding for companies' R&D activities was primarily assessed, by examining the dependence of the number of patent applications on the individual sources financing Business Enterprise Expenditure on R&D. Even after scaling the variables, the research confirmed that, in the period under review, the strongest dependence for all surveyed countries was between the number of patent applications and the financial resources provided by the business enterprise sector. Subsequently, a model excluding the impact of business enterprise sector resources was created. Of the three remaining variables considered, the analysis showed the strongest dependence of the number of patent applications on the amount of indirect support. The research thus indicates that the impact of tax support on the volume of relevant R&D outputs is relatively significant.
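A small illustrative sketch, with invented country-level data, of the two model families used in the article: a linear regression and a decision tree relating patent applications to standardised R&D funding sources. Variable names and magnitudes are assumptions.

```python
# Hedged sketch (invented data): linear vs. tree-based dependence of patent
# applications on R&D funding sources, with variables standardised before fitting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 200
business_funds = rng.gamma(5, 100, n)      # business enterprise sector funding
government_funds = rng.gamma(3, 40, n)     # direct government support
indirect_support = rng.gamma(2, 30, n)     # R&D tax support
abroad_funds = rng.gamma(2, 20, n)         # funds from abroad
patents = (0.8 * business_funds + 0.4 * indirect_support
           + 0.1 * government_funds + rng.normal(0, 40, n))

X = StandardScaler().fit_transform(
    np.column_stack([business_funds, government_funds, indirect_support, abroad_funds]))

lin = LinearRegression().fit(X, patents)
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(X, patents)

print("standardised linear coefficients:", np.round(lin.coef_, 1))
print("tree feature importances:        ", np.round(tree.feature_importances_, 2))
```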


2021 ◽  
Author(s):  
Wei Qiu ◽  
Hugh Chen ◽  
Ayse Berceste Dincer ◽  
Su-In Lee

Explainable artificial intelligence provides an opportunity to improve prediction accuracy over standard linear models using “black box” machine learning (ML) models while still revealing insights into a complex outcome such as all-cause mortality. We propose the IMPACT (Interpretable Machine learning Prediction of All-Cause morTality) framework that implements and explains complex, non-linear ML models in epidemiological research, by combining a tree ensemble mortality prediction model and an explainability method. We use 133 variables from NHANES 1999–2014 datasets (number of samples: n = 47,261) to predict all-cause mortality. To explain our model, we extract local (i.e., per-sample) explanations to verify well-studied mortality risk factors, and make new discoveries. We present major factors for predicting x-year mortality (x = 1, 3, 5) across different age groups and their individualized impact on mortality prediction. Moreover, we highlight interactions between risk factors associated with mortality prediction, which leads to findings that linear models do not reveal. We demonstrate that compared with traditional linear models, tree-based models have unique strengths such as: (1) improving prediction power, (2) making no distribution assumptions, (3) capturing non-linear relationships and important thresholds, (4) identifying feature interactions, and (5) detecting different non-linear relationships between models. Given the popularity of complex ML models in prognostic research, combining these models with explainability methods has implications for further applications of ML in medical fields. To our knowledge, this is the first study that combines complex ML models and state-of-the-art feature attributions to explain mortality prediction, which enables us to achieve higher prediction accuracy and gain new insights into the effect of risk factors on mortality.
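A rough sketch of the modelling pattern described above: a tree-ensemble classifier paired with per-sample attributions from the shap package's TreeExplainer. The estimator choice, features and synthetic data are placeholders rather than the IMPACT pipeline itself.

```python
# Placeholder sketch: tree-ensemble mortality model with local (per-sample)
# explanations. Features, coefficients and data are invented, not NHANES.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
n = 4000
age = rng.integers(20, 85, n)
sbp = rng.normal(125, 18, n)               # systolic blood pressure
crp = rng.gamma(2, 2, n)                   # C-reactive protein
logit = -9 + 0.09 * age + 0.015 * (sbp - 120) + 0.2 * crp
died_5y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic 5-year outcome

X = np.column_stack([age, sbp, crp])
model = GradientBoostingClassifier(random_state=0).fit(X, died_5y)

# Local attributions: which variables push this individual's predicted risk up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(dict(zip(["age", "sbp", "crp"], np.round(shap_values[0], 3))))
```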


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 590
Author(s):  
Alexis Lozano ◽  
Pedro Cabrera ◽  
Ana M. Blanco-Marigorta

Technological innovations are not enough by themselves to achieve social and environmental sustainability in companies. Sustainable development aims to determine the environmental impact of a product and the hidden price of products and services through the concept of radical transparency, which means that companies should show and disclose the environmental impact of any good or service. This way, the consumer can choose in a transparent manner, and not only on price. The use of an eco-label such as the European eco-label, whose criteria are based on life cycle assessment, could provide an indicator of corporate social responsibility for a given product. However, it does not fully guarantee that the product was obtained in a sustainable manner. The aim of this work is to provide a way of calculating the value of the environmental impacts of an industrial product under different operating conditions, so that each company can provide detailed information on the impacts of its products, information that can form part of its "green product sheet". As a case study, the daily production of a newspaper printed by coldset has been chosen. Each process involved in production was configured with raw material and energy consumption information from production plants, manufacturer data and existing databases. Four non-linear regression models were trained to estimate the impact of a newspaper’s circulation from five input variables (pages, grammage, height, paper type, and print run), with 5508 data samples each. These non-linear regression models were trained using the Levenberg–Marquardt non-linear least squares algorithm. The mean absolute percentage errors (MAPE) obtained by all the non-linear regression models tested were less than 5%. Through the proposed correlations, it is possible to obtain a score that reports the impact of the product for different operating conditions and several types of raw materials. Ecolabelling can be further developed by incorporating a scoring system for the impact caused by the product or process, using a standardised impact methodology.
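A hedged sketch of multi-variable non-linear regression fitted with the Levenberg-Marquardt algorithm (scipy's default least-squares method for unbounded curve_fit), echoing the approach described above; the model form, variables and data are assumptions, not the paper's published correlations.

```python
# Assumed model form and invented data: non-linear impact correlation fitted
# with Levenberg-Marquardt least squares, scored with MAPE as in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def impact_model(X, a, b, c, d, e):
    pages, grammage, print_run, recycled = X
    # power-law terms for numeric inputs, a correction factor for recycled paper
    return a * pages**b * grammage**c * print_run**d * (1 + e * recycled)

rng = np.random.default_rng(5)
n = 500
pages = rng.integers(24, 97, n).astype(float)
grammage = rng.uniform(40, 60, n)               # g/m^2
print_run = rng.uniform(1e4, 2e5, n)            # copies
recycled = rng.integers(0, 2, n).astype(float)  # paper type indicator
impact = 0.02 * pages * grammage**0.8 * print_run**0.95 * (1 - 0.15 * recycled)
impact *= rng.normal(1.0, 0.03, n)              # 3 % noise

popt, _ = curve_fit(impact_model, (pages, grammage, print_run, recycled), impact,
                    p0=[0.01, 1.0, 1.0, 1.0, 0.0], method="lm")

pred = impact_model((pages, grammage, print_run, recycled), *popt)
mape = 100 * np.mean(np.abs(pred - impact) / impact)
print(f"MAPE: {mape:.2f} %")
```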


2020 ◽  
Vol 12 (5) ◽  
pp. 379-391
Author(s):  
Ihsane Gryech ◽  
Mounir Ghogho ◽  
Hajar Elhammouti ◽  
Nada Sbihi ◽  
Abdellatif Kobbane

The presence of pollutants in the air has a direct impact on our health and causes detrimental changes to our environment, so air quality monitoring is of paramount importance. The high cost of acquiring and maintaining accurate air quality stations implies that only a small number of these stations can be deployed in a country. To improve the spatial resolution of the air monitoring process, an interesting idea is to develop data-driven models that predict air quality from readily available data. In this paper, we investigate the correlations between air pollutant concentrations and meteorological and road traffic data. Using machine learning, regression models are developed to predict pollutant concentrations. Both linear and non-linear models are investigated. It is shown that non-linear models, namely Random Forest (RF) and Support Vector Regression (SVR), better describe the impact of traffic flows and meteorology on the concentrations of pollutants in the atmosphere. It is also shown that more accurate prediction models can be obtained when some pollutants' concentrations are included as predictors. This may be used to infer the concentrations of some pollutants from those of others, thereby reducing the number of air pollution sensors required.
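A minimal sketch, on synthetic data, of the idea of adding a co-measured pollutant as a predictor and comparing a linear model with a random forest; the actual study uses monitoring-station, meteorological and traffic data and also evaluates SVR.

```python
# Hedged sketch (synthetic data): predicting NO2 from traffic, meteorology and
# another pollutant's concentration, comparing a linear model with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 2000
traffic = rng.uniform(100, 3000, n)   # vehicles per hour (assumed)
wind = rng.uniform(0.5, 10.0, n)      # wind speed, m/s
temp = rng.uniform(-5, 35, n)         # temperature, deg C
pm10 = rng.uniform(10, 80, n)         # co-measured pollutant used as predictor
no2 = 5 + 0.02 * traffic / wind + 0.3 * pm10 - 0.2 * temp + rng.normal(0, 4, n)

X = np.column_stack([traffic, wind, temp, pm10])
for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    r2 = cross_val_score(model, X, no2, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```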

