Exploring Dielectric Constant and Dissipation Factor of LTCC Using Machine Learning

Materials ◽  
2021 ◽  
Vol 14 (19) ◽  
pp. 5784
Author(s):  
Yu-chen Liu ◽  
Tzu-Yu Liu ◽  
Tien-Heng Huang ◽  
Kuo-Chuang Chiu ◽  
Shih-kang Lin

Low-temperature co-fired ceramics (LTCCs) have been attracting attention due to rapid advances in wireless telecommunications. Low-dielectric-constant (Dk) and low-dissipation-factor (Df) LTCCs enable a low propagation delay and high signal quality. However, the wide ranges of glass and ceramic filler compositions and of processing features in fabricating LTCCs make property modulation difficult via experimental trial-and-error approaches. In this study, we explored Dk and Df values of LTCCs using a machine learning method with a Gaussian kernel ridge regression model. A principal component analysis and k-means methods were initially performed to visually analyze data clustering and to reduce the dimensional complexity. Model assessments, using five-fold cross-validation, residual analysis, and a randomized test, suggest that the proposed Dk and Df models had some predictive ability, that the model selection was appropriate, and that the fittings were not merely numerical artifacts of the rather small data set. A cross-plot analysis and property contour plot were performed to explore potential LTCCs for real applications with Dk and Df values less than 10 and 2 × 10⁻³, respectively, at an operating frequency of 1 GHz. The proposed machine learning models can potentially be utilized to accelerate the design of technology-related LTCC systems.
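As a rough illustration of the modeling step, the sketch below fits a Gaussian kernel ridge regression model in plain NumPy. The features, targets, `gamma`, and `lam` values are invented stand-ins for the paper's composition/processing descriptors, which are not public.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 3))                 # stand-in normalized composition features
y = np.sin(X @ np.array([2.0, -1.0, 0.5]))    # stand-in property values (e.g. Dk)

lam = 1e-3                                    # ridge regularization strength
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual coefficients

def predict(X_new):
    return rbf_kernel(X_new, X) @ alpha

train_rmse = np.sqrt(np.mean((predict(X) - y) ** 2))
```

In practice `gamma` and `lam` would be tuned by the five-fold cross-validation the abstract describes.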

2017 ◽  
Vol 36 (3) ◽  
pp. 267-269 ◽  
Author(s):  
Matt Hall ◽  
Brendon Hall

The Geophysical Tutorial in the October issue of The Leading Edge was the first we've done on the topic of machine learning. Brendon Hall's article ( Hall, 2016 ) showed readers how to take a small data set — wireline logs and geologic facies data from nine wells in the Hugoton natural gas and helium field of southwest Kansas ( Dubois et al., 2007 ) — and predict the facies in two wells for which the facies data were not available. The article demonstrated with 25 lines of code how to explore the data set, then create, train and test a machine learning model for facies classification, and finally visualize the results. The workflow took a deliberately naive approach using a support vector machine model. It achieved a sort of baseline accuracy rate — a first-order prediction, if you will — of 0.42. That might sound low, but it's not untypical for a naive approach to this kind of problem. For comparison, random draws from the facies distribution score 0.16, which is therefore the true baseline.
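The "true baseline" of 0.16 can be reproduced in principle: if predictions are random draws from the facies class distribution, and the true labels follow that same distribution, the expected accuracy is the sum of squared class proportions. The nine proportions below are invented for illustration, not the actual Hugoton facies counts.

```python
import numpy as np

# Illustrative class proportions for nine facies (must sum to 1).
p = np.array([0.20, 0.17, 0.15, 0.12, 0.10, 0.09, 0.07, 0.06, 0.04])

# Expected accuracy of random draws from the distribution: sum_i p_i^2.
expected_acc = float((p ** 2).sum())
```

With the real facies distribution this quantity works out to about 0.16, which is why 0.42 from the naive SVM is a genuine improvement over chance.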


1996 ◽  
Vol 446 ◽  
Author(s):  
Bang Hung Tsao ◽  
Sandra Fries Carr ◽  
Joseph A. Weimer

Abstract BaTiO3 films prepared by RF sputtering were studied for capacitor applications. Some of the films produced have a capacitance density of 0.85 μF/cm², a high resistivity of 10¹⁴ Ω·cm, and a low dissipation factor of 0.005. The dielectric constant of these BaTiO3 films was approximately 10 to 30, which is superior to that of typical polymer film capacitors, and showed little dependence on frequency. However, the breakdown strength of the BaTiO3 was approximately 5 MV/m, whereas the theoretical breakdown strength of BaTiO3 is reported to be as high as 200 MV/m. The processing parameters of BaTiO3 films must be optimized to obtain the potential benefit of the material.


Author(s):  
Daniel Elton ◽  
Zois Boukouvalas ◽  
Mark S. Butrico ◽  
Mark D. Fuge ◽  
Peter W. Chung

We present a proof of concept that machine learning techniques can be used to predict the properties of CNOHF energetic molecules from their molecular structures. We focus on a small but diverse dataset consisting of 109 molecular structures spread across ten compound classes. Up until now, candidate molecules for energetic materials have been screened using predictions from expensive quantum simulations and thermochemical codes. We present a comprehensive comparison of machine learning models and several molecular featurization methods - sum over bonds, custom descriptors, Coulomb matrices, bag of bonds, and fingerprints. The best featurization was sum over bonds (bond counting), and the best model was kernel ridge regression. Despite having a small data set, we obtain acceptable errors and Pearson correlations for the prediction of detonation pressure, detonation velocity, explosive energy, heat of formation, density, and other properties out of sample. By including another dataset with 309 additional molecules in our training we show how the error can be pushed lower, although the convergence with number of molecules is slow. Our work paves the way for future applications of machine learning in this domain, including automated lead generation and interpreting machine learning models to obtain novel chemical insights.
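The best-performing "sum over bonds" featurization is simply bond counting: each molecule becomes a fixed-length vector of counts over the bond types seen in the data set. The toy bond lists below are invented; a real pipeline would derive them from molecular structures with a cheminformatics library.

```python
from collections import Counter

def sum_over_bonds(molecules, vocab=None):
    # Count bond types per molecule, then align counts to a shared vocabulary.
    counts = [Counter(bonds) for bonds in molecules]
    if vocab is None:
        vocab = sorted({b for c in counts for b in c})
    return [[c.get(b, 0) for b in vocab] for c in counts], vocab

mols = [
    ["C-H", "C-H", "C-N", "N=O", "N=O"],   # toy nitro-like fragment
    ["C-H", "C-C", "C-N", "N=O"],
]
X, vocab = sum_over_bonds(mols)
# vocab -> ['C-C', 'C-H', 'C-N', 'N=O'];  X[0] -> [0, 2, 1, 2]
```

The resulting count vectors would then feed the kernel ridge regression model the abstract identifies as the best performer.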


2020 ◽  
Author(s):  
Elzbieta Gralinska ◽  
Martin Vingron

Summary: In molecular biology, just as in many other fields of science, data often come in the form of matrices or contingency tables with many measurements (rows) for a set of variables (columns). While projection methods like Principal Component Analysis or Correspondence Analysis can be applied to obtain an overview of such data, when the matrix is very large the associated loss of information upon projection into two or three dimensions may be dramatic. However, when the set of variables can be grouped into clusters, this opens up a new angle on the data. We focus on the question of which measurements are associated with a cluster and distinguish it from other clusters. Correspondence Analysis employs a geometry geared towards answering this question. We exploit this feature in order to introduce Association Plots for visualizing cluster-specific measurements in complex data. Association Plots are two-dimensional, independent of the size of the data matrix or cluster, and depict the measurements associated with a cluster of variables. We demonstrate our method first on a small data set and then on a genomic example comprising more than 10,000 conditions. We show that Association Plots can clearly highlight the measurements which characterize a cluster of variables.
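Association Plots build on the geometry of standard Correspondence Analysis, which can be sketched in a few lines of NumPy. The contingency table here is synthetic, and the Association Plot construction itself (which goes beyond plain CA) is not reproduced.

```python
import numpy as np

# Synthetic contingency table: 4 measurements (rows) x 3 variables (columns).
N = np.array([[20.,  5.,  2.],
              [ 4., 18.,  3.],
              [ 1.,  6., 15.],
              [10., 10., 10.]])

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                  # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * s) / np.sqrt(r)[:, None]           # principal row coordinates
col_coords = (Vt.T * s) / np.sqrt(c)[:, None]        # principal column coordinates
inertia = (s ** 2).sum()                             # total inertia (chi^2 / n)
```

In the CA geometry, rows lying in the direction of a column cluster are the measurements associated with it, which is the feature Association Plots exploit.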


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Charles Arcodia ◽  
Margarida Abreu Novais ◽  
Nevenka Cavlek ◽  
Andreas Humpe

Purpose: This paper aims to investigate participants' motivations and perceptions of a field trip. Specifically, it examines if and how students' perceptions change with time, and it explores the main factors for ensuring success in an experiential learning tourism program.
Design/methodology/approach: The study gathered and compared data collected at two points in time: immediately at the end of the experience and two months afterward. T-tests for paired samples were used to examine potential differences in perceptions, and principal component analysis was used to identify the key factors determining the success of the experience.
Findings: The findings indicate that there are various motivations behind participation and that time barely affects perceptions of the experience. Furthermore, three factors emerged as important for meeting expectations, namely, social and professional connections, learning, and traditional yet engaging teaching.
Research limitations/implications: While the outcomes are useful, they need to be thoughtfully applied because of the small data set. It is important to repeat similar investigations to allow more certainty in the propositions formulated. Furthermore, future studies should evaluate a broader variety of outcomes to determine whether perceptions remain constant. Educators and destination managers can readily apply these conclusions, and the findings can inform other field trips and broader experiential initiatives.
Originality/value: Although research on learning outcomes and perceptions of experiential learning has expanded considerably, a fundamental question that remains unanswered is how perceptions of such experiences change and, consequently, when the most appropriate time is to assess participant perceptions.
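The paired-samples comparison can be sketched as follows. The Likert-style ratings for eight hypothetical participants are invented for illustration; the study's actual items and sample are not public.

```python
import numpy as np

# Ratings immediately after the field trip vs. two months later (invented).
after = np.array([4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1, 3.7])
later = np.array([4.0, 3.9, 4.4, 4.1, 3.8, 4.3, 4.0, 3.8])

d = after - later                       # per-participant paired differences
n = len(d)
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
# For df = n - 1 = 7, the two-sided 5% critical value is about 2.36; a |t|
# below it is consistent with "time barely affects perceptions".
```

In practice one would run one such test per questionnaire item (e.g. with `scipy.stats.ttest_rel`) and read off the p-values directly.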


2019 ◽  
Author(s):  
Hao Dai ◽  
Yu-Xi Zheng ◽  
Xiao-Qi Shan ◽  
Yan-Yi Chu ◽  
Wei Wang ◽  
...  

Abstract Cytochrome P450 (CYP) is the most important family of drug-metabolizing enzymes in human beings. Each CYP isoform is able to metabolize a large number of compounds, and if patients take more than one drug during treatment, some drugs may be metabolized by the same CYP isoform, leading to potential drug-drug interactions and side effects. Therefore, it is necessary to investigate the isoform specificity of CYP substrates. In this study, we constructed a data set consisting of 10 major CYP isoforms associated with 776 substrates, and used machine learning methods to construct predictive models based on the structural and physicochemical properties of the substrates. We also proposed a new method, called the Improved Bayesian method, which is suitable for small data sets and is able to construct more stable and accurate predictive models than other traditional machine learning models. Based on this method, our models achieved an accuracy of 86% on the independent test, significantly better than existing models. We believe that our proposed method will facilitate the understanding of drug metabolism and help the large-scale analysis of drug-drug interactions.
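The authors' Improved Bayesian method is not specified in the abstract; as a loudly hypothetical stand-in, the sketch below trains a plain Gaussian naive Bayes classifier on toy descriptor vectors for two made-up isoform classes, just to show the shape of the task (substrate features in, isoform label out).

```python
import numpy as np

# Toy data: 4 physicochemical-style descriptors per substrate, two isoforms.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(30, 4))   # substrates of hypothetical isoform A
X1 = rng.normal(2.0, 1.0, size=(30, 4))   # substrates of hypothetical isoform B
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

def fit(X, y):
    # Per-class mean, variance, and prior (standard Gaussian naive Bayes).
    return {k: (X[y == k].mean(0), X[y == k].var(0) + 1e-9, (y == k).mean())
            for k in np.unique(y)}

def predict(params, X):
    scores = []
    for k, (mu, var, prior) in params.items():
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll + np.log(prior))
    return np.argmax(np.array(scores), axis=0)

acc = (predict(fit(X, y), X) == y).mean()
```

The real problem is multi-label (a substrate can be metabolized by several of the 10 isoforms), so one such binary model per isoform would be a closer analogue.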


Author(s):  
Norsyela Muhammad Noor Mathivanan ◽  
Nor Azura Md.Ghani ◽  
Roziah Mohd Janor

The curse of dimensionality and the empty space phenomenon have emerged as critical problems in text classification. One way of dealing with them is to apply a feature selection technique before building a classification model. This helps to reduce the time complexity and can sometimes increase the classification accuracy. This study introduces a feature selection technique using k-means clustering to overcome the weaknesses of traditional feature selection techniques such as principal component analysis (PCA), which require a lot of time to transform all the input data. The proposed technique decides which features to retain based on the significance value of each feature in a cluster. This study found that k-means clustering helps to increase the efficiency of the KNN model for a large data set, while a KNN model without feature selection is suitable for a small data set. A comparison between k-means clustering and PCA as feature selection techniques shows that the proposed technique is better than PCA, especially in terms of computation time. Hence, k-means clustering is helpful in reducing the data dimensionality with less time complexity than PCA, without affecting the accuracy of the KNN model for high-frequency data.
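The core idea can be sketched as follows: cluster the feature (column) vectors with k-means and retain one representative feature per cluster, instead of transforming all inputs as PCA does. The abstract's exact "significance value" is not specified, so distance to the cluster centroid is used here as an assumed proxy.

```python
import numpy as np

def kmeans(V, k, iters=50, seed=0):
    # Plain Lloyd's algorithm on the rows of V.
    rng = np.random.default_rng(seed)
    C = V[rng.choice(len(V), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((V[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = V[labels == j].mean(0)
    return labels, C

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 12))        # 100 samples, 12 features (toy data)
V = X.T                               # one vector per feature
labels, C = kmeans(V, k=4)

selected = []
for j in range(4):
    idx = np.where(labels == j)[0]
    if idx.size:
        d = ((V[idx] - C[j]) ** 2).sum(1)
        selected.append(int(idx[np.argmin(d)]))   # keep feature nearest centroid
X_reduced = X[:, selected]            # reduced data for the KNN model
```

Unlike PCA, the retained columns are original features, so the reduced data stays interpretable and no projection of new inputs is needed.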


2020 ◽  
Vol 34 (08) ◽  
pp. 13148-13155
Author(s):  
Nisha Dalal ◽  
Martin Mølnå ◽  
Mette Herrem ◽  
Magne Røen ◽  
Odd Erik Gundersen

We present a commercially deployed machine learning system that automates the day-ahead nomination of the expected grid loss for a Norwegian utility company. It meets several practical constraints and handles issues related to, among other things, delayed, missing and incorrect data and a small data set. The system incorporates a total of 24 different models that perform forecasts for three sub-grids. Each day, one model is selected to make the hourly day-ahead forecasts for each sub-grid. The deployed system reduces the MAE by 41%, from 3.68 MW to 2.17 MW per hour, from mid-July to mid-October. It is robust and reduces manual work.
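The daily pattern of scoring several candidate models per sub-grid and selecting one can be sketched as below. The forecasts and the "lowest MAE wins" selection rule are invented stand-ins; the deployed system's actual criterion is not described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
actual = 3.0 + rng.normal(0, 0.5, size=24)   # observed hourly grid loss (MW), toy

def mae(pred, obs):
    # Mean absolute error in MW per hour, the metric quoted in the abstract.
    return float(np.abs(pred - obs).mean())

# Hypothetical candidate forecasts for one sub-grid and one day.
forecasts = {
    "persistence": actual + rng.normal(0, 1.0, 24),
    "regression":  actual + rng.normal(0, 0.3, 24),
    "climatology": np.full(24, 3.0),
}
scores = {name: mae(p, actual) for name, p in forecasts.items()}
best = min(scores, key=scores.get)           # model nominated for tomorrow
```

Repeating this per sub-grid (three here, with 24 models in total in the deployed system) yields the daily model nomination the abstract describes.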


2020 ◽  
Vol 27 (10) ◽  
pp. 2721-2757
Author(s):  
Rajat Kumar Behera ◽  
Pradip Kumar Bala ◽  
Rashmi Jain

Purpose: Any business that opts to adopt a recommender engine (RE) for its potential benefits must choose from the candidate solutions by matching them to the task of interest and the domain. The purpose of this paper is to choose the RE that fits best from a set of candidate solutions using a rule-based automated machine learning (ML) approach. The objective is to draw trustworthy conclusions, which results in brand building, a reliable relationship with customers and, ultimately, business growth.
Design/methodology/approach: An experimental quantitative research method was conducted in which the ML model was evaluated with diversified performance metrics and five RE algorithms, combining offline evaluation on historical and simulated movie data sets with online evaluation on a business-like, near-real-time data set to uncover the best-fitting RE.
Findings: The rule-based automated evaluation of REs has changed the testing landscape, removing lengthy manual testing that was never comprehensive. It requires minimal manual effort, yields high-quality results, and could bring a new revolution in testing practice by starting a service line, "Machine Learning Testing as a Service" (MLTaaS), with the possibility of integrating with DevOps, which can specifically help agile teams ship a fail-safe RE evaluation product targeting SaaS (software as a service) or cloud deployment.
Research limitations/implications: A small data set was considered: the A/B phase study covered ten movies from three theaters operating in a single location in India, and the simulation phase study covered two movies from three theaters in the same location. The research was limited to Bollywood and Ollywood movies for the A/B phase, and Ollywood movies for the simulation phase.
Practical implications: The best-fitting RE enables the business to make personalized recommendations, forecast long-term customer loyalty, predict the company's future performance, introduce customers to new products/services, and shape customers' future preferences and behaviors.
Originality/value: The proposed rule-based ML approach, named "2-stage locking evaluation", is self-learning, automated by design, and largely produces a time-bound, conclusive result and an improved decision-making process. It is the first of its kind to examine the business domain and task of interest together. In each stage of the evaluation, low-performing REs are excluded, leading to a time- and cost-optimized solution. Additionally, the combination of offline and online evaluation methods offers benefits such as improved quality through the self-learning algorithm, faster decision-making by significantly reducing manual effort with end-to-end test coverage, cognitive aiding via early feedback and unattended evaluation, and traceability by identifying missing test-metric coverage.
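A staged filtering loop in the spirit of "2-stage locking evaluation" might look like the sketch below: score candidate REs offline, lock out the low performers, then re-score the survivors on near-real-time data. All names, scores, and the median cutoff are invented; the paper's metrics and thresholds are not public.

```python
# Stage 1: offline scores for five hypothetical recommender engines.
offline_scores = {"RE-A": 0.61, "RE-B": 0.48, "RE-C": 0.72,
                  "RE-D": 0.39, "RE-E": 0.66}

# Lock out engines below the median offline score (assumed cutoff rule).
cutoff = sorted(offline_scores.values())[len(offline_scores) // 2]
survivors = {k: v for k, v in offline_scores.items() if v >= cutoff}

# Stage 2: re-evaluate only the survivors on online, near-real-time data.
online_scores = {"RE-A": 0.58, "RE-C": 0.69, "RE-E": 0.70}
best_fit = max((k for k in survivors if k in online_scores),
               key=online_scores.get)
```

Excluding low performers at each stage is what makes the overall evaluation time- and cost-optimized, since only surviving engines incur the more expensive online stage.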

