Efficient General Reflectarray Design and Direct Layout Optimization with a Simple and Accurate Database Using Multilinear Interpolation

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 191
Author(s):  
Daniel R. Prado ◽  
Jesús A. López-Fernández ◽  
Manuel Arrebola

In this work, a simple, efficient and accurate database in the form of a lookup table for use in reflectarray design and direct layout optimization is presented. The database uses N-linear interpolation internally to estimate the reflection coefficients at coordinates that are not stored within it. The speed and accuracy of this approach were measured against the full-wave technique based on local periodicity used to populate the database. In addition, it was compared with a machine learning technique, namely support vector machines applied to regression, under the same conditions, to elucidate the advantages and disadvantages of each technique. The results obtained from the application to the layout design, analysis and crosspolar optimization of a very large reflectarray for space applications show that, despite using a simple N-linear interpolation, the database offers sufficient accuracy while considerably accelerating the overall design process, as long as it is conveniently populated.
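As an illustration of the lookup-table idea, the minimal sketch below (not the authors' code) estimates a reflection coefficient at an unsampled coordinate by N-linear interpolation over a small two-dimensional grid; the grid axes (element size and frequency), the placeholder table values and the query point are assumptions made purely for the example.

```python
# Minimal sketch of an N-linear interpolation lookup table (illustrative only).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical database axes: element size (mm) and frequency (GHz).
sizes = np.linspace(5.0, 12.0, 15)
freqs = np.linspace(28.0, 32.0, 9)

# Placeholder complex reflection coefficients, one per grid node; a real
# database would be populated with full-wave local-periodicity results.
table = np.exp(1j * np.outer(sizes, freqs) * 0.1)

# N-linear (here bilinear) interpolation, applied to the real and imaginary
# parts separately.
interp_re = RegularGridInterpolator((sizes, freqs), table.real, method="linear")
interp_im = RegularGridInterpolator((sizes, freqs), table.imag, method="linear")

# Estimate the coefficient at a coordinate not stored in the table.
query = [[8.3, 29.7]]
rho = interp_re(query)[0] + 1j * interp_im(query)[0]
print(abs(rho), np.angle(rho, deg=True))
```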

2010 ◽  
Vol 139-141 ◽  
pp. 2532-2536 ◽  
Author(s):  
Hou Yao Zhu ◽  
Chun Liang Zhang ◽  
Xia Yue

This paper introduces the basic theory of Hidden Markov Models (HMM) and Support Vector Machines (SVM). HMM has a strong capability for handling dynamic time-series processes and timing pattern classification, particularly for the analysis of non-stationary signals with poor reproducibility; it has good learning and re-learning ability and high adaptability. SVM has strong generalization ability with small samples, which makes it suitable for classification problems that, to a greater extent, reflect the differences between categories. Based on the respective advantages and disadvantages of the two models, this paper presents a hybrid HMM-SVM model. Experiments showed that the HMM-SVM model was more effective and more accurate than either single model. The paper also explores its application in database system development, which can help managers acquire and handle data quickly and effectively.
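One common way to combine the two models, illustrated in the sketch below under the assumption that the hmmlearn package is available, is to train one HMM per class and feed the resulting per-class log-likelihoods to an SVM as fixed-length features; the synthetic sequences and all parameter choices are illustrative, not the paper's setup.

```python
# Minimal HMM/SVM hybrid sketch: per-class HMMs turn variable-length sequences
# into fixed-length log-likelihood features, which an SVM then classifies.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = [0, 1]
# Synthetic training data: 20 sequences of length 50 (1-D observations) per class.
seqs = {c: [rng.normal(loc=c, size=(50, 1)) for _ in range(20)] for c in classes}

# One HMM per class, trained on that class's concatenated sequences.
hmms = {}
for c in classes:
    X = np.vstack(seqs[c])
    lengths = [len(s) for s in seqs[c]]
    hmms[c] = GaussianHMM(n_components=3, n_iter=20, random_state=0).fit(X, lengths)

def loglik_features(seq):
    """Fixed-length feature vector: log-likelihood under each class HMM."""
    return [hmms[c].score(seq) for c in classes]

X_feat = [loglik_features(s) for c in classes for s in seqs[c]]
y = [c for c in classes for _ in seqs[c]]

svm = SVC(kernel="rbf").fit(X_feat, y)
print(svm.predict([loglik_features(rng.normal(loc=1, size=(50, 1)))]))
```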


2020 ◽  
Vol 10 (2) ◽  
pp. 21 ◽  
Author(s):  
Gopi Battineni ◽  
Getu Gamo Sagaro ◽  
Nalini Chinatalapudi ◽  
Francesco Amenta

This paper reviews applications of machine learning (ML) predictive models in the diagnosis of chronic diseases. Chronic diseases (CDs) are responsible for a major portion of global health costs, and patients who suffer from these diseases need lifelong treatment. Nowadays, predictive models are frequently applied in the diagnosis and forecasting of these diseases. In this study, we reviewed the state-of-the-art approaches that encompass ML models in the primary diagnosis of CDs. This analysis covers 453 papers published between 2015 and 2019; the document search was conducted in the PubMed (Medline) and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. Ultimately, 22 studies were selected to present all modeling methods in a precise way that explains CD diagnosis and the usage of models for individual pathologies, with associated strengths and limitations. Our outcomes suggest that there is no standard method to determine the best approach in real-time clinical practice, since each method has its advantages and disadvantages. Among the methods considered, support vector machines (SVM), logistic regression (LR) and clustering were the most commonly used. These models are highly applicable in the classification and diagnosis of CDs and are expected to become more important in medical practice in the near future.
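For readers unfamiliar with how the most commonly reported models are typically benchmarked, the short sketch below (not drawn from the reviewed studies) compares SVM and logistic regression by cross-validation on a public clinical-style dataset; the dataset and settings are illustrative assumptions.

```python
# Minimal sketch: cross-validated comparison of SVM and logistic regression
# on a public clinical-style dataset (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

models = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

# 5-fold cross-validated accuracy for each model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```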


Author(s):  
Hui Liu ◽  
Gang Hao ◽  
Bin Xing

Support vector machine (SVM) is one of the effective classifiers in the field of network intrusion detection; however, some important information related to classification might be lost in the preprocessing. In this paper, we propose a granular classifier based on an entropy clustering method and support vector machines to overcome this limitation. The overall design of the classifier is realized with the aid of if-then rules that consist of a premise part and a conclusion part. The premise part, realized by the entropy clustering method, is used here to address the problem of a possible curse of dimensionality, while the conclusion part, realized by support vector machines, is utilized to build local models. In contrast to the conventional SVM, the proposed entropy clustering-based granular classifier (ECGC) can be regarded as an entropy-based support function machine. Moreover, an opposition-based genetic algorithm is proposed to optimize the design parameters of the granular classifier. Experimental results show the effectiveness of the ECGC when compared with some classical models reported in the literature.
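The granular structure can be sketched as below, with the caveat that this is not the ECGC implementation: KMeans stands in for the entropy clustering of the premise part, a local SVM per cluster plays the role of the conclusion part, and the opposition-based genetic algorithm for parameter tuning is omitted.

```python
# Minimal granular-classifier sketch: cluster the input space, then fit one
# local SVM per cluster (KMeans stands in for entropy clustering).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Premise part: partition the feature space into granules.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Conclusion part: one local SVM per granule.
local_svms = {}
for k in range(km.n_clusters):
    mask = km.labels_ == k
    # Guard against single-class granules, on which an SVM cannot be fitted.
    if len(np.unique(y[mask])) > 1:
        local_svms[k] = SVC(kernel="rbf").fit(X[mask], y[mask])

def predict(X_new):
    """Route each sample to its granule and apply that granule's local SVM."""
    clusters = km.predict(X_new)
    out = np.empty(len(X_new), dtype=int)
    for i, (x, k) in enumerate(zip(X_new, clusters)):
        if k in local_svms:
            out[i] = local_svms[k].predict(x.reshape(1, -1))[0]
        else:
            # Fall back to the majority class of the granule.
            out[i] = np.bincount(y[km.labels_ == k]).argmax()
    return out

print(predict(X[:5]), y[:5])
```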


2021 ◽  
Vol 25 ◽  
Author(s):  
Nathalie Hernández ◽  
Nicolas Caradot ◽  
Hauke Sonnenberg ◽  
Pascale Rouault ◽  
Andrés Torres

Objective: This paper focused on: (i) developing a deterioration model based on support vector machines (SVM) in their regression approach, to separate the prediction of the structural condition of sewer pipes from a classification by grades and to predict the scores obtained from failures found in CCTV inspections; and (ii) comparing the prediction results of the proposed model with those obtained by a deterioration model based on SVM classification tasks, to explore the advantages and disadvantages of their predictions from different perspectives. Materials and methods: The sewer network of Bogota was the case study for this work, in which a dataset consisting of the characteristics of 5031 pipes inspected by CCTV (obtained from GIS) was considered, as well as information on external variables (e.g., age, sewerage and road type). Probability density functions (PDF) were used to convert the scores given by failures found in CCTV inspections into structural grades. In addition, three techniques were used to evaluate the predictions from different perspectives: positive likelihood rate (PLR), performance curve and deviation analysis. Results: It was found that: (i) the SVM-based deterioration model used in its regression approach is suitable for predicting critical structural conditions of uninspected sewer pipes, because this model showed a PLR value of around 6.8 (the highest value among the predictions of all structural conditions for both models) and 74% successful predictions for the first 100 pipes with the highest probability of being in critical condition; and (ii) the SVM-based deterioration model used in its classification approach is suitable for predicting other structural conditions, because this model showed homogeneous PLR values for the prediction of all structural conditions (PLR values between 1.67 and 3.88) and its deviation analysis results for all structural conditions were lower than those for the SVM-based model in its regression approach.
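The contrast between the two approaches can be sketched as follows on synthetic data (not the study's model, features or grades): SVR predicts a continuous condition score that is then binned into grades, SVC predicts the grade directly, and a PLR (sensitivity over one minus specificity) is computed for the critical grade.

```python
# Minimal sketch: SVM regression vs. classification for condition grades,
# evaluated with a positive likelihood ratio for the critical grade.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # stand-in pipe attributes (age, etc.)
score = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=1000)
bins = np.quantile(score, [0.4, 0.7, 0.85])     # illustrative grade thresholds
grades = np.digitize(score, bins)               # grades 0..3, 3 = critical

X_tr, X_te, s_tr, s_te, g_tr, g_te = train_test_split(
    X, score, grades, test_size=0.3, random_state=0)

# Regression approach: predict the continuous score, then bin it into grades.
pred_reg = np.digitize(SVR().fit(X_tr, s_tr).predict(X_te), bins)
# Classification approach: predict the grade directly.
pred_clf = SVC().fit(X_tr, g_tr).predict(X_te)

def plr(y_true, y_pred, critical=3):
    """Sensitivity / (1 - specificity) for the critical grade."""
    pos, pred_pos = y_true == critical, y_pred == critical
    sens = (pos & pred_pos).sum() / pos.sum()
    spec = (~pos & ~pred_pos).sum() / (~pos).sum()
    return sens / max(1.0 - spec, 1e-9)         # guard: toy data may yield no false positives

print("PLR, regression approach:", round(plr(g_te, pred_reg), 2))
print("PLR, classification approach:", round(plr(g_te, pred_clf), 2))
```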


2021 ◽  
Author(s):  
Jose Llanes-Jurado ◽  
Lucía Amalia Carrasco-Ribelles ◽  
Mariano Alcañiz ◽  
Javier Marín-Morales

Scholars are increasingly using electrodermal activity (EDA) to assess cognitive-emotional states in laboratory environments, while recent applications have recorded EDA in uncontrolled settings, such as daily-life and virtual reality (VR) contexts, in which users can freely walk and move their hands. However, these records can be affected by major artifacts stemming from movements that can obscure valuable information. Previous work has analyzed signal correction methods to improve the quality of the signal or proposed artifact recognition models based on time windows. Despite these efforts, the correction of EDA signals in uncontrolled environments is still limited, and no existing research has used a signal manually corrected by an expert as a benchmark. This work investigates different machine learning and deep learning architectures, including support vector machines, recurrent neural networks (RNNs), and convolutional neural networks, for the automatic artifact recognition of EDA signals. The data from 44 subjects during an immersive VR task were collected and cleaned by two experts as ground truth. The best model, which used an RNN fed with the raw signal, recognized 72% of the artifacts and had an accuracy of 87%. An automatic correction was performed on the detected artifacts through a combination of linear interpolation and a high degree polynomial. The evaluation of this correction showed that the automatically and manually corrected signals did not present differences in terms of phasic components, while both showed differences to the raw signal. This work provides a tool to automatically correct artifacts of EDA signals which can be used in uncontrolled conditions, allowing for the development of intelligent systems based on EDA monitoring without human intervention.
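The correction step described above can be sketched as follows; this is not the paper's pipeline, and the synthetic signal, artifact bounds, context length and polynomial degree are all assumptions made for the example.

```python
# Minimal sketch: replace a detected artifact segment with a blend of linear
# interpolation and a high-degree polynomial fitted on surrounding clean samples.
import numpy as np

fs = 32                                          # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
eda = 5 + 0.5 * np.sin(0.1 * t)                  # synthetic tonic EDA (microsiemens)
a, b = 800, 900                                  # detected artifact segment (sample indices)
eda[a:b] += np.random.default_rng(0).normal(3, 1, b - a)   # injected movement artifact

def correct_segment(signal, start, end, context=64, degree=5):
    """Replace signal[start:end] with a blend of linear and polynomial estimates."""
    corrected = signal.copy()
    # Linear interpolation between the samples bordering the artifact.
    linear = np.interp(np.arange(start, end), [start - 1, end],
                       [signal[start - 1], signal[end]])
    # High-degree polynomial fitted on clean samples around the segment
    # (x values centred on the segment start for numerical conditioning).
    idx = np.r_[start - context:start, end:end + context]
    coeffs = np.polyfit(idx - start, signal[idx], degree)
    poly = np.polyval(coeffs, np.arange(end - start))
    # Simple equal-weight blend of the two estimates.
    corrected[start:end] = 0.5 * linear + 0.5 * poly
    return corrected

eda_clean = correct_segment(eda, a, b)
print(eda[a:b].mean(), eda_clean[a:b].mean())
```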


2016 ◽  
Vol 40 (4) ◽  
pp. 541-549
Author(s):  
Zengshou Dong ◽  
Zhaojing Ren ◽  
You Dong

Mechanical fault vibration signals are non-stationary, which causes system instability, and traditional methods have difficulty accurately extracting fault information from them. This paper therefore proposes a fault identification method based on local mean decomposition and least squares support vector machines. The article introduces waveform matching to handle the end effects of the signal and uses linear interpolation to obtain the local mean and envelope functions, from which the product function (PF) vectors are obtained through local mean decomposition. The energy entropies of the PF vectors are taken as identification input vectors. These vectors are fed into BP neural networks, support vector machines and least squares support vector machines, respectively, to identify faults. Experimental results show that the least squares support vector machine achieves the highest classification accuracy.
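The energy-entropy feature can be sketched as below; since no LMD routine is assumed here, placeholder sinusoidal components stand in for the PFs, and the resulting entropy values would then be fed to the classifiers mentioned above.

```python
# Minimal sketch: energy entropy of the PF components of a vibration signal.
import numpy as np

def energy_entropy(pfs):
    """Shannon entropy of the normalized energies of the PF components."""
    energies = np.array([np.sum(pf ** 2) for pf in pfs])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Placeholder "PFs": three sinusoidal components of a synthetic vibration signal;
# a real pipeline would obtain them from local mean decomposition.
t = np.linspace(0, 1, 2048)
pfs = [a * np.sin(2 * np.pi * f * t) for f, a in [(50, 1.0), (120, 0.6), (300, 0.2)]]
print(energy_entropy(pfs))
```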


Tecnura ◽  
2019 ◽  
Vol 23 (59) ◽  
pp. 13-26 ◽  
Author(s):  
José Antonio Valero Medina ◽  
Beatriz Elena Alzate Atehortúa

Context: Nowadays, images of the Earth's surface and the algorithms for their classification are widely available. In particular, these algorithms are promising for differentiating cotton crop stages, but it is necessary to establish the capabilities of the different algorithms in order to identify their advantages and disadvantages. Method: This paper describes the assessment process in which Support Vector Machines (SVM) and the random forest technique (decision trees) are compared with maximum likelihood estimation when differentiating the stages of cotton crops. A RapidEye satellite image of a geographic area in the municipality of San Pelayo, Cordoba (Colombia), is used for the study. Using a set of sampling polygons, a random sample of 6000 pixels was taken (2000 for training and 4000 for validating the classifications). Confusion matrices and R (data processing and analysis software) were used during the validation process. Results: The maximum likelihood estimation presented a correct classification percentage of 68.95%, SVM correctly classified 81.325% of the cases, and the decision trees correctly classified 78.925%. The confidence test for the classifications showed non-overlapping intervals, with SVM obtaining the highest values. Conclusions: It was possible to confirm the superiority of the technique based on support vector machines for the proposed verification zones. However, this technique requires a number of classes that comprehensively represent the variations of the image (in order to guarantee a minimum number of support vectors) to avoid confusion in the classification of non-sampled areas. This was less evident in the other two classification techniques analysed.
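A comparison of this kind can be sketched as follows, with QuadraticDiscriminantAnalysis standing in for Gaussian maximum likelihood classification and synthetic pixel values standing in for the RapidEye bands; only the 2000/4000 training/validation split is taken from the abstract.

```python
# Minimal sketch: compare maximum likelihood (QDA stand-in), SVM and random
# forest classifiers on per-pixel band values, with confusion matrices.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_bands = 4, 5                      # e.g. crop stages, RapidEye bands
centers = rng.normal(size=(n_classes, n_bands)) * 3
X = np.vstack([rng.normal(c, 1.5, size=(1500, n_bands)) for c in centers])
y = np.repeat(np.arange(n_classes), 1500)

idx = rng.permutation(len(y))
train, test = idx[:2000], idx[2000:6000]       # 2000 training / 4000 validation pixels

models = {
    "Maximum likelihood (QDA)": QuadraticDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X[train], y[train]).predict(X[test])
    print(name, f"{100 * accuracy_score(y[test], pred):.2f}%")
    print(confusion_matrix(y[test], pred))
```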


2020 ◽  
Author(s):  
Lewis Mervin ◽  
Avid M. Afzal ◽  
Ola Engkvist ◽  
Andreas Bender

In the context of bioactivity prediction, the question of how to calibrate a score produced by a machine learning method into a reliable probability of binding to a protein target is not yet satisfactorily addressed. In this study, we compared the performance of three such methods, namely Platt Scaling, Isotonic Regression and Venn-ABERS, in calibrating prediction scores for ligand-target prediction with the Naïve Bayes, Support Vector Machines and Random Forest algorithms, using bioactivity data available at AstraZeneca (40 million data points (compound-target pairs) across 2112 targets). Performance was assessed using Stratified Shuffle Split (SSS) and Leave 20% of Scaffolds Out (L20SO) validation.
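Two of the three calibration methods (Platt scaling and isotonic regression) can be sketched with scikit-learn's CalibratedClassifierCV as below; the Venn-ABERS method and the AstraZeneca data are not reproduced, and the synthetic task is purely illustrative.

```python
# Minimal sketch: calibrating SVM scores into probabilities via Platt scaling
# ("sigmoid") and isotonic regression, evaluated with the Brier score.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=4000, n_features=50, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for method in ("sigmoid", "isotonic"):         # Platt scaling / isotonic regression
    calibrated = CalibratedClassifierCV(LinearSVC(max_iter=5000), method=method, cv=3)
    calibrated.fit(X_tr, y_tr)
    prob = calibrated.predict_proba(X_te)[:, 1]
    print(method, "Brier score:", round(brier_score_loss(y_te, prob), 4))
```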

