Scattering Model-Based Frequency-Hopping RCS Reconstruction Using SPICE Methods

2021, Vol 13 (18), pp. 3689
Author(s): Yingjun Li, Wenpeng Zhang, Biao Tian, Wenhao Lin, Yongxiang Liu

RCS reconstruction is an important way to reduce measurement time in anechoic chambers and to expand the original radar data, thereby alleviating data scarcity and high measurement costs. Greedy pursuit, convex relaxation, and sparse Bayesian learning-based sparse recovery methods can be used for parameter estimation. However, these methods either suffer from limited solution accuracy, require careful selection of auxiliary parameters, or need the noise probability distribution to be specified in advance. To solve these problems, a non-parametric Sparse Iterative Covariance Estimation (SPICE) algorithm with a global convergence property, based on the sparse Geometrical Theory of Diffraction (GTD) model (GTD–SPICE), is employed for the first time for RCS reconstruction. Furthermore, an improved coarse-to-fine two-stage SPICE method (DE–GTD–SPICE), based on the Damped Exponential (DE) model and the GTD model (DE–GTD), is proposed to reduce the computational cost. Experimental results show that both the GTD–SPICE method and the DE–GTD–SPICE method are reliable and effective for RCS reconstruction. In particular, the DE–GTD–SPICE method requires a shorter computational time.
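For context, both GTD–SPICE and DE–GTD–SPICE rest on a sparse dictionary built from the GTD scattering-centre model. The minimal Python sketch below constructs such a dictionary over hypothesised range and frequency-dependence parameters; the carrier frequency, bandwidth, and grids are illustrative assumptions rather than the paper's measurement settings, and the SPICE iteration itself is not reproduced.

```python
import numpy as np

# Illustrative frequency-hopping measurement grid; all values are assumptions.
c = 3e8                                          # speed of light (m/s)
fc = 10e9                                        # assumed carrier frequency (Hz)
freqs = fc + np.arange(64) * 2e6                 # assumed stepped-frequency samples
ranges = np.linspace(-2.0, 2.0, 201)             # candidate scattering-centre ranges (m)
alphas = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # GTD frequency-dependence factors

def gtd_dictionary(freqs, ranges, alphas, fc):
    """One dictionary column per (range, alpha) hypothesis of the GTD model
    A * (1j*f/fc)**alpha * exp(-1j*4*pi*f*r/c)."""
    cols = []
    for r in ranges:
        phase = np.exp(-1j * 4 * np.pi * freqs * r / c)
        for a in alphas:
            cols.append((1j * freqs / fc) ** a * phase)
    return np.stack(cols, axis=1)

A = gtd_dictionary(freqs, ranges, alphas, fc)    # shape (64, 201 * 5)
```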

2020
Author(s): Hossein Foroozand, Steven V. Weijs

Machine learning is a fast-growing branch of data-driven modelling whose main objective is to use computational methods to predict outcomes more accurately without being explicitly programmed. In this field, one way to improve model predictions is to use a large collection of models (called an ensemble) instead of a single one. Each model is trained on slightly different samples of the original data, and their predictions are averaged. This is called bootstrap aggregating, or bagging, and is widely applied. A recurring question in previous work is how to choose the ensemble size of training data sets for tuning the weights in machine learning. The computational cost of ensemble-based methods scales with the size of the ensemble, but excessively reducing the ensemble size comes at the cost of reduced predictive performance. The choice of ensemble size has often been determined by the size of the input data and the available computational power, which can become a limiting factor for larger datasets and more complex models. In this research, our hypothesis is that if an ensemble of artificial neural network (ANN) models, or any other machine learning technique, uses only the most informative ensemble members for training rather than all bootstrapped ensemble members, it could substantially reduce computational time without negatively affecting simulation performance.
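As a point of reference for the bagging setup the abstract describes, here is a minimal scikit-learn sketch in which each ensemble member is a small neural network trained on a bootstrap resample and the members' predictions are averaged; the data are synthetic, and `n_estimators` stands in for the ensemble size whose choice the authors question.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))                          # synthetic inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)  # synthetic target

# Bootstrap aggregating: each ANN sees a different bootstrap resample and the
# ensemble prediction is the average of the members' predictions.
# (The keyword is `estimator` in scikit-learn >= 1.2; older releases use
# `base_estimator`.)
ensemble = BaggingRegressor(
    estimator=MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000),
    n_estimators=20,          # the ensemble size discussed in the abstract
    bootstrap=True,
    random_state=0,
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```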


Author(s): Tu Huynh-Kha, Thuong Le-Tien, Synh Ha, Khoa Huynh-Van

This research develops a new method to detect image forgery by combining the wavelet transform and modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors. The more pixels are considered, the more informative the extracted features. Lexicographical sorting and correlation-coefficient computation on the feature vectors are then used to find similar blocks. The purpose of applying the DWT to reduce the image dimension before using Zernike moments with updated coefficients is to improve the computational time and increase detection accuracy. Copied or duplicated regions are detected as traces of copy-move forgery based on a correlation-coefficient threshold and confirmed by a Euclidean-distance constraint. Comparisons between the proposed method and related ones demonstrate the feasibility and efficiency of the proposed algorithm.
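A rough Python sketch of the first stages of this pipeline is given below (one-level DWT, overlapping blocks on the LL sub-band, per-block features, lexicographic sorting). The flattened-block feature is only a stand-in for the paper's modified Zernike moments, and the correlation/Euclidean-distance verification steps are omitted.

```python
import numpy as np
import pywt

def block_features(gray, block=16, step=4):
    """One-level DWT, overlapping blocks on the LL sub-band, a feature vector
    per block, and a lexicographic sort of the feature rows. The raw flattened
    block is a placeholder for the paper's modified Zernike moments."""
    LL, (LH, HL, HH) = pywt.dwt2(gray.astype(float), 'haar')   # LL is half-size
    feats, coords = [], []
    for i in range(0, LL.shape[0] - block + 1, step):
        for j in range(0, LL.shape[1] - block + 1, step):
            patch = LL[i:i + block, j:j + block]
            feats.append(patch.ravel())                        # placeholder feature
            coords.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])                          # sort rows lexicographically
    return feats[order], [coords[k] for k in order]
```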


2021, Vol 11 (1)
Author(s): Israel F. Araujo, Daniel K. Park, Francesco Petruccione, Adenilton J. da Silva

Abstract: Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms to create arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage using a quantum circuit with polylogarithmic depth and entangled information in ancillary qubits. Results show that we can efficiently load data in quantum devices using a divide-and-conquer strategy to exchange computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect that this new loading strategy allows the quantum speedup of tasks that require loading a significant volume of information into quantum devices.
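For contrast with the O(N)-depth baseline mentioned in the abstract, the short Qiskit sketch below loads an 8-dimensional vector with the generic state-preparation routine and reports the transpiled depth; the paper's divide-and-conquer circuit with ancillary qubits is not reproduced here.

```python
import numpy as np
from qiskit import QuantumCircuit, transpile

N = 8
vec = np.random.default_rng(1).normal(size=N)
vec = vec / np.linalg.norm(vec)              # amplitudes must be normalised

# Generic amplitude encoding: log2(N) qubits, but a circuit whose depth grows
# roughly linearly with N once decomposed into basis gates.
qc = QuantumCircuit(int(np.log2(N)))
qc.initialize(vec, qc.qubits)

compiled = transpile(qc, basis_gates=['u', 'cx'])
print(compiled.depth())
```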


Symmetry, 2021, Vol 13 (4), pp. 645
Author(s): Muhammad Farooq, Sehrish Sarfraz, Christophe Chesneau, Mahmood Ul Hassan, Muhammad Ali Raza, ...

Expectiles have gained considerable attention in recent years due to wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. First, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to the minimum test error, whereas the Euclidean, Canberra, and Average of (L1, L∞) measures lead to a low computational cost. Second, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance to ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
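To make the ex-kNN idea concrete, here is a hypothetical re-implementation sketch: the tau-expectile of the k nearest training targets is returned as the prediction, with the expectile computed by the standard asymmetric-least-squares fixed-point iteration. The function names, the choice k=15, and the Canberra metric are illustrative assumptions, not the package's actual interface.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def expectile(y, tau=0.8, n_iter=100):
    """Asymmetric-least-squares tau-expectile of a sample via fixed-point iteration."""
    m = float(np.mean(y))
    for _ in range(n_iter):
        w = np.where(y > m, tau, 1.0 - tau)      # asymmetric weights
        m = float(np.sum(w * y) / np.sum(w))
    return m

def ex_knn_predict(X_train, y_train, X_query, k=15, tau=0.8, metric='canberra'):
    """Hypothetical ex-kNN sketch: estimate the conditional tau-expectile from
    the targets of the k nearest training points."""
    nn = NearestNeighbors(n_neighbors=k, metric=metric).fit(X_train)
    _, idx = nn.kneighbors(X_query)
    return np.array([expectile(y_train[i], tau) for i in idx])
```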


2021, Vol 11 (2), pp. 813
Author(s): Shuai Teng, Zongchao Liu, Gongfa Chen, Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 with 11 feature extractors, which provides a basis for realizing fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator for assessing their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series of networks, have significant potential in crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models of the YOLO_v2 network lead to different detection results: the AP value is 0.89, 0, and 0 for ‘resnet18’, ‘alexnet’, and ‘vgg16’, respectively, while ‘googlenet’ (AP = 0.84) and ‘mobilenetv2’ (AP = 0.87) demonstrate comparable AP values. In terms of computing speed, ‘alexnet’ takes the least computational time, with ‘squeezenet’ and ‘resnet18’ ranked second and third, respectively; therefore, ‘resnet18’ is the best feature extractor model in terms of precision and computational cost. Additionally, a parametric study (of the influence of training epoch, feature extraction layer, and testing image size on the detection results) shows that these parameters indeed have an impact on the detection results. It is demonstrated that excellent crack detection results can be achieved by the YOLO_v2 detector when an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size are chosen.
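The backbone-swapping idea can be illustrated with a short PyTorch sketch (the paper's implementation uses the YOLO v2 framework with pretrained CNNs; this analogue is only a schematic, the 512-channel head assumes a ResNet-18 trunk, and no training is shown).

```python
import torch
import torchvision

def build_detector(backbone_name='resnet18', num_anchors=5, num_classes=1):
    """Schematic YOLO v2-style detector: a CNN trunk followed by a 1x1 conv
    head predicting (x, y, w, h, objectness) + class scores per anchor."""
    # `weights=None` requires torchvision >= 0.13 (older releases use `pretrained=False`).
    backbone = torchvision.models.__dict__[backbone_name](weights=None)
    trunk = torch.nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool + fc
    # 512 output channels holds for a ResNet-18 trunk; other backbones need a
    # matching channel count.
    head = torch.nn.Conv2d(512, num_anchors * (5 + num_classes), kernel_size=1)
    return torch.nn.Sequential(trunk, head)

model = build_detector('resnet18')
out = model(torch.randn(1, 3, 416, 416))
print(out.shape)                                   # torch.Size([1, 30, 13, 13])
```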


2018, Vol 2018, pp. 1-19
Author(s): Ziyan Luo, Xiaoyu Li, Naihua Xiu

In this paper, we propose a sparse optimization approach to maximize the utilization of regenerative energy produced by braking trains for energy-efficient timetabling in metro railway systems. By combining the cardinality function and the squared Euclidean norm in the objective function, the resulting sparse optimization model can appropriately characterize the utilization of regenerative energy. A two-stage alternating direction method of multipliers is designed to efficiently solve the convex relaxation counterpart of the original NP-hard problem and then produce an energy-efficient train timetable. The resulting approach is applied to the Beijing Metro Yizhuang Line with different service instances as a case study. A comparison with the existing two-step linear programming approach is also conducted, which illustrates the effectiveness of our proposed sparse optimization model in terms of the energy saving rate and the efficiency of our numerical optimization algorithm in terms of computational time.
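As a schematic illustration of the convex-relaxation-plus-ADMM machinery the abstract refers to (the cardinality term relaxed to an l1 norm alongside a squared Euclidean term), the generic ADMM sketch below solves an l1-regularised least-squares problem; the paper's two-stage algorithm and the metro timetabling constraints are not reproduced.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Generic ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1, the l1 convex
    relaxation of a cardinality-penalised least-squares objective."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))          # factor once, reuse
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                      # dual update
    return z

# Small demonstration on a synthetic sparse recovery problem.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.nonzero(np.abs(admm_lasso(A, b)) > 0.1)[0])
```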


2019, Vol 99 (2), pp. 1105-1130
Author(s): Kun Yang, Vladimir Paramygin, Y. Peter Sheng

Abstract: The joint probability method (JPM) is the traditional way to determine the base flood elevation due to storm surge, and it usually requires simulating the storm surge response of tens of thousands of synthetic storms. The simulated storm surge is combined with probabilistic storm rates to create flood maps with various return periods. However, map production requires enormous computational cost if state-of-the-art hydrodynamic models with high-resolution numerical grids are used; hence, optimal sampling (JPM-OS) with a small number (~100–200) of optimal (representative) storms is preferred. This paper presents a significantly improved JPM-OS, in which a small number of optimal storms are objectively selected and the simulated storm surge responses of tens of thousands of storms are accurately interpolated from those of the optimal storms using a highly efficient kriging surrogate model. This study focuses on Southwest Florida and considers ~150 optimal storms that are selected based on simulations using either the low-fidelity (low resolution, simple physics) SLOSH model or the high-fidelity (high resolution, comprehensive physics) CH3D model. Surge responses to the optimal storms are simulated using both SLOSH and CH3D, and the flood elevations are calculated using JPM-OS with highly efficient kriging interpolations. For verification, the probabilistic inundation maps are compared to those obtained by the traditional JPM and by variations of JPM-OS that employ different interpolation schemes, and the computed probabilistic water levels are compared to those calculated by historical storm methods. The inundation maps obtained with the JPM-OS differ by less than 10% from those obtained with the JPM for 20,625 storms, at only 4% of the computational time.
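The kriging-surrogate step can be sketched as follows, with scikit-learn's Gaussian-process regressor standing in for the kriging interpolator: peak surge at one location is interpolated over a storm-parameter space from ~150 simulated optimal storms and then evaluated for the full synthetic storm set. The storm parameterisation and the response function are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# Assumed storm parameters: central pressure deficit, radius to max winds, forward speed.
X_opt = rng.uniform([20, 20, 2], [100, 80, 12], size=(150, 3))    # 150 "optimal" storms
surge = (0.04 * X_opt[:, 0] + 0.01 * X_opt[:, 1] - 0.05 * X_opt[:, 2]
         + 0.1 * rng.normal(size=150))                            # fake simulated surge (m)

# Kriging-like surrogate trained on the optimal-storm simulations.
krig = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([10.0, 10.0, 2.0]),
                                normalize_y=True)
krig.fit(X_opt, surge)

# Cheap interpolation replaces tens of thousands of hydrodynamic simulations.
X_all = rng.uniform([20, 20, 2], [100, 80, 12], size=(20625, 3))
surge_all = krig.predict(X_all)
print(surge_all.shape)
```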


Author(s): K H Groves, P Bonello, P M Hai

Essential to effective aeroengine design is the rapid simulation of the dynamic performance of a variety of engine and non-linear squeeze-film damper (SFD) bearing configurations. Using recently introduced non-linear solvers combined with non-parametric identification of high-accuracy bearing models, it is possible to run full-engine rotordynamic simulations, in both the time and frequency domains, at a fraction of the previous computational cost. Using a novel reduced form of Chebyshev polynomial fits, efficient and accurate identification of the numerical solution to the two-dimensional Reynolds equation (RE) is achieved. The engine analysed is a twin-spool, five-SFD engine model provided by a leading manufacturer. Whole-engine simulations obtained using Chebyshev-identified bearing models of the finite difference (FD) solution to the RE are compared with those obtained from the original FD bearing models. For both time and frequency domain analyses, the Chebyshev-identified bearing models are shown to accurately and consistently mimic the simulations obtained from the FD models in under 10 per cent of the computational time. An illustrative parameter study is performed to demonstrate the unparalleled capabilities of the combination of recently developed and novel techniques utilised in this paper.
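The identification idea, fitting a cheap Chebyshev surrogate to an expensive bearing-force evaluation, can be illustrated in one dimension with NumPy's Chebyshev tools; the placeholder force law and the polynomial degree are assumptions, and the paper's reduced two-dimensional Chebyshev form for the Reynolds-equation solution is not reproduced.

```python
import numpy as np

def expensive_force(eccentricity):
    """Placeholder for the finite-difference Reynolds-equation solution."""
    return eccentricity / (1.0 - eccentricity**2) ** 1.5   # short-bearing-like trend

# Sample the expensive model at a handful of operating points, then identify a
# Chebyshev surrogate that can be evaluated at negligible cost inside the solver.
ecc = np.linspace(0.0, 0.9, 40)
cheb = np.polynomial.Chebyshev.fit(ecc, expensive_force(ecc), deg=9, domain=[0.0, 0.9])

test = np.linspace(0.0, 0.9, 200)
err = np.max(np.abs(cheb(test) - expensive_force(test)))
print(f"max identification error: {err:.2e}")
```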


2011, Vol 11 (04), pp. 571-587
Author(s): William Robson Schwartz, Helio Pedrini

Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the fact that natural scenes exhibit self-similarity to remove redundancy and obtain high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, due to the need to search for regions with high similarity in the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. The use of robust features provides more discriminative and representative information for regions of the image. When the regions are better represented, the search for similar parts of the image can be restricted to the most likely matching candidates, which leads to a reduction in computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while keeping high compression rates and reconstruction quality.
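The candidate-pruning idea can be sketched as follows: each image block gets a small descriptor, and a k-d tree query returns only the most similar blocks to compare, instead of an exhaustive pairwise search. The simple mean/variance/gradient descriptor below is a stand-in for the robust descriptors used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def block_descriptor(block):
    """Tiny stand-in descriptor: intensity statistics plus mean gradient magnitude."""
    gy, gx = np.gradient(block.astype(float))
    return np.array([block.mean(), block.std(), np.abs(gx).mean(), np.abs(gy).mean()])

def candidate_matches(blocks, k=8):
    """For each block, return the indices of its k most similar blocks, so the
    expensive similarity test is run only on likely candidates."""
    desc = np.array([block_descriptor(b) for b in blocks])
    tree = cKDTree(desc)
    _, idx = tree.query(desc, k=k + 1)       # the first hit is the block itself
    return idx[:, 1:]
```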


2021
Author(s): Carlo Cristiano Stabile, Marco Barbiero, Giorgio Fighera, Laura Dovera

Abstract: Optimizing well locations for a green field is critical to mitigating development risks. Performing such workflows with reservoir simulations is very challenging due to the huge computational cost. Proxy models can instead provide accurate estimates at a fraction of the computing time. This study presents an application of new-generation functional proxies to optimize the well locations in a real oil field with respect to the actualized oil production over all the different geological realizations. The proxies are built with Universal Trace Kriging and are functional in time, allowing oil flows to be actualized over the asset lifetime. The proxies are trained on reservoir simulations using randomly sampled well locations. Two proxies are created, for a pessimistic model (P10) and a mid-case model (P50), to capture the geological uncertainties. The optimization step uses the Non-dominated Sorting Genetic Algorithm, with the discounted oil productions of the two proxies as objective functions. An adaptive approach was employed: optimized points found from a first optimization were used to re-train the proxy models, and a second run of optimization was performed. The methodology was applied to a real oil reservoir to optimize the location of four vertical production wells and compared against reference locations. 111 geological realizations were available, in which one relevant uncertainty is the presence of possible compartments. The decision space, represented by the horizontal translation vectors for each well, was sampled using Plackett-Burman and Latin-Hypercube designs. A first application produced a proxy with poor predictive quality. Redrawing the areas to avoid overlaps and to confine the decision space of each well to one compartment improved the quality. This suggests that the proxy's predictive ability deteriorates in the presence of highly non-linear responses caused by sealing faults or by wells interchanging positions. We then followed a two-step adaptive approach: a first optimization was performed and the resulting Pareto front was validated with reservoir simulations; to further improve the proxy quality in this region of the decision space, the validated Pareto-front points were added to the initial dataset to re-train the proxy and rerun the optimization. The final well locations were validated on all 111 realizations with reservoir simulations and resulted in an overall increase of the discounted production of about 5% compared to the reference development strategy. The adaptive approach, combined with functional proxies, proved successful in improving the workflow by purposefully augmenting the training set with data points able to enhance the effectiveness of the optimization step. Each optimization run relies on about 1 million proxy evaluations, which required negligible computational time. The same workflow carried out with standard reservoir simulations would have been practically unfeasible.
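A schematic analogue of the proxy-driven loop (not the authors' Universal Trace Kriging or NSGA-II implementation) is sketched below: Gaussian-process proxies for the P10 and P50 discounted productions are trained on simulated samples of the well-offset decision vector, a large candidate set is screened at negligible cost, and a Pareto front over the two proxies is extracted for later validation with the simulator. All variable names, ranges, and response functions are synthetic assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 8))                  # 4 wells x (dx, dy) offsets
npv_p10 = np.cos(X @ rng.normal(size=8)) + 0.05 * rng.normal(size=200)  # fake responses
npv_p50 = np.cos(X @ rng.normal(size=8)) + 0.05 * rng.normal(size=200)

# Kriging-like proxies, one per geological scenario (P10 and P50).
proxy_p10 = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, npv_p10)
proxy_p50 = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, npv_p50)

# Cheap screening of many candidate well placements via the proxies.
cand = rng.uniform(-1, 1, size=(20_000, 8))
f = np.column_stack([proxy_p10.predict(cand), proxy_p50.predict(cand)])

# Pareto front for joint maximisation of both discounted productions.
dominated = np.array([np.any(np.all(f > fi, axis=1)) for fi in f])
pareto_candidates = cand[~dominated]                   # to be validated with simulations
print(pareto_candidates.shape)
```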

