synthetic test
Recently Published Documents

TOTAL DOCUMENTS: 172 (FIVE YEARS: 39)
H-INDEX: 13 (FIVE YEARS: 2)

2022 ◽  
Vol 12 (2) ◽  
pp. 824
Author(s):  
Kamran Javed ◽  
Nizam Ud Din ◽  
Ghulam Hussain ◽  
Tahir Farooq

Face photographs taken on a bright sunny day or under floodlight contain unwanted shadows of objects on the face. Most previous work deals with removing shadows from scene images and struggles with facial images. Faces have a complex semantic structure, which makes shadow removal challenging. The aim of this research is to remove the shadow of an object in facial images. We propose a novel generative adversarial network (GAN) based image-to-image translation approach for shadow removal in face images. The first stage of our model automatically produces a binary segmentation mask for the shadow region. The second stage, a GAN-based network, then removes the object shadow and synthesizes the affected region. The generator network of our GAN has two parallel encoders: one a standard convolution path and the other a partial convolution path. We find that this combination in the generator not only learns an integrated semantic structure but also resolves visual discrepancies under the shadow area. In addition to the GAN loss, we exploit a low-level L1 loss, a structural SSIM loss, and a perceptual loss from a pre-trained loss network for better texture and perceptual quality. Since there is no paired dataset for the shadow removal problem, we created a synthetic shadow dataset to train our network in a supervised manner. The proposed approach effectively removes shadows from real and synthetic test samples while retaining complex facial semantics. Experimental evaluations consistently show the advantages of the proposed method over several representative state-of-the-art approaches.
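The multi-term reconstruction objective described above (L1 plus SSIM plus a perceptual term) can be sketched without any deep-learning framework. The NumPy illustration below is a hedged stand-in: a simple image-gradient feature map replaces the pre-trained perceptual network, and the weights are arbitrary placeholders, not the paper's values.

```python
import numpy as np

def l1_loss(a, b):
    # Mean absolute error over all pixels.
    return float(np.mean(np.abs(a - b)))

def global_ssim(a, b, c1=0.01**2, c2=0.03**2):
    # Single-window SSIM over the whole image (images scaled to [0, 1]).
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2)))

def edge_features(img):
    # Stand-in "perceptual" features: horizontal/vertical finite differences.
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def composite_loss(pred, target, w_l1=1.0, w_ssim=1.0, w_perc=0.1):
    # Placeholder weights; the paper's loss balance is not reproduced here.
    gx_p, gy_p = edge_features(pred)
    gx_t, gy_t = edge_features(target)
    perc = float(np.mean((gx_p - gx_t)**2) + np.mean((gy_p - gy_t)**2))
    return (w_l1 * l1_loss(pred, target)
            + w_ssim * (1.0 - global_ssim(pred, target))
            + w_perc * perc)
```

For identical images every term vanishes, so the loss is zero; any mismatch in intensity, structure, or edges raises it.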


2022 ◽  
Vol 8 ◽  
Author(s):  
Shane D. McLean ◽  
Emil Alexander Juul Hansen ◽  
Paul Pop ◽  
Silviu S. Craciunas

Modern Advanced Driver-Assistance Systems (ADAS) combine critical real-time and non-critical best-effort tasks and messages onto an integrated multi-core multi-SoC hardware platform. The real-time safety-critical software tasks have complex interdependencies in the form of end-to-end latency chains featuring, e.g., sensing, processing/sensor fusion, and actuating. The underlying real-time operating systems running on top of the multi-core platform use static cyclic scheduling for the software tasks, while the communication backbone is either realized through PCIe or Time-Sensitive Networking (TSN). In this paper, we address the problem of configuring ADAS platforms for automotive applications, which means deciding the mapping of tasks to processing cores and the scheduling of tasks and messages. Time-critical messages are transmitted in a scheduled manner via the timed-gate mechanism described in IEEE 802.1Qbv according to the pre-computed Gate Control List (GCL) schedule. We study the computation of the assignment of tasks to the available platform CPUs/cores, the static schedule tables for the real-time tasks, as well as the GCLs, such that task and message deadlines, as well as end-to-end task chain latencies, are satisfied. This is an intractable combinatorial optimization problem. As the ADAS platforms and applications become increasingly complex, such problems cannot be optimally solved and require problem-specific heuristics or metaheuristics to determine good quality feasible solutions in a reasonable time. We propose two metaheuristic solutions, a Genetic Algorithm (GA) and one based on Simulated Annealing (SA), both creating static schedule tables for tasks by simulating Earliest Deadline First (EDF) dispatching with different task deadlines and offsets. Furthermore, we use a List Scheduling-based heuristic to create the GCLs in platforms featuring a TSN backbone. 
We evaluate the proposed solutions with real-world and synthetic test cases scaled to fit the future requirements of ADAS systems. The results show that our heuristic strategies find correct solutions that meet the complex timing and dependency constraints at a higher rate than related-work approaches: the jitter constraints are satisfied in over 6 times more cases, and the task chain constraints are satisfied in 41% more cases on average. Our method also scales well as ADAS platforms grow.
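The EDF-simulation idea underlying both metaheuristics can be illustrated with a toy dispatcher: simulate preemptive EDF over one hyperperiod and record which task runs in each unit time slot, yielding a static schedule table, or a failure if a deadline is missed. The single-core, unit-slot, implicit-deadline model below is an illustrative simplification, not the paper's platform model.

```python
import heapq

def edf_schedule(tasks, hyperperiod):
    """Simulate preemptive EDF for periodic tasks (name, wcet, period, offset)
    with implicit deadlines; return a per-slot schedule table, or None if a
    deadline is missed."""
    table = []
    ready = []  # heap of [absolute_deadline, task_name, remaining_wcet]
    for t in range(hyperperiod):
        # Release the jobs whose period boundary falls at this slot.
        for name, wcet, period, offset in tasks:
            if t >= offset and (t - offset) % period == 0:
                heapq.heappush(ready, [t + period, name, wcet])
        # Every queued job still has work left; if the earliest deadline has
        # already arrived, that job can no longer finish in time.
        if ready and ready[0][0] <= t:
            return None
        if ready:
            job = ready[0]          # earliest-deadline job runs this slot
            job[2] -= 1
            table.append(job[1])
            if job[2] == 0:
                heapq.heappop(ready)
        else:
            table.append(None)      # idle slot
    return table
```

The resulting table is exactly the kind of static dispatch table the metaheuristics search over by varying artificial deadlines and offsets.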


2021 ◽  
Vol 2 (4) ◽  
pp. 891-910
Author(s):  
Károly Jármai ◽  
Csaba Barcsák ◽  
Gábor Zoltán Marcsák

In engineering, metaheuristic algorithms have been used to solve complex optimization problems. This paper investigates and compares several such algorithms. On one hand, the study seeks to ascertain the advantages and disadvantages of newly presented heuristic techniques; the efficiency of an algorithm is highly dependent on the nature of the problem. The ability to change the complexity of the problem and the knowledge of the global optimum locations are two advantages of using synthetic test functions for algorithm benchmarking. On the other hand, real-world design problems frequently give more meaningful insight into the effectiveness of optimization strategies. A new synthetic test function generator has been built to examine various optimization techniques. The objective function noisiness increased significantly under different transformations (Euclidean distance-based weighting, Gaussian weighting and Gabor-like weighting), while the positions of the optima remained the same. The test functions were created to assess and compare the performance of the algorithms in preparation for further development. The optimal proportions of the primary girder of an overhead crane have also been determined. By evaluating the performance of fifteen metaheuristic algorithms, the optimum solution to thirteen mathematical test problems, as well as the box-girder design, is identified. Some conclusions were drawn about the efficiency of the different optimization techniques on the test functions and the transformed noisy functions. The overhead travelling crane girder design demonstrates the real-life application.
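One plausible reading of an optimum-preserving "noisy" transformation is a multiplicative ripple: it roughens the landscape everywhere except where the function value is already zero, so every zero-valued global minimum stays in place. The sketch below illustrates that idea only; the paper's actual generator (with its Euclidean, Gaussian and Gabor-like weightings) is more elaborate.

```python
import numpy as np

def sphere(x):
    # Baseline convex test function; global minimum value 0 at the origin.
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def roughen(f, amp=5.0, freq=7.0):
    """Return a noisy variant g(x) = f(x) * (1 + amp * sin^2(freq * sum(x))).
    The non-negative multiplicative ripple raises the landscape everywhere
    except where f(x) = 0, so zero-valued optima keep their positions.
    amp/freq are illustrative knobs for the 'complexity' of the problem."""
    def g(x):
        x = np.asarray(x, dtype=float)
        return f(x) * (1.0 + amp * np.sin(freq * np.sum(x)) ** 2)
    return g
```

A benchmark harness can then run each metaheuristic on `sphere` and on `roughen(sphere)` and compare how much the added ruggedness degrades each algorithm.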


2021 ◽  
Author(s):  
Mark Jessell ◽  
Jiateng Guo ◽  
Yunqiang Li ◽  
Mark Lindsay ◽  
Richard Scalzo ◽  
...  

Abstract. Unlike some other well-known challenges such as facial recognition, where Machine Learning and inversion algorithms are widely developed, the geosciences suffer from a lack of large, labelled datasets that can be used to validate or train robust Machine Learning and inversion schemes. Publicly available 3D geological models are far too restricted in both number and range of geological scenarios to serve these purposes. For inverting geophysical data this problem is further exacerbated, as in most cases real geophysical observations result from unknown 3D geology, and synthetic test datasets are often not particularly geological, nor geologically diverse. To overcome these limitations, we have used the Noddy modelling platform to generate one million models, which represent the first publicly accessible massive training set of 3D geology and resulting gravity and magnetic datasets. This model suite can be used to train Machine Learning systems and to provide comprehensive test suites for geophysical inversion. We describe the methodology for producing the model suite, and discuss the opportunities such a suite affords, as well as its limitations and how we can grow and access this resource.
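Noddy builds each model from an ordered history of kinematic events applied to a base stratigraphy, so a massive suite amounts to sampling many random histories. A toy generator in that spirit is sketched below; the event names and parameter ranges are purely illustrative and are not Noddy's actual schema.

```python
import random

# Illustrative event vocabulary (not Noddy's full event set).
EVENT_KINDS = ["TILT", "FOLD", "FAULT", "UNCONFORMITY", "DYKE", "PLUG", "SHEAR-ZONE"]

def random_history(n_events, seed=None):
    """Draw a random kinematic event history: a base stratigraphy followed by
    n_events deformation events, each with a few illustrative parameters."""
    rng = random.Random(seed)
    history = [{"event": "STRATIGRAPHY", "n_layers": rng.randint(3, 8)}]
    for _ in range(n_events):
        history.append({
            "event": rng.choice(EVENT_KINDS),
            "dip": rng.uniform(0.0, 90.0),            # degrees
            "dip_direction": rng.uniform(0.0, 360.0),  # degrees
            "amplitude": rng.uniform(100.0, 2000.0),   # metres
        })
    return history
```

Each sampled history would then be handed to the forward modeller to produce the voxel model and its gravity/magnetic responses, giving a labelled (history, geophysics) training pair.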


Author(s):  
Andrea Sciacchitano ◽  
Benjamin Leclaire ◽  
Andreas Schroeder

This work presents the main results of the first Data Assimilation (DA) challenge, conducted within the framework of the European Union’s Horizon 2020 project HOMER (Holistic Optical Metrology for Aero-Elastic Research), grant agreement number 769237. The challenge was jointly organised by the research groups of DLR, ONERA and TU Delft. The same synthetic test case as in the Lagrangian Particle Tracking (LPT) challenge (also presented in this symposium) was considered, reproducing the flow in the wake of a cylinder in proximity of a flat wall. The participants were provided with three datasets containing the measured particle locations and their trajectory identification numbers, at increasing tracer concentrations from 0.04 to 1.4 particles/mm³. The requested outputs were the three components of the velocity, the nine components of the velocity gradient and the static pressure, defined on a Cartesian grid at one specific time instant. The results were analysed in terms of the errors of the output quantities and their distributions. Additionally, the performance of the different DA algorithms was compared with that of a standard linear interpolation approach. Although the velocity errors were found to be in the same range as those of the linear interpolation algorithm, typically between 3% and 12% of the bulk velocity, the use of the DA algorithms enabled an increase of the measurement spatial resolution by a factor between 3 and 4. The errors of the velocity gradient were of the order of 10–15% of the peak vorticity magnitude. Accurate pressure reconstruction was achieved in the flow field, whereas the evaluation of the surface pressure proved more challenging.
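As a baseline of the kind the DA results were compared against, scattered particle velocities can be mapped onto a Cartesian grid by simple inverse-distance weighting. This is a generic stand-in for the challenge's linear-interpolation reference, not any participant's algorithm.

```python
import numpy as np

def idw_to_grid(points, values, grid_pts, power=2.0, eps=1e-12):
    """Interpolate scattered particle velocities onto grid nodes using
    inverse-distance weighting.
    points:   (N, 3) particle positions
    values:   (N, 3) particle velocities
    grid_pts: (G, 3) Cartesian grid nodes
    Returns a (G, 3) velocity field."""
    # Pairwise distances between every grid node and every particle.
    d = np.linalg.norm(grid_pts[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)            # eps guards coincident points
    return (w @ values) / w.sum(axis=1, keepdims=True)
```

Because the output is a weighted average of the samples, a uniform velocity field is reproduced exactly; DA methods go beyond this by enforcing flow physics between the tracks.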


Author(s):  
Laura Gazzola ◽  
Gazzola Ferronato ◽  
Matteo Frigo ◽  
Carlo Janna ◽  
Pietro Teatini ◽  
...  

Abstract Anthropogenic land subsidence can be evaluated and predicted by numerical models, which are often built on deterministic analyses. However, uncertainties and approximations are present, as in any other modeling activity of real-world phenomena. This study aims at combining data assimilation techniques with a physically-based numerical model of anthropogenic land subsidence in a novel and comprehensive workflow, to overcome the main limitations in the way traditional deterministic analyses use the available measurements. The proposed methodology makes it possible to reduce the uncertainties affecting the model, identify the most appropriate rock constitutive behavior, and characterize the most significant governing geomechanical parameters. The proposed methodological approach has been applied to a synthetic test case representative of the Upper Adriatic basin, Italy. The integration of data assimilation techniques into geomechanical modeling appears to be a useful and effective tool for a more reliable study of anthropogenic land subsidence.
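A common way to combine an ensemble of model runs with measurements is a Kalman-type ensemble-smoother analysis step, sketched below for a generic parameter ensemble. This illustrates the data-assimilation idea only; the paper's actual scheme and its geomechanical forward model are far richer.

```python
import numpy as np

def ensemble_update(params, preds, obs, obs_err, seed=0):
    """One ensemble-smoother analysis step.
    params: (Ne, Np) ensemble of uncertain parameters
    preds:  (Ne, Nd) corresponding model predictions of the observed data
    obs:    (Nd,)    measurements, with standard deviation obs_err
    Returns the updated (Ne, Np) parameter ensemble."""
    Ne = params.shape[0]
    dp = params - params.mean(axis=0)
    dd = preds - preds.mean(axis=0)
    C_pd = dp.T @ dd / (Ne - 1)            # parameter-data covariance
    C_dd = dd.T @ dd / (Ne - 1)            # data covariance
    R = (obs_err**2) * np.eye(len(obs))    # observation-error covariance
    K = C_pd @ np.linalg.inv(C_dd + R)     # Kalman gain
    rng = np.random.default_rng(seed)
    perturbed = obs + obs_err * rng.standard_normal((Ne, len(obs)))
    return params + (perturbed - preds) @ K.T
```

On a toy linear model (data = 2 × parameter) the updated ensemble mean moves from the prior guess toward the value implied by the observation, which is exactly the uncertainty-reduction behavior the workflow exploits.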


Author(s):  
Yangkang Chen ◽  
Omar M. Saad ◽  
Min Bai ◽  
Xingye Liu ◽  
Sergey Fomel

Abstract Microseismic source-location imaging is important for inferring the dynamic status of reservoirs during hydraulic fracturing. The accuracy and resolution of the located microseismic sources are closely related to the imaging technique. We present an open-source program for high-fidelity and high-resolution 3D microseismic source-location imaging. The presented code is compact in the sense that all required subroutines are included in one single C program, based on which seismic wavefields can be propagated either forward during a synthetic test or backward during a real time-reversal imaging process. The compact C program is accompanied by a Python script known as the SConstruct file in the Madagascar open-source platform to compile and run the C program. The velocity model and recorded microseismic data can be input using the Python script. This compact program is useful for educational purposes and for future algorithm development. We introduce the basics of the imaging method used in the presented package and present one representative synthetic example and a field data example. The results show that the presented program can be reliably used to locate microseismic sources using a passive seismic dataset.
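The package images sources by time-reversed wavefield propagation; a much cruder relative of that idea, shown here only as a self-contained illustration, is a traveltime grid search that picks the grid node whose predicted arrivals best match the picked ones (the unknown origin time cancels in the residual variance).

```python
import numpy as np

def locate_source(receivers, arrivals, grid, velocity):
    """Toy source location by traveltime grid search in a homogeneous medium
    (not the paper's time-reversal wavefield imaging).
    receivers: (Nr, 3) receiver coordinates
    arrivals:  (Nr,)   picked arrival times (origin time unknown)
    grid:      (Ng, 3) candidate source positions
    Returns the best-fitting grid node."""
    best, best_cost = None, np.inf
    for node in grid:
        t_pred = np.linalg.norm(receivers - node, axis=1) / velocity
        resid = arrivals - t_pred
        cost = np.var(resid)   # constant origin-time shift drops out
        if cost < best_cost:
            best, best_cost = node, cost
    return best
```

At the true source the residual is a constant (the origin time), so its variance is zero, making that node the global minimum of the search.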


Author(s):  
Martyn P. Clark ◽  
Reza Zolfaghari ◽  
Kevin R. Green ◽  
Sean Trim ◽  
Wouter J. M. Knoben ◽  
...  

Abstract The intent of this paper is to encourage improved numerical implementation of land models. Our contributions in this paper are two-fold. First, we present a unified framework to formulate and implement land model equations. We separate the representation of physical processes from their numerical solution, enabling the use of established robust numerical methods to solve the model equations. Second, we introduce a set of synthetic test cases (the laugh tests) to evaluate the numerical implementation of land models. The test cases include storage and transmission of water in soils, lateral sub-surface flow, coupled hydrological and thermodynamic processes in snow, and cryosuction processes in soil. We consider synthetic test cases as “laugh tests” for land models because they provide the most rudimentary test of model capabilities. The laugh tests presented in this paper are all solved with the Structure for Unifying Multiple Modeling Alternatives model (SUMMA) implemented using the SUite of Nonlinear and DIfferential/Algebraic equation Solvers (SUNDIALS). The numerical simulations from SUMMA/SUNDIALS are compared against (1) solutions to the synthetic test cases from other models documented in the peer-reviewed literature; (2) analytical solutions; and (3) observations made in laboratory experiments. In all cases, the numerical simulations are similar to the benchmarks, building confidence in the numerical model implementation. We posit that some land models may have difficulty in solving these benchmark problems. Dedicating more effort to solving synthetic test cases is critical in order to build confidence in the numerical implementation of land models.
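A minimal example of a "laugh test" in this spirit is a numerical solver checked against a known analytical solution. Below, a backward-Euler solve of 1D diffusion (a simple stand-in for the paper's richer soil/snow physics) is compared against the exact decay of a single sine mode.

```python
import numpy as np

def diffuse_implicit(u0, kappa, dx, dt, steps):
    """Backward-Euler solve of u_t = kappa * u_xx on interior nodes with
    zero Dirichlet boundaries: a rudimentary benchmark with an exact answer."""
    n = len(u0)
    r = kappa * dt / dx**2
    # Tridiagonal backward-Euler system matrix (I - dt * kappa * Laplacian).
    A = (np.diag((1.0 + 2.0 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        u = np.linalg.solve(A, u)
    return u
```

For the initial condition sin(πx) on the unit interval, the exact solution decays as exp(-π²t); the implicit solver should track that decay closely, which is precisely the confidence-building check the paper advocates.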


Author(s):  
Jagadeesh Peddapudi, et al.

The most basic transient a circuit breaker must withstand during its operation is the transient recovery voltage (TRV), initiated by the power system as its natural response to current interruption. For testing high-voltage circuit breakers (CBs), direct testing using the power system or short-circuit alternators is not practical, since testing high-voltage CBs of larger capacity requires a testing station of very large capacity. A parallel injection of short-circuit current and transient voltage into medium- and high-voltage CBs by a synthetic model is examined. The transient recovery voltage is produced by a capacitor bank and applied to the CB. An optically triggered spark gap is used to interrupt the short circuit and to apply the transient recovery voltage across the contacts of the circuit breaker. TRV analysis can never be considered complete, as the development of circuit breaker designs and network configurations continues. The most common approach to TRV analysis refers to the so-called prospective TRV, in which the interaction between the circuit breaker itself and the inherent system recovery voltage is neglected. Nevertheless, it is still worthwhile to examine how the circuit breaker affects the transient recovery voltage. An optimal opening/closing sequence of the backup, test object, and auxiliary circuit breakers within an appropriate time window injects the recovery voltage. The effect on the rate of rise of recovery voltage of the reactance of an inductive fault current limiter, as well as of the distance to the fault in a short-line fault condition, is studied. A four-parameter TRV synthetic test circuit based on the parallel current injection method is designed and simulated for testing 145 kV circuit breakers according to the new TRV requirements given in IEC 62271-100.
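The "four-parameter" TRV referenced above is conventionally specified by (u1, t1, uc, t2). A hedged sketch of the reference envelope and the resulting first rate of rise of recovery voltage follows; the standard's time-delay segment is omitted, and the numbers in the test are placeholders, not the 145 kV class values from IEC 62271-100.

```python
def trv_envelope(t, u1, t1, uc, t2):
    """Four-parameter TRV reference envelope (simplified): linear rise to the
    first reference voltage u1 at time t1, then to the peak uc at t2, then
    flat at uc. Voltages in kV, times in microseconds."""
    if t <= t1:
        return u1 * t / t1
    if t <= t2:
        return u1 + (uc - u1) * (t - t1) / (t2 - t1)
    return uc

def rrrv(u1, t1):
    # First rate of rise of recovery voltage, kV/us.
    return u1 / t1
```

A synthetic test circuit is tuned (capacitor bank, injection timing) so that the voltage appearing across the test breaker stays at or above this envelope for the required class.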


Author(s):  
Maria Koroni ◽  
Jeannot Trampert

Summary We present a novel approach for imaging global mantle discontinuities based on full-waveform inversion (FWI). Over the past decades, extensive research has been done on imaging the mantle discontinuities at approximately 400 km and 670 km depth. Accurate knowledge of their topography can put strong constraints on thermal and compositional variations and hence on geodynamic modelling; so far, however, there is little consensus on their topography. We present an approach based on adjoint tomography, which has the advantage that the Fréchet derivatives for the discontinuities and the measurements to be inverted are fully consistent. Rather than working with real data, we focus on synthetic tests, where the answer is known, in order to evaluate the performance of the developed method. All calculations are based on the community code SPECFEM3D_GLOBE. We generate data in fixed 1-D or 3-D elastic background models of mantle velocity; the ‘data’ to be inverted contain topography along the 400 km and 670 km mantle discontinuities. To investigate the approach, we perform several tests: (i) when the elastic background model (1-D or 3-D) is known, we recover the target topography quickly and accurately; (ii) the exact misfit function is not of great importance here, except in terms of convergence speed, much like the choice of inverse algorithm; (iii) when the background model is not known, convergence is markedly slower, but there is reasonable convergence towards the correct target model of discontinuity topography. It has to be noted that our synthetic tests are idealised, and in a real-data situation the convergence to, and the uncertainty of, the inferred model are bound to be worse. However, the use of data consistent with the Fréchet kernels seems to pay off and might improve our consensus on the nature of mantle discontinuities. Our workflow could be incorporated in future FWI mantle models to adequately infer boundary interface topography.
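The synthetic-recovery logic (generate data from a known target, then iterate until the model converges to it) can be shown on a scalar toy problem: Gauss-Newton recovery of a single flat-interface depth from reflection traveltimes in a constant-velocity medium. This illustrates the test methodology only, not the adjoint/SPECFEM3D_GLOBE workflow.

```python
import numpy as np

def traveltimes(d, offsets, v):
    # Reflection traveltimes off a flat interface at depth d (constant v).
    return 2.0 * np.sqrt(d**2 + (offsets / 2.0)**2) / v

def invert_depth(obs, offsets, v, d0=500.0, iters=20):
    """Recover the interface depth by Gauss-Newton iteration: the 'data' come
    from a known target depth, so convergence to it can be checked exactly."""
    d = d0
    for _ in range(iters):
        pred = traveltimes(d, offsets, v)
        # Analytical derivative dt/dd for each offset.
        J = 2.0 * d / (v * np.sqrt(d**2 + (offsets / 2.0)**2))
        r = pred - obs
        d -= np.sum(J * r) / np.sum(J * J)   # Gauss-Newton step
    return d
```

As in the paper's tests, success is judged by whether the iteration recovers the known target, and a poor starting model slows (but need not prevent) convergence.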

