ADVERSARIAL OPEN DOMAIN ADAPTION FRAMEWORK (AODA): SKETCH-TO-PHOTO SYNTHESIS

Author(s):  
Amey Thakur ◽  
Mega Satish

This paper aims to demonstrate the efficiency of the Adversarial Open Domain Adaption (AODA) framework for sketch-to-photo synthesis. Unsupervised open domain adaption for generating realistic photos from a hand-drawn sketch is challenging because no sketches of the open-domain classes are available as training data. The absence of learning supervision and the large domain gap between the freehand-drawing and photo domains make the task difficult. We present an approach that jointly learns sketch-to-photo and photo-to-sketch generation in order to synthesise the missing freehand drawings from photos. Owing to the domain gap between synthetic sketches and genuine ones, a generator trained on fake drawings may produce unsatisfactory results when handling drawings of the missing classes. To address this problem, we propose a simple but effective open-domain sampling and optimisation strategy that “tricks” the generator into treating fake drawings as genuine. Our approach generalises the learnt sketch-to-photo and photo-to-sketch mappings from in-domain inputs to open-domain categories. We compared our technique with the most recent competing methods on the Scribble and SketchyCOCO datasets. For many classes of open-domain drawings, our model achieves impressive results in synthesising accurate colour and substance and in retaining the structural layout.
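The open-domain sampling trick described above amounts to giving synthetic sketches "real" labels in the generator's adversarial objective. A minimal numpy sketch of that idea (the binary cross-entropy formulation and function names are illustrative, not the paper's exact losses):

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy on discriminator probabilities.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_sketch_loss(d_probs_on_fake_sketches):
    # Open-domain trick: synthetic sketches of the missing classes are
    # labelled as *real* (target = 1), so the generator is pushed to
    # treat its own fake drawings like genuine freehand sketches.
    targets = np.ones_like(d_probs_on_fake_sketches)
    return bce(d_probs_on_fake_sketches, targets)
```

The loss shrinks as the discriminator is increasingly fooled into rating the fake sketches as real.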

2021 ◽  
Author(s):  
Faruk Alpak ◽  
Yixuan Wang ◽  
Guohua Gao ◽  
Vivek Jain

Abstract Recently, a novel distributed quasi-Newton (DQN) derivative-free optimization (DFO) method was developed for generic reservoir performance optimization problems including well-location optimization (WLO) and well-control optimization (WCO). DQN is designed to effectively locate multiple local optima of highly nonlinear optimization problems. However, its performance has neither been validated by realistic applications nor compared to other DFO methods. We have integrated DQN into a versatile field-development optimization platform designed specifically for iterative workflows enabled through distributed-parallel flow simulations. DQN is benchmarked against alternative DFO techniques, namely, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method hybridized with Direct Pattern Search (BFGS-DPS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimization (PSO), and Genetic Algorithm (GA). DQN is a multi-thread optimization method that distributes an ensemble of optimization tasks among multiple high-performance-computing nodes. Thus, it can locate multiple optima of the objective function in parallel within a single run. Simulation results computed from one DQN optimization thread are shared with others by updating a unified set of training data points composed of responses (implicit variables) of all successful simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by a linear-interpolation technique using all or a subset of training-data points. The gradient of the objective function is analytically computed using the estimated sensitivities of implicit variables with respect to explicit variables. The Hessian matrix is then updated using the quasi-Newton method. A new search point for each thread is solved from a trust-region subproblem for the next iteration. In contrast, other DFO methods rely on a single-thread optimization paradigm that can only locate a single optimum. 
To locate multiple optima with such methods, one must repeat the same optimization process multiple times starting from different initial guesses. Moreover, simulation results generated by a single-thread optimization task cannot be shared with other tasks. Benchmarking results are presented for synthetic yet challenging WLO and WCO problems. Finally, the DQN method is field-tested on two realistic applications. DQN identifies the global optimum with the fewest simulations and the shortest run time on a synthetic problem with a known solution. On the other benchmarking problems, without a known solution, DQN identified comparable local optima with considerably fewer simulations than the alternative techniques. Field-testing results reinforce the favorable computational attributes of DQN. Overall, the results indicate that DQN is a novel and effective parallel algorithm for field-scale development optimization problems.
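The quasi-Newton Hessian update at the heart of each DQN thread can be illustrated in a few lines. This is a generic BFGS update sketch, not the authors' implementation; the matrix A and step s below are toy values:

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS quasi-Newton update of the Hessian approximation B.

    s : step taken,       s = x_new - x_old
    y : gradient change,  y = g_new - g_old
    The updated matrix satisfies the secant condition B_new @ s = y.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Toy quadratic f(x) = 0.5 x^T A x, whose gradient change along a
# step s is exactly y = A @ s.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
s = np.array([1.0, 0.0])
B_new = bfgs_update(np.eye(2), s, A @ s)
```

In DQN the gradient itself is not available; it is estimated from the shared training-data points before the update is applied.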


2021 ◽  
Vol 14 (2) ◽  
pp. 127-135
Author(s):  
Fadhil Yusuf Rahadika ◽  
Novanto Yudistira ◽  
Yuita Arum Sari

During the COVID-19 pandemic, many offline activities were moved online via video meetings to prevent the spread of the COVID-19 virus. In online video meetings, some micro-interactions are missing compared with direct social interaction. Using machines to assist facial expression recognition in online video meetings is expected to improve understanding of the interactions among users. Many studies have shown that CNN-based neural networks are effective and accurate in image classification. In this study, several open facial expression datasets were used to train CNN-based neural networks, totalling 342,497 training images. The best results were obtained with a ResNet-50 architecture using the Mish activation function and the Accuracy Booster Plus block. This architecture was trained with the Ranger optimizer and Gradient Centralization for 60,000 steps with a batch size of 256. The best model achieves accuracies of 0.5972 on the AffectNet validation set, 0.8636 on the FERPlus validation set, 0.8488 on the FERPlus test set, and 0.8879 on the RAF-DB test set. The proposed method outperformed plain ResNet in all test scenarios without transfer learning, and there is potential for better performance with a pre-trained model. The code is available at https://github.com/yusufrahadika-facial-expressions-essay.
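The Mish activation used in the best-performing model is simple to state: Mish(x) = x · tanh(softplus(x)). A minimal numpy version (framework implementations typically add a numerically stabilised softplus for large inputs):

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x).
    # Smooth, non-monotonic, and approximately identity for large x.
    return x * np.tanh(np.log1p(np.exp(x)))
```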


2021 ◽  
Vol 9 ◽  
pp. 929-944
Author(s):  
Omar Khattab ◽  
Christopher Potts ◽  
Matei Zaharia

Abstract Systems for Open-Domain Question Answering (OpenQA) generally depend on a retriever for finding candidate passages in a large corpus and a reader for extracting answers from those passages. In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages. We argue that this modeling choice is insufficiently expressive for dealing with the complexity of natural language questions. To address this, we define ColBERT-QA, which adapts the scalable neural retrieval model ColBERT to OpenQA. ColBERT creates fine-grained interactions between questions and passages. We propose an efficient weak supervision strategy that iteratively uses ColBERT to create its own training data. This greatly improves OpenQA retrieval on Natural Questions, SQuAD, and TriviaQA, and the resulting system attains state-of-the-art extractive OpenQA performance on all three datasets.
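ColBERT's fine-grained interaction is the MaxSim operator: each query token embedding is scored against its best-matching passage token, and the per-token maxima are summed. A minimal numpy sketch (token embeddings are assumed to be already encoded; variable names are illustrative):

```python
import numpy as np

def maxsim_score(Q, D):
    """ColBERT-style late-interaction score.

    Q : (num_query_tokens, dim) query token embeddings
    D : (num_doc_tokens, dim)   passage token embeddings
    """
    sim = Q @ D.T                  # (q_tokens, d_tokens) dot-product similarities
    return sim.max(axis=1).sum()   # best match per query token, then sum
```

Because each passage token is embedded independently, passage representations can be precomputed and indexed, which is what makes the model scalable for retrieval.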


Author(s):  
Ramasubramanian Sundararajan ◽  
Hima Patel ◽  
Manisha Srivastava

Traditionally, supervised learning algorithms are built using labeled training data. Accurate labels are essential to guide the classifier towards an optimal separation between the classes. However, there are several real-world scenarios where class labels at the instance level are unavailable, imprecise, or difficult to obtain, or where the problem is naturally posed as one of classifying groups of instances. To tackle these challenges, we draw attention to Multi-Instance Learning (MIL) algorithms, in which labels are available at the bag level rather than at the instance level. In this chapter, we motivate the need for MIL algorithms and describe an ensemble-based method wherein the members of the ensemble are lazy-learning classifiers using the Citation Nearest Neighbour method. Diversity among the ensemble members is achieved by optimizing their parameters with a multi-objective optimization method, the objectives being to maximize positive-class accuracy and minimize the false positive rate. We demonstrate the methodology on the standard Musk 1 dataset.
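Citation Nearest Neighbour classifies a bag using both its nearest neighbours ("references") and the bags that count it among their own neighbours ("citers"); the bag-level metric is typically a Hausdorff-type distance between sets of instances. A minimal numpy sketch of the classic Hausdorff distance (the chapter's exact variant may differ):

```python
import numpy as np

def hausdorff(bag_a, bag_b):
    """Hausdorff distance between two bags of feature vectors.

    bag_a : (n_a, dim) array, bag_b : (n_b, dim) array.
    Each point is matched to its nearest point in the other bag;
    the distance is the worst such nearest-neighbour gap.
    """
    d = np.linalg.norm(bag_a[:, None, :] - bag_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```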


2019 ◽  
Vol 21 (Supplement_6) ◽  
pp. vi167-vi167
Author(s):  
Lujia Wang ◽  
Hyunsoo Yoon ◽  
Andrea Hawkins-Daarud ◽  
Kyle Singleton ◽  
Kamala Clark-Swanson ◽  
...  

Abstract BACKGROUND An important challenge in radiomics research is reproducibility. Images are collected on different scanners and protocols, which introduces significant variability even for the same type of image across institutions. In the present proof-of-concept study, we address the reproducibility issue by using domain adaptation – an algorithm that transforms the radiomic features of each new patient to align with the distribution of features formed by the patient samples in a training set. METHOD Our dataset included 18 training patients with a total of 82 biopsy samples. The pathological tumor cell density was available for each sample. Radiomic (statistical + texture) features were extracted from the region of six image contrasts locally matched with each biopsy sample. A Gaussian Process (GP) classifier was built to predict tumor cell density from the radiomic features. Another 6 patients, with a total of 31 biopsy samples, were used to test the trained model. The images of each test patient were purposely normalized using a different approach, i.e., using the CSF instead of the whole brain as the reference, to mimic the practical scenario of image-source discrepancy between patients. Domain adaptation was applied to each test patient. RESULTS Among the 18 training patients, the leave-one-patient-out cross-validation accuracy was 0.81 AUC, 0.78 sensitivity, and 0.83 specificity. When the trained model was applied to the 6 test patients (purposely normalized differently from the training data), the accuracy dropped dramatically to 0.39 AUC, 0.08 sensitivity, and 0.61 specificity. After applying domain adaptation, the accuracy improved to 0.68 AUC, 0.62 sensitivity, and 0.72 specificity. CONCLUSION We provide candidate enabling tools for addressing reproducibility in radiomics models by using domain adaptation algorithms to account for image discrepancies between patients.
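The feature-transformation idea can be illustrated with a simple location-scale alignment: standardise a new patient's features within their own domain, then re-express them in the training cohort's distribution. This is a hedged sketch of the general idea, not necessarily the study's exact algorithm:

```python
import numpy as np

def align_to_training(x_new, train_mean, train_std, new_mean, new_std):
    """Location-scale domain adaptation of radiomic features.

    Standardises features of a new patient using that patient's own
    feature statistics, then rescales them to match the training-set
    distribution, so a model trained on the original cohort applies.
    """
    z = (x_new - new_mean) / new_std      # z-score in the new domain
    return z * train_std + train_mean     # re-express in the training domain
```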


SPE Journal ◽  
2015 ◽  
Vol 20 (04) ◽  
pp. 701-716 ◽  
Author(s):  
Guohua Gao ◽  
Jeroen C. Vink ◽  
Faruk O. Alpak ◽  
W.. Mo

Summary In-situ upgrading process (IUP) is an attractive technology for developing unconventional extraheavy-oil reserves. Decisions are generally made on field-scale economics evaluated with dedicated commercial tools. However, it is difficult to conduct an automated IUP optimization process because no interface is available between the economic evaluator and the commercial simulator/optimizer, and because IUP is such a complex process that full-field simulations are generally not feasible. In this paper, we developed an efficient optimization work flow by addressing three technical challenges for field-scale IUP developments. The first challenge was addressed by deriving an upscaling factor modeled after an analytical superposition formulation, proposing an effective method of scaling up simulation results and economic terms from a single-pattern IUP reservoir-simulation model to field scale, and validating this approach numerically. The second challenge was addressed by proposing a response-surface model (RSM) of field economics that analytically computes key field economic indicators, such as net present value (NPV), using only a few single-pattern economic terms together with the upscaling factor, and by validating this approach against a commercial tool. The proposed RSM approach is more efficient, accurate, and convenient because it requires only 15–20 simulation cases as training data, compared with the thousands of simulation runs required by conventional methods. The third challenge was addressed by developing a new optimization method with many attractive features: it is well parallelized, highly efficient and robust, and has a much wider spectrum of applications than gradient-based or derivative-free methods, being applicable to problems with no derivatives, with derivatives available for some variables, or with derivatives available for all variables.
This work flow allows us to perform automated field IUP optimizations by maximizing a full-field economics target while honoring all field-level facility constraints effectively. We have applied the work flow to optimize the IUP development of a carbonate heavy-oil asset. Our results show that the approach is robust and efficient, and leads to development options with a significantly improved field-scale NPV. This work flow can also be applied to other kinds of pattern-based field developments of shale gas and oil, and thermal processes such as steamdrive or steam-assisted gravity drainage.
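The response-surface idea behind the second challenge can be sketched as fitting a low-order polynomial to a handful of training cases by least squares. The data below are synthetic stand-ins for single-pattern simulation results, not values from the paper:

```python
import numpy as np

# Quadratic response surface in one economic term x:
#   NPV(x) ≈ c0 + c1*x + c2*x^2
# fitted by least squares to a few training cases, standing in for
# the 15-20 simulation cases used to train the RSM.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
npv = 2.0 + 0.5 * x - 0.1 * x**2             # "simulated" responses
X = np.vander(x, 3, increasing=True)          # columns: 1, x, x^2
coef, *_ = np.linalg.lstsq(X, npv, rcond=None)
```

Once fitted, evaluating the polynomial is essentially free, which is why an RSM can replace thousands of full economic evaluations.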


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5282
Author(s):  
Luca De Vito ◽  
Enrico Picariello ◽  
Francesco Picariello ◽  
Sergio Rapuano ◽  
Ioan Tudosa

This paper presents a new approach to optimizing a dictionary used in ECG signal compression and reconstruction systems based on Compressed Sensing (CS). As an alternative to fully data-driven methods, which learn the dictionary from the training data, the proposed approach starts from an overcomplete wavelet dictionary, which is then reduced by means of a training phase. Moreover, the frames are aligned according to the position of the R-peak, so that the dictionary optimization can exploit the different scaling features of the ECG waves. A training phase is therefore first performed to optimize the overcomplete dictionary matrix by reducing its number of columns. The optimized matrix is then used in combination with a dynamic sensing matrix to compress and reconstruct the ECG waveform. The mathematical formulation of the patient-specific optimization is presented, and three optimization algorithms are evaluated. For each of them, the convergence parameter is tuned experimentally to ensure that the algorithm works under its most suitable conditions. The performance of each algorithm is assessed by the Percentage Root-mean-squared Difference (PRD) and compared with state-of-the-art techniques. The experimental results demonstrate that: (i) an optimized dictionary matrix achieves better ECG reconstruction quality than other methods, (ii) the regularization parameters of the optimization algorithms should be properly tuned to achieve the best reconstruction results, and (iii) the Multiple Orthogonal Matching Pursuit (M-OMP) algorithm is the best suited among those examined.
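The matching-pursuit family used for sparse reconstruction can be illustrated with plain Orthogonal Matching Pursuit (the paper evaluates the Multiple-OMP variant, which differs in detail):

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the dictionary
    column most correlated with the residual, then re-fit all
    selected coefficients by least squares.

    A : (m, n) dictionary matrix, y : (m,) measurements.
    Returns a sparse coefficient vector with n_nonzero entries.
    """
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```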


2014 ◽  
Vol 2014 ◽  
pp. 1-5 ◽  
Author(s):  
Yuqiao Zheng ◽  
Rongzhen Zhao ◽  
Hong Liu

This paper presents a recently developed numerical multidisciplinary optimization method for the design of wind turbine blades. The objective was the lowest possible blade weight under specified atmospheric conditions, determined by the design variables describing the girder layup and location parameters. The blade, built on a box-section girder beam, is characterized by ply thickness, main girder, and trailing-edge parameters. In this study, a realistic girder-parameter model of a 30 m blade from a 1.2 MW wind turbine is established. The optimization evolves a structure that transforms along the length of the blade, changing from a design with spar caps at the maximum thickness and a trailing-edge mass to a design with spar caps toward the tip. In addition, the cross-sectional structural properties and the modal characteristics of a 62 m rotor blade were predicted with the developed beam finite element. In summary, these findings indicate that the conventional structural layout of a wind turbine blade is suboptimal under static load conditions, suggesting an opportunity to reduce blade weight and cost.


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1148
Author(s):  
Ilok Jung ◽  
Jongin Lim ◽  
Huykang Kim

The number of studies applying machine learning to cyber security has increased over the past few years. These studies, however, face difficulties becoming usable in the real world, mainly due to a lack of training data and poor reusability of trained models. While transfer learning seems like a solution to these problems, the number of studies in the field of intrusion detection is still insufficient. This study therefore proposes payload feature-based transfer learning as a solution to the lack of training data when applying machine learning to intrusion detection, using knowledge from an already known domain. First, it expands the range of extracted information from the header to the payload, using an effective hybrid feature extraction method to convey the information accurately. Second, it provides an improved optimization method for the extracted features to create a labeled dataset for a target domain. The proposal was validated on publicly available datasets in three distinct scenarios, and the results confirmed its practical usability: the accuracy obtained with training data created by transfer learning was 30% higher than that of the non-transfer-learning method. In addition, we showed that this approach can help identify previously unknown attacks and reuse models from different domains.
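A simple example of a payload-level feature is a normalised byte-frequency histogram; the paper's hybrid extraction combines richer features, so this is only an illustrative sketch:

```python
import numpy as np

def payload_byte_histogram(payload: bytes):
    """Normalised byte-frequency histogram of a packet payload.

    Returns a length-256 vector giving the relative frequency of each
    byte value - one basic payload feature that, unlike header fields,
    carries information about the transported content.
    """
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8),
                         minlength=256)
    return counts / max(len(payload), 1)
```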


Author(s):  
Lin Htet Tun ◽  
P.V. Prosuntsov

The paper presents a methodology for designing the load-bearing elements of the tail section of a light aircraft through sequential application of parametric and topological optimization methods. First, we analyzed the loads acting on the aircraft during maneuvering in the vertical and horizontal planes. Then, for these loads, we selected the locations of the ribs of the tail section by parametric optimization; these were subsequently used to develop individual rib shapes based on topology optimization. Next, we carried out parametric optimization of the layup angles of the polymer composite material intended for the production of the ribs. Finally, we developed a structural layout for the load-bearing elements of the fuselage that meets the minimum-weight criterion under restrictions on the stress levels in some layers of the composite material.

