UXO detection and identification based on intrinsic target polarizabilities — A case history

Geophysics ◽  
2009 ◽  
Vol 74 (1) ◽  
pp. B1-B8 ◽  
Author(s):  
Erika Gasperikova ◽  
J. Torquil Smith ◽  
H. Frank Morrison ◽  
Alex Becker ◽  
Karl Kappler

Electromagnetic induction data parameterized in time-dependent object intrinsic polarizabilities can discriminate unexploded ordnance (UXO) from false targets (scrap metal). Data from a cart-mounted system designed to discriminate UXO of [Formula: see text] in diameter are used. Discriminating UXO from irregular scrap metal is based on the principal dipole polarizabilities of a target. Nearly intact UXO displays a single major polarizability coincident with the long axis of the object and two equal, smaller transverse polarizabilities, whereas metal scraps have distinct polarizability signatures that rarely mimic those of elongated symmetric bodies. Based on a training data set of known targets, objects were identified by estimating the probability that an object is a single UXO. Our test survey took place on a military base where [Formula: see text] mortar shells and scrap metal were present. We detected and correctly discriminated all [Formula: see text] mortars, and in that process we added 7% and 17%, respectively, of dry holes (digging scrap) to the total number of excavations in two different survey modes. We also demonstrated a mode of operation that might be more cost effective than current practice.
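To make the polarizability-based decision concrete, here is a minimal illustrative sketch (not the authors' algorithm): given three principal dipole polarizabilities of a target at some reference time gate, it scores how closely the target matches the UXO pattern of one dominant axial polarizability and two nearly equal transverse ones. The function name and the example values are hypothetical.

```python
import numpy as np

def uxo_likeness(polarizabilities):
    """Crude symmetry score from three principal dipole polarizabilities.

    A nearly intact, axially symmetric UXO tends to show one dominant
    polarizability (long axis) and two nearly equal transverse ones.
    `polarizabilities` is a length-3 array at some reference time gate.
    (Illustrative feature only; not the decision rule used in the paper.)
    """
    p = np.sort(np.asarray(polarizabilities, dtype=float))[::-1]  # descending
    transverse_match = 1.0 - abs(p[1] - p[2]) / (p[1] + p[2])     # ~1 if transverse pair is equal
    elongation = p[0] / p[1]                                      # > 1 if one axis dominates
    return transverse_match, elongation

# Example: an elongated symmetric target vs. an irregular scrap-like response
print(uxo_likeness([9.0, 2.1, 2.0]))   # high transverse match, strong elongation
print(uxo_likeness([5.0, 3.5, 0.8]))   # transverse polarizabilities differ
```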

2019 ◽  
Vol 8 (4) ◽  
pp. 12842-12845

Automating the analysis of facial expressions is one of the challenging tasks in opinion mining. This work proposes a technique for identifying an individual's face and any emotions present from a live camera. Expression detection is a sub-area of computer vision that locates a person in a digital image and identifies the facial expression, a key factor in nonverbal communication. The complexity arises mainly in two cases: 1) more than one emotion may coexist on a face, and 2) different individuals do not express the same emotion in exactly the same way. Our aim was to automate the process by identifying the expressions of people in a live video. The system uses the OpenCV library, whose face recognizer module is employed for detecting faces and for training the model. It was able to identify seven different expressions with 75-85% accuracy: happiness, sadness, disgust, fear, anger, surprise, and neutral. An image frame is captured from the video, the face in it is located, and the face is then tested against the training data to predict the emotion and update the result. This process continues for as long as the video input exists. In addition, the training data set should be constructed so that the prediction is independent of age, gender, skin color, and orientation of the human face in the video, as well as of the illumination around the subject.
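As a rough illustration of the frame-by-frame loop described above, the sketch below uses OpenCV's stock Haar-cascade face detector; the emotion classifier itself is not shown in the abstract, so `predict_emotion` is a hypothetical placeholder rather than the trained recognizer the authors used.

```python
import cv2

# Haar-cascade face detector shipped with OpenCV (the abstract's own trained
# recognizer is not shown; `predict_emotion` below is a placeholder).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_emotion(face_gray):
    # Hypothetical stand-in for a classifier trained on the seven expressions.
    return "neutral"

cap = cv2.VideoCapture(0)             # live camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:                        # stop when the video input ends
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        label = predict_emotion(gray[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("expressions", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```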


Buildings ◽  
2019 ◽  
Vol 9 (12) ◽  
pp. 239 ◽  
Author(s):  
Janghyun Kim ◽  
Stephen Frank ◽  
Piljae Im ◽  
James E. Braun ◽  
David Goldwasser ◽  
...  

Automated fault detection and diagnosis (AFDD) tools based on machine-learning algorithms hold promise for lowering cost barriers for AFDD in small commercial buildings; however, access to high-quality training data for such algorithms is often difficult to obtain. To fill the gap in this research area, this study covers the development (Part I) and validation (Part II) of fault models that can be used with the building energy modeling software EnergyPlus® and OpenStudio® to generate a cost-effective training data set for developing AFDD algorithms. Part II (this paper) first presents a methodology of validating fault models with OpenStudio and then presents validation results, which are compared against measurements from a reference building. We discuss the results of our experiments with eight different faults in the reference building (a total of 39 different baseline and faulted scenarios), including our methodology for using fault models along with the reference building model to simulate the same faulted scenarios. Then, we present validation of the fault models by comparing results of simulations and experiments either quantitatively or qualitatively.
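The abstract does not state which quantitative comparison metrics were used; as one plausible way to compare simulated and measured series for a faulted scenario, the sketch below computes normalized mean bias error (NMBE) and the coefficient of variation of the RMSE (CV(RMSE)), two metrics commonly applied when validating building energy models. This is an assumption for illustration, not the paper's stated procedure.

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized mean bias error, in percent (measured minus simulated)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * (m - s).sum() / (len(m) * m.mean())

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, in percent."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sqrt(((s - m) ** 2).mean()) / m.mean()

# Example: a few hourly measurements vs. a simulated faulted scenario
measured  = np.array([12.1, 13.4, 15.0, 14.2, 13.8])
simulated = np.array([11.8, 13.9, 14.6, 14.5, 13.5])
print(f"NMBE = {nmbe(measured, simulated):.1f}%, "
      f"CV(RMSE) = {cv_rmse(measured, simulated):.1f}%")
```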


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. E41-E46 ◽  
Author(s):  
Laurens Beran ◽  
Barry Zelt ◽  
Leonard Pasion ◽  
Stephen Billings ◽  
Kevin Kingdon ◽  
...  

We have developed practical strategies for discriminating between buried unexploded ordnance (UXO) and metallic clutter. These methods are applicable to time-domain electromagnetic data acquired with multistatic, multicomponent sensors designed for UXO classification. Each detected target is characterized by dipole polarizabilities estimated via inversion of the observed sensor data. The polarizabilities are intrinsic target features and so are used to distinguish between UXO and clutter. We tested this processing with four data sets from recent field demonstrations, with each data set characterized by metrics of data and model quality. We then developed techniques for building a representative training data set and determined how the variable quality of estimated features affects overall classification performance. Finally, we devised a technique to optimize classification performance by adapting features during target prioritization.
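As a schematic of this workflow (feature estimation by inversion is omitted, and the classifier choice is an assumption rather than the authors' method), the sketch below trains a probabilistic classifier on polarizability-derived features from a labeled training set and then ranks newly detected targets by their estimated probability of being UXO, producing a prioritized dig list.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per target, e.g. principal polarizabilities sampled at a
# few time gates plus a fit-quality metric; labels: 1 = UXO, 0 = clutter.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))          # stand-in training features
y_train = rng.integers(0, 2, size=200)       # stand-in training labels
X_field = rng.normal(size=(50, 6))           # targets detected in a new survey

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Prioritized dig list: highest estimated probability of being UXO first.
p_uxo = clf.predict_proba(X_field)[:, 1]
dig_order = np.argsort(-p_uxo)
print(dig_order[:10], p_uxo[dig_order[:10]])
```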


Buildings ◽  
2019 ◽  
Vol 9 (11) ◽  
pp. 233 ◽  
Author(s):  
Janghyun Kim ◽  
Stephen Frank ◽  
James E. Braun ◽  
David Goldwasser

Small commercial buildings (those with less than approximately 1000 m2 of total floor area) often do not have access to cost-effective automated fault detection and diagnosis (AFDD) tools for maintaining efficient building operations. AFDD tools based on machine-learning algorithms hold promise for lowering cost barriers for AFDD in small commercial buildings; however, such algorithms require access to high-quality training data that is often difficult to obtain. To fill the gap in this research area, this study covers the development (Part I) and validation (Part II) of fault models that can be used with the building energy modeling software EnergyPlus® and OpenStudio® to generate a cost-effective training data set for developing AFDD algorithms. Part I (this paper) presents a library of fault models, including detailed descriptions of each fault model structure and their implementation with EnergyPlus. This paper also discusses a case study of training data set generation, representing an actual building.
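A possible shape for such a training data set generation loop is sketched below; it is purely illustrative, and `run_simulation` is a hypothetical stand-in for an EnergyPlus/OpenStudio run with a fault model applied (it returns synthetic data here so the example executes end to end).

```python
import itertools
import pandas as pd

def run_simulation(fault_type, severity):
    """Hypothetical stand-in for an EnergyPlus/OpenStudio run of one scenario;
    returns a small synthetic DataFrame of 'sensor' readings for illustration."""
    hours = pd.date_range("2019-07-01", periods=24, freq="h")
    return pd.DataFrame({"time": hours,
                         "supply_air_temp": 13.0 + severity,
                         "chiller_power_kw": 40.0 * (1.0 + 0.1 * severity)})

fault_types = ["none", "economizer_stuck", "sensor_bias", "fouled_coil"]  # illustrative names
severities  = [0.0, 0.25, 0.5, 1.0]

frames = []
for fault, sev in itertools.product(fault_types, severities):
    if fault == "none" and sev > 0.0:
        continue                      # baseline scenario has no severity sweep
    df = run_simulation(fault, sev)   # one baseline or faulted scenario
    df["fault_label"] = fault         # label each row for supervised AFDD training
    df["severity"] = sev
    frames.append(df)

training_set = pd.concat(frames, ignore_index=True)
print(training_set.groupby("fault_label").size())
```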


2019 ◽  
Vol 12 (2) ◽  
pp. 120-127 ◽  
Author(s):  
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering manoeuvring is proposed to support autonomous driving technologies. The training data are collected from a front-facing camera and the steering commands issued by an experienced driver driving in traffic as well as on urban roads. Methods: These data are then used to train the proposed CNN to perform what is called “Behavioral Cloning”. The proposed Behavioral Cloning CNN is named “BCNet”, and its deep seventeen-layer architecture was selected after extensive trials. BCNet is trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper describes the development and training process in detail and presents the image processing pipeline used in the development. Conclusion: After extensive simulations, the proposed approach proved successful in cloning the driving behavior embedded in the training data set.
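For orientation only, the sketch below shows a minimal behavioral-cloning regressor in Keras, trained with the Adam optimizer and a mean-squared-error loss on camera frames paired with steering angles. The layer sizes and the 66×200×3 input shape are illustrative assumptions and do not reproduce the seventeen-layer BCNet architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal behavioral-cloning regressor: camera frame in, steering angle out.
# Layer counts and sizes are illustrative, not the paper's BCNet architecture.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(66, 200, 3)),
    layers.Rescaling(1.0 / 255.0),
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(1),                     # predicted steering command
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")

# Training would pair recorded frames with the driver's steering angles, e.g.:
# model.fit(camera_frames, steering_angles, epochs=10, validation_split=0.2)
```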


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between business and data science. With the involvement of data science, the business goal focuses on extracting valuable insights from available data. A large part of Indian cinema is Bollywood, a multi-million dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average, or Flop by applying machine learning techniques for classification and prediction. To build a classifier or prediction model, the first step is the learning stage, in which the training data set is used to train the model with some technique or algorithm; the rules generated in this stage make up the model and are used to predict future trends in different types of organizations. Methods: Classification and prediction techniques such as Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN are applied in an attempt to find efficient and effective results. All these functionalities can be applied through GUI-based workflows organized into categories such as Data, Visualize, Model, and Evaluate. Result: A comparative analysis is performed on parameters such as accuracy and the confusion matrix to identify the best possible model for predicting movie success. Conclusion: With the predicted success rate, production houses can plan advertisement propaganda and choose the best time to release the movie to gain higher benefits. Discussion: Data mining is the process of discovering patterns in large data sets, and from these patterns relationships are discovered that help solve business problems and predict forthcoming trends. This prediction can help production houses plan advertisement propaganda and their costs, and by accounting for these factors they can make the movie more profitable.
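A minimal sketch of such a comparison with scikit-learn is shown below; the movie features and labels are random stand-ins (the actual feature set is not described in the abstract), and the point is only the pattern of fitting each classifier and reporting accuracy and the confusion matrix.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical numeric movie features (budget, star power, screens, ...) and
# labels 0-4 for Flop/Average/Hit/Superhit/Blockbuster; stand-in random data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = rng.integers(0, 5, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
    "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```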


2008 ◽  
Vol 53 (No. 3) ◽  
pp. 97-104 ◽  
Author(s):  
M. Zouhar ◽  
M. Marek ◽  
O. Douda ◽  
J. Mazáková ◽  
P. Ryšánek

Ditylenchus dipsaci, the stem nematode, is a migratory endoparasite of over 500 species of angiosperms. The main method of D. dipsaci control is crop rotation, but the presence of morphologically indistinguishable host races with different host preferences makes rotation generally ineffective. Therefore, a sensitive, rapid, reliable, and cost-effective technique is needed for identification of D. dipsaci in biological samples. This study describes the development of species-specific pairs of PCR oligonucleotides for detection and identification of the D. dipsaci stem nematode in various plant hosts. The designed DIT-2 primer pair specifically amplified a fragment of 325 bp, while the DIT-5 primer pair always produced a fragment of 245 bp, in all D. dipsaci isolates. The two developed SCAR primer pairs were further tested using template DNA extracted from a collection of twelve healthy plant hosts; however, no amplification was observed. The developed PCR protocol has proved to be quite sensitive and able to specifically detect D. dipsaci in artificially infested plant tissues.


2019 ◽  
Vol 9 (6) ◽  
pp. 1128 ◽  
Author(s):  
Yundong Li ◽  
Wei Hu ◽  
Han Dong ◽  
Xueyan Zhang

Using aerial cameras, satellite remote sensing, or unmanned aerial vehicles (UAV) equipped with cameras can facilitate search and rescue tasks after disasters. The traditional manual interpretation of huge volumes of aerial images is inefficient and could be replaced by machine learning-based methods combined with image processing techniques. With the development of machine learning, researchers have found that convolutional neural networks can effectively extract features from images. Some target detection methods based on deep learning, such as the single-shot multibox detector (SSD) algorithm, can achieve better results than traditional methods. However, the impressive performance of machine learning-based methods relies on numerous labeled samples, and given the complexity of post-disaster scenarios, obtaining many samples in the aftermath of a disaster is difficult. To address this issue, a damaged building assessment method using SSD with pretraining and data augmentation is proposed in the current study, with the following highlights. (1) Objects are detected and classified into undamaged buildings, damaged buildings, and ruins. (2) A convolutional auto-encoder (CAE) based on VGG16 is constructed and trained using unlabeled post-disaster images; as a transfer learning strategy, the weights of the SSD model are initialized with the weights of the CAE counterpart. (3) Data augmentation strategies, such as image mirroring, rotation, Gaussian blur, and Gaussian noise, are utilized to enlarge the training data set. As a case study, aerial images of Hurricane Sandy in 2012 were used to validate the proposed method's effectiveness. Experiments show that the pretraining strategy improves overall accuracy by 10% compared with the SSD trained from scratch, and that the data augmentation strategies improve mAP and mF1 by 72% and 20%, respectively. Finally, the approach is further verified on another data set, from Hurricane Irma, and it is concluded that the proposed method is feasible.
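The sketch below illustrates the four augmentation operations named in the abstract (mirroring, rotation, Gaussian blur, Gaussian noise) with OpenCV and NumPy; the rotation range, noise level, and file name are illustrative assumptions, not the study's settings.

```python
import cv2
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of one training image: mirroring,
    rotation, Gaussian blur, and additive Gaussian noise."""
    h, w = image.shape[:2]
    mirrored = cv2.flip(image, 1)                                   # horizontal mirror
    angle = float(rng.uniform(-15, 15))                             # assumed rotation range
    rot_matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, rot_matrix, (w, h))
    blurred = cv2.GaussianBlur(image, (5, 5), 0)
    noisy = np.clip(image.astype(np.float32) + rng.normal(0, 10, image.shape),
                    0, 255).astype(np.uint8)                        # assumed noise level
    return [mirrored, rotated, blurred, noisy]

rng = np.random.default_rng(0)
img = cv2.imread("aerial_tile.png")          # hypothetical post-disaster image tile
if img is not None:
    augmented = augment(img, rng)
    print(len(augmented), "augmented samples generated")
```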


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoya Shiode ◽  
Mototaka Kabashima ◽  
Yuta Hiasa ◽  
Kunihiro Oka ◽  
Tsuyoshi Murase ◽  
...  

The purpose of this study was to develop a deep learning network for estimating and constructing highly accurate 3D bone models directly from actual X-ray images, and to verify its accuracy. The data used were 173 computed tomography (CT) images and 105 actual X-ray images of a healthy wrist joint. To compensate for the small size of the dataset, digitally reconstructed radiography (DRR) images generated from CT were used as training data instead of actual X-ray images. At test time, DRR-like images were generated from the actual X-ray images and fed to the network, making high-accuracy estimation of a 3D bone model from a small data set possible. The 3D shapes of the radius and ulna were estimated from actual X-ray images with accuracies of 1.05 ± 0.36 and 1.45 ± 0.41 mm, respectively.
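A DRR can be approximated, under a simple parallel-projection assumption, by integrating CT attenuation along one axis; the sketch below shows only that crude approximation and is not the DRR generation method used in the study.

```python
import numpy as np

def simple_drr(ct_volume, axis=0):
    """Very simple parallel-projection DRR: integrate CT intensities along one
    axis and rescale to 8-bit. Real DRR generation models ray geometry and
    attenuation more carefully; this is only an illustrative approximation."""
    proj = ct_volume.astype(np.float32).sum(axis=axis)
    proj -= proj.min()
    if proj.max() > 0:
        proj /= proj.max()
    return (255 * proj).astype(np.uint8)

# Example with a synthetic volume standing in for a wrist CT scan
ct = np.random.default_rng(0).normal(size=(128, 128, 128))
drr = simple_drr(ct, axis=2)
print(drr.shape, drr.dtype)
```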


Genetics ◽  
2021 ◽  
Author(s):  
Marco Lopez-Cruz ◽  
Gustavo de los Campos

Genomic prediction uses DNA sequences and phenotypes to predict genetic values. In homogeneous populations, theory indicates that the accuracy of genomic prediction increases with sample size. However, differences in allele frequencies and in linkage disequilibrium patterns can lead to heterogeneity in SNP effects. In this context, calibrating genomic predictions using a large, potentially heterogeneous, training data set may not lead to optimal prediction accuracy. Some studies have tried to address this sample size/homogeneity trade-off using training set optimization algorithms; however, this approach assumes that a single training data set is optimal for all individuals in the prediction set. Here, we propose an approach that identifies, for each individual in the prediction set, a subset of the training data (i.e., a set of support points) from which predictions are derived. The proposed methodology is a Sparse Selection Index (SSI) that integrates Selection Index methodology with sparsity-inducing techniques commonly used for high-dimensional regression. The sparsity of the resulting index is controlled by a regularization parameter (λ); G-BLUP (the prediction method most commonly used in plant and animal breeding) appears as the special case obtained when λ = 0. In this study, we present the methodology and demonstrate, using two wheat data sets with phenotypes collected in ten different environments, that the SSI can achieve significant gains in prediction accuracy (between 5% and 10%) relative to G-BLUP.
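For reference, the G-BLUP special case (λ = 0) can be written as a selection index in a few lines of NumPy: each candidate's prediction is a weighted sum of centered training phenotypes, with weights derived from the genomic relationship matrix. The variance ratio `delta` and the toy data are assumptions for illustration, and the SSI's sparsity step on the index weights is not implemented here.

```python
import numpy as np

def gblup_predict(G, train_idx, pred_idx, y_train, delta=1.0):
    """G-BLUP written as a selection index: each candidate's genetic value is a
    weighted sum of centered training phenotypes. `G` is the genomic
    relationship matrix for all individuals; `delta` = sigma_e^2 / sigma_u^2
    (assumed known here). The paper's SSI would additionally force many of the
    index weights B to zero; that sparsity step is not implemented."""
    G_oo = G[np.ix_(train_idx, train_idx)]
    G_po = G[np.ix_(pred_idx, train_idx)]
    B = G_po @ np.linalg.inv(G_oo + delta * np.eye(len(train_idx)))  # index weights
    return B @ (y_train - y_train.mean())

# Toy example: relationship matrix from standardized markers
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 500))                  # 100 individuals, 500 markers
G = M @ M.T / M.shape[1]
y = rng.standard_normal(100)
u_hat = gblup_predict(G, train_idx=np.arange(80), pred_idx=np.arange(80, 100),
                      y_train=y[:80], delta=1.0)
print(u_hat[:5])
```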

