Applications of a Realistic Model for CCD Imaging

1995 ◽  
Vol 167 ◽  
pp. 371-372
Author(s):  
Steve B. Howell ◽  
William J. Merline

We have constructed a computer model for simulation of point sources imaged on CCDs. An attempt has been made to ensure that the model produces “data” that mimic real data taken with 2-D detectors. To be realistic, such simulations must include randomly generated noise of the appropriate type from all sources. The synthetic data are output as simple 1-D integrations, as 2-D radial slices, and as 3-D intensity plots. Each noise source can be turned on or off, so the sources can be studied independently as well as in combination to provide insight into the image components.
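As a rough illustration of this design (a minimal sketch, not the authors' model; the Gaussian PSF, flux, sky level, and read-noise values are assumptions), each noise source can be toggled independently:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ccd(n=64, flux=5000.0, fwhm=4.0, sky=20.0, read_noise=5.0,
                 shot_on=True, sky_on=True, read_on=True):
    """Simulate a Gaussian point source on an n x n CCD frame,
    with each noise source independently switchable."""
    y, x = np.mgrid[0:n, 0:n]
    sigma = fwhm / 2.355                      # FWHM -> Gaussian sigma
    psf = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * sigma ** 2))
    signal = flux * psf / psf.sum()           # expected source counts per pixel
    if sky_on:
        signal = signal + sky                 # uniform sky background
    image = rng.poisson(signal).astype(float) if shot_on else signal.copy()
    if read_on:
        image += rng.normal(0.0, read_noise, size=(n, n))  # Gaussian read noise
    return image

frame = simulate_ccd()                        # all noise sources on
noise_free = simulate_ccd(shot_on=False, sky_on=False, read_on=False)
```

Turning all sources off recovers the noiseless point-spread profile, so each component's contribution can be studied in isolation.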

Solid Earth ◽  
2019 ◽  
Vol 10 (4) ◽  
pp. 1301-1319 ◽  
Author(s):  
Joeri Brackenhoff ◽  
Jan Thorbecke ◽  
Kees Wapenaar

Abstract. We aim to monitor and characterize signals in the subsurface by combining these passive signals with recorded reflection data at the surface of the Earth. To achieve this, we propose a method to create virtual receivers from reflection data using the Marchenko method. By applying homogeneous Green’s function retrieval, these virtual receivers are then used to monitor the responses from subsurface sources. We consider monopole point sources with a symmetric source signal, for which the full wave field without artifacts in the subsurface can be obtained. Responses from more complex source mechanisms, such as double-couple sources, can also be used and provide results with comparable quality to the monopole responses. If the source signal is not symmetric in time, our technique based on homogeneous Green’s function retrieval provides an incomplete signal, with additional artifacts. The duration of these artifacts is limited and they are only present when the source of the signal is located above the virtual receiver. For sources along a fault rupture, this limitation is also present and more severe due to the source activating over a longer period of time. Part of the correct signal is still retrieved, as is the source location of the signal. These artifacts do not occur in another method that creates virtual sources as well as receivers from reflection data at the surface. This second method can be used to forecast responses to possible future induced seismicity sources (monopoles, double-couple sources and fault ruptures). This method is applied to field data, and similar results to the ones on synthetic data are achieved, which shows the potential for application on real data signals.
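The time-symmetry requirement can be illustrated with a toy 1-D example, sketching only the homogeneous Green's function identity Gh(t) = G(t) + G(-t), not the Marchenko scheme itself; the delta-function Green's function and Gaussian wavelets are assumptions:

```python
import numpy as np

t = np.arange(-256, 257)                  # symmetric time axis (odd length)
g = np.zeros(t.size)                      # causal Green's function:
g[t == 60] = 1.0                          # a single arrival at t = 60

def wavelet(shift=0.0, width=8.0):
    """Gaussian source signal; shift != 0 makes it asymmetric in time."""
    return np.exp(-((t - shift) / width) ** 2)

def retrieved(w):
    """What summing the response with its time reversal gives: r(t) + r(-t)."""
    r = np.convolve(g, w, mode="same")
    return r + r[::-1]

def reference(w):
    """Target: homogeneous Green's function G(t) + G(-t), convolved with w."""
    return np.convolve(g + g[::-1], w, mode="same")

err_sym = np.max(np.abs(retrieved(wavelet()) - reference(wavelet())))
err_asym = np.max(np.abs(retrieved(wavelet(shift=10.0))
                         - reference(wavelet(shift=10.0))))
```

For the symmetric wavelet the two agree exactly; for the time-shifted wavelet a residual artifact of limited duration remains, mirroring the behaviour described above.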



2020 ◽  
Vol 222 (1) ◽  
pp. 544-559
Author(s):  
Lianqing Zhou ◽  
Xiaodong Song ◽  
Richard L Weaver

SUMMARY Ambient noise correlation has been used extensively to retrieve traveltimes of surface waves. However, studies retrieving amplitude information and attenuation from ambient noise are limited. In this study, we develop methods and strategies to extract Rayleigh wave amplitude and attenuation from ambient noise correlation, based on theoretical derivation, numerical simulation, and practical considerations of real seismic data. The synthetic data include a numerical simulation with a highly anisotropic noise source and Earth-like, temporally varying source strength. Results from synthetic data validate that amplitudes and attenuations can indeed be extracted from noise correlations for a linear array. A temporal flattening procedure is effective in speeding up convergence while preserving relative amplitudes. The traditional one-bit normalization and other types of temporal normalization that are applied to each individual station separately are problematic in recovering attenuation and should be avoided. We instead propose an ‘asynchronous’ temporal flattening procedure for real data that does not require all stations to have data at the same time. Furthermore, we present the detailed procedure for amplitude retrieval from ambient noise. Tests on real data suggest that attenuations extracted from our noise-based methods are comparable with those from earthquakes. Our study shows exciting promise for retrieving amplitude and attenuation information from ambient noise correlations and suggests practical considerations for applications to real data.
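Why per-station one-bit normalization destroys the amplitude information needed for attenuation can be sketched with a toy example (not the authors' pipeline; the noise model, delay, and amplitude ratio are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
source = rng.normal(size=n)          # common ambient-noise wavefield
sta_a = source                       # station A records the source directly
sta_b = 0.2 * np.roll(source, 50)    # station B: delayed and attenuated (x0.2)

def xcorr_peak(a, b):
    return np.max(np.abs(np.correlate(a, b, mode="full")))

# Relative amplitude of the A-B correlation versus the A autocorrelation.
raw_ratio = xcorr_peak(sta_a, sta_b) / xcorr_peak(sta_a, sta_a)
onebit_ratio = (xcorr_peak(np.sign(sta_a), np.sign(sta_b))
                / xcorr_peak(np.sign(sta_a), np.sign(sta_a)))
```

The raw correlation preserves the 0.2 amplitude ratio (so attenuation is recoverable), while the per-station one-bit version inflates it to nearly 1, discarding the amplitude information.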


Author(s):  
Cheng-Han (Lance) Tsai ◽  
Jen-Yuan (James) Chang

Abstract Artificial intelligence (AI) has been widely used in domains such as self-driving, automated optical inspection, and detection of object locations for robotic pick-and-place operations. Although the current results of using AI in these fields are good, the biggest bottleneck for AI is the need for a vast amount of data, with the corresponding answers labelled, for sufficient training. Evidently, these efforts still require significant manpower. If the quality of the labelling is unstable, the trained AI model becomes unstable and, as a consequence, so do the results. To resolve this issue, an auto-annotation system is proposed in this paper, with methods including (1) highly realistic model generation with real texture, (2) a domain randomization algorithm in the simulator to automatically generate abundant and diverse images, and (3) a visibility tracking algorithm to calculate the occlusion effect objects cause on each other for different picking-strategy labels. From our experiments, we show that 10,000 images can be generated per hour, each containing multiple objects and each object being labelled in different classes based on its visibility. Instance segmentation AI models can also be trained with these methods to verify the gap between synthetic data for training and real data for testing; the mean average precision (mAP) can reach 70%.
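The visibility idea behind method (3) can be sketched as follows (a hypothetical simplification, not the paper's algorithm; the mask shapes and front-to-back ordering are assumptions):

```python
import numpy as np

def visibility_ratios(full_masks, depth_order):
    """full_masks: name -> boolean mask of each object's unoccluded footprint.
    depth_order: object names sorted front (nearest camera) to back.
    Returns name -> fraction of that object's pixels left visible."""
    shape = next(iter(full_masks.values())).shape
    occupied = np.zeros(shape, dtype=bool)
    ratios = {}
    for name in depth_order:
        mask = full_masks[name]
        visible = mask & ~occupied        # pixels not hidden by nearer objects
        ratios[name] = visible.sum() / mask.sum()
        occupied |= mask
    return ratios

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # front object
b = np.zeros((8, 8), dtype=bool); b[4:8, 4:8] = True   # back object, partly hidden
ratios = visibility_ratios({"a": a, "b": b}, ["a", "b"])
```

Objects could then be binned into picking-strategy classes by thresholding these ratios (e.g. fully visible versus partially occluded).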


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be turned to read the text. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the orientation of the text before recognizing it directly. The article describes the development of a deep neural network for determining the text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
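The synthetic-data idea, pairing each upright text image with its 180-degree rotation, can be sketched as follows (a minimal illustration, not the article's pipeline; the image size is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def orientation_pair(img):
    """From one upright text image, make both training samples:
    label 0 = readable left to right, label 1 = rotated 180 degrees."""
    return [(img, 0), (np.rot90(img, 2).copy(), 1)]

upright = rng.random((32, 96))            # stand-in for a rendered text strip
samples = orientation_pair(upright)
# A classifier predicting label 1 tells us to rotate the image back:
restored = np.rot90(samples[1][0], 2)
```

Each synthetic cover crop thus yields both classes for free, and a predicted label of 1 maps directly to the corrective rotation before text recognition.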


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract Background Three-way data have started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed on real data without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric provides the possibility to combine both intrinsic and extrinsic metrics to compare solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
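The planting idea can be sketched in a few lines (a hypothetical simplification, not G-Tric itself; the constant-pattern tricluster, tensor sizes, and noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def plant_tricluster(shape=(50, 40, 10), size=(8, 6, 3), value=5.0, noise=0.1):
    """Return a noisy background tensor with one constant tricluster planted,
    plus the ground-truth index sets (the 'triclustering solution')."""
    data = rng.normal(0.0, 1.0, size=shape)           # background distribution
    rows = rng.choice(shape[0], size[0], replace=False)
    cols = rng.choice(shape[1], size[1], replace=False)
    ctxs = rng.choice(shape[2], size[2], replace=False)
    data[np.ix_(rows, cols, ctxs)] = value + rng.normal(0.0, noise, size=size)
    return data, (rows, cols, ctxs)

data, (rows, cols, ctxs) = plant_tricluster()
```

Any triclustering algorithm run on `data` can then be scored extrinsically against the returned index sets, which is exactly the kind of ground truth real datasets lack.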


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. With most layers shared by the two tasks, computation cost is reduced, enabling real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depths of the two targets and the keypoints are used in a unified optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
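One plausible reading of a tangential contact constraint (not the paper's exact formulation; the point/normal geometry and quadratic penalty are assumptions) is to penalize the fingertip-to-surface gap along the surface normal while leaving the tangential component free:

```python
import numpy as np

def tangential_contact(p, q, n):
    """p: fingertip point, q: nearest object-surface point, n: unit outward
    surface normal. Returns (penalized squared normal gap, free tangential
    slide distance): only the normal component is constrained."""
    d = p - q
    normal_gap = float(np.dot(d, n))       # signed distance along the normal
    tangential = d - normal_gap * n        # sliding component, left unpenalized
    return normal_gap ** 2, float(np.linalg.norm(tangential))

# Fingertip hovering 0.1 above the contact point: normal cost is incurred.
gap_cost, slide = tangential_contact(np.array([0.0, 0.0, 0.1]),
                                     np.array([0.0, 0.0, 0.0]),
                                     np.array([0.0, 0.0, 1.0]))
# Fingertip sliding 0.3 along the surface: no normal cost.
gap_cost2, slide2 = tangential_contact(np.array([0.3, 0.0, 0.0]),
                                       np.array([0.0, 0.0, 0.0]),
                                       np.array([0.0, 0.0, 1.0]))
```

Such a term keeps contacting fingertips on the surface without forbidding the sliding motions that occur during in-hand manipulation.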


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required to solve image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine their performance on real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two experimental setups, and emphasizes the significant performance improvements that the CDNTS layer brings to deep-learning-based networks. Experiment 1 corresponds to training commonly used deep-learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer and commonly used deep-learning-based networks with synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value for image classification test accuracy was measured as 88.9%. A total of 432 different training combinations were investigated in the experimental setups. The experiments were run with various DL networks using four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for networks in experiment 1 ranged from 55% to 74%, whereas those for experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer thus has a considerable effect on the image classification accuracy of deep-learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.


Author(s):  
Alma Andersson ◽  
Joakim Lundeberg

Abstract Motivation Collection of spatial signals in large numbers has become a routine task in multiple omics fields, but parsing these rich datasets still poses certain challenges. In whole or near-full transcriptome spatial techniques, spurious expression profiles are intermixed with those exhibiting an organized structure. To distinguish profiles with spatial patterns from the background noise, a metric that enables quantification of spatial structure is desirable. Current methods designed for similar purposes tend to be built around a framework of statistical hypothesis testing, hence we were compelled to explore a fundamentally different strategy. Results We propose an unexplored approach to analyzing spatial transcriptomics data, simulating diffusion of individual transcripts to extract genes with spatial patterns. The method performed as expected when presented with synthetic data. When applied to real data, it identified genes with distinct spatial profiles, involved in key biological processes or characteristic of certain cell types. Compared to existing methods, ours seemed to be less informed by the genes’ expression levels and showed better time performance when run with multiple cores. Availability and implementation Open-source Python package with a command line interface (CLI), freely available at https://github.com/almaan/sepal under an MIT licence. A mirror of the GitHub repository can be found at Zenodo, doi: 10.5281/zenodo.4573237. Supplementary information Supplementary data are available at Bioinformatics online.
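The diffusion intuition, that spatially structured expression takes longer to homogenize under simulated diffusion than spatially random expression with the same values, can be sketched as follows (a toy illustration, not the sepal implementation; the grid size, periodic boundaries, and thresholds are assumptions):

```python
import numpy as np

def steps_to_flat(field, dt=0.2, thresh=1e-3, max_iter=10000):
    """Run a discrete heat equation (periodic boundaries) until the field is
    nearly flat; the step count is a crude 'spatial structure' score."""
    f = field.astype(float).copy()
    for step in range(1, max_iter + 1):
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
        f += dt * lap
        if np.ptp(f) < thresh:
            return step
    return max_iter

rng = np.random.default_rng(3)
structured = np.zeros((20, 20))
structured[5:15, 5:15] = 1.0                                   # coherent patch
shuffled = rng.permutation(structured.ravel()).reshape(20, 20) # same values, no pattern

score_structured = steps_to_flat(structured)
score_shuffled = steps_to_flat(shuffled)
```

A gene whose counts diffuse away slowly (high score) carries spatial structure; a shuffled version of the same counts flattens quickly, so the score separates pattern from background noise without hypothesis testing.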


2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contain outliers with various and unknown characteristics. Fully synthetic data usually consist of outliers and regular instances with clear characteristics and thus, in principle, allow for a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty of arriving at a good coverage of different domains with synthetic data. In this work, we propose a generic process for generating datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We describe this generic process and then three instantiations of it that generate outliers with specific characteristics, such as local outliers. To validate our process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of data reconstructed in this way. Next to showcasing the workflow, this confirms the usefulness of our proposed process. In particular, our process yields regular instances close to the ones from real data. Summing up, we propose and validate a new and practical process for benchmarking unsupervised outlier detection.
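One instantiation in the spirit of "local outliers" could be sketched as follows (a hypothetical example, not the paper's exact procedure; the displacement rule and all parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def make_local_outliers(X, k=10, n_out=5, factor=3.0):
    """Displace copies of real points away from the mean of their k nearest
    neighbours, yielding points that are anomalous only locally."""
    picked = rng.choice(len(X), n_out, replace=False)
    outliers = []
    for i in picked:
        dists = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(dists)[1:k + 1]          # k nearest neighbours (not self)
        center = X[nn].mean(axis=0)
        outliers.append(center + factor * (X[i] - center))
    return np.array(outliers)

X = rng.normal(size=(200, 2))                    # stand-in 'regular instances'
outliers = make_local_outliers(X)
```

Because each outlier is defined relative to its own neighbourhood rather than the global distribution, a benchmark built this way specifically probes local outlier detectors.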

