Augmenting Seismic Data Using Generative Adversarial Network for Low-cost MEMS Sensors

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Aming Wu ◽  
Juyong Shin ◽  
Jae-Kwang Ahn ◽  
Young-Woo Kwon


Author(s):  
Xinyi Li ◽  
Liqiong Chang ◽  
Fangfang Song ◽  
Ju Wang ◽  
Xiaojiang Chen ◽  
...  

This paper focuses on a fundamental question in Wi-Fi-based gesture recognition: "Can we use the knowledge learned from some users to perform gesture recognition for others?" This problem is also known as cross-target recognition. It arises in many practical deployments of Wi-Fi-based gesture recognition where it is prohibitively expensive to collect training data from every single user. We present CrossGR, a low-cost cross-target gesture recognition system. As a departure from existing approaches, CrossGR does not require prior knowledge (such as who is currently performing a gesture) of the target user. Instead, CrossGR employs a deep neural network to extract user-agnostic but gesture-related Wi-Fi signal characteristics to perform gesture recognition. To provide sufficient training data for building an effective deep learning model, CrossGR employs a generative adversarial network to automatically generate many synthetic training samples from a small set of real-world examples collected from a small number of users. This strategy allows CrossGR to minimize user involvement, and the associated cost, in collecting training examples for building an accurate gesture recognition system. We evaluate CrossGR by applying it to recognize 15 gestures across 10 users. Experimental results show that CrossGR achieves an accuracy of over 82.6% (up to 99.75%). We demonstrate that CrossGR delivers recognition accuracy comparable to state-of-the-art systems while using an order of magnitude fewer training samples collected from end-users.
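As a rough illustration of the augmentation step described above, the sketch below shows a label-conditioned GAN generating synthetic Wi-Fi feature vectors. This is not CrossGR's published architecture; the feature dimension, latent size, and layer widths are assumptions made for the example.

```python
# A minimal sketch (not CrossGR's actual architecture) of GAN-based
# augmentation of Wi-Fi signal features, conditioned on gesture labels.
import torch
import torch.nn as nn

FEAT_DIM = 256   # assumed length of a flattened CSI feature vector
N_GESTURES = 15  # gesture classes, matching the paper's evaluation
Z_DIM = 64       # latent noise dimension (assumption)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_GESTURES, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM), nn.Tanh(),
        )
    def forward(self, z, y_onehot):
        # Concatenate noise with the gesture label so each synthetic
        # sample is generated for a specific gesture class.
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + N_GESTURES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )
    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

# Usage: sample 32 synthetic training features for gesture class 3.
g = Generator()
y = torch.zeros(32, N_GESTURES); y[:, 3] = 1.0
fake_feats = g(torch.randn(32, Z_DIM), y)  # shape (32, FEAT_DIM)
```

Conditioning both networks on the gesture label lets one generator cover all 15 classes instead of training a separate GAN per gesture.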


Author(s):  
Anatoliy Parfenov ◽  
Peter Sychov

CAPTCHA recognition is certainly not a new research topic. Over the past decade, researchers have demonstrated various ways to automatically recognize text-based CAPTCHAs. However, such methods require significant expert involvement to set up and entail a laborious process of collecting and labeling data. This article presents a general, low-cost, yet effective approach to automatically solving text-based CAPTCHAs with deep learning. The approach is based on a generative adversarial network architecture, which significantly reduces the number of real CAPTCHAs required.
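The pipeline the article implies can be sketched as a two-phase recipe: pre-train a solver on abundant GAN-synthesized CAPTCHAs, then fine-tune it on a handful of real labeled ones. The network below is a generic illustrative solver; the alphabet size, CAPTCHA length, and layer layout are assumptions, not the authors' design.

```python
# Hedged sketch: pre-train on synthetic CAPTCHAs, fine-tune on few real ones.
import torch
import torch.nn as nn

N_CHARS = 36      # assumed alphabet: a-z plus 0-9
CAPTCHA_LEN = 4   # assumed number of characters per image

class Solver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # One classification head covering all character positions at once.
        self.head = nn.Linear(64 * 4 * 4, CAPTCHA_LEN * N_CHARS)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h).view(-1, CAPTCHA_LEN, N_CHARS)

solver = Solver()
# Phase 1: train on abundant synthetic CAPTCHAs from the generator.
# Phase 2: fine-tune only the head on the few real examples, keeping the
# convolutional features frozen to avoid overfitting the small real set.
for p in solver.features.parameters():
    p.requires_grad = False
```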


2021 ◽  
Author(s):  
Brydon Lowney ◽  
Lewis Whiting ◽  
Ivan Lokmer ◽  
Gareth O'Brien ◽  
Christopher Bean

Diffraction imaging is the technique of separating diffraction energy from the source wavefield and processing it independently. As diffractions are formed from objects and discontinuities, or diffractors, which are small in comparison to the wavelength, if the diffraction energy is imaged, so too are the diffractors. These diffractors take many forms, such as faults, fractures, and pinch-out points, and are therefore geologically significant. Diffraction imaging has been applied here to the Porcupine Basin, a hyperextended basin located 200 km to the southwest of Ireland with a rich geological history. The basin has seen interest both academically and industrially as a study on hyperextension and a potential source of hydrocarbons. The data is characterised by two distinct, basin-wide, fractured carbonates nestled between faulted sandstones and mudstones. Additionally, there are both mass-transport deposits and fans present throughout the data, which pose a further challenge for diffraction imaging. Here, we propose the use of diffraction imaging to better image structures both within the carbonate, such as fractures, and below it.

To perform diffraction imaging, we have utilised a trained Generative Adversarial Network (GAN) which automatically locates and separates the diffraction energy on pre-migrated seismic data. The data has then been migrated to create a diffraction image. This image is used in conjunction with the conventional image as an attribute, akin to coherency or semblance, to identify diffractors which may be geologically significant. Using this technique, we highlight the fracture network of a large Cretaceous chalk body present in the Porcupine, the internal structure of mass-transport deposits, potential fan edges, and additional faults within the data which may affect fluid-flow pathways.
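Applying such a trained generator to field data is essentially patch-wise image-to-image inference. The sketch below shows one plausible way to tile a 2D pre-migrated section through a generator; the patch size, the non-overlapping tiling, and the assumption that the generator preserves patch shape are illustrative choices, not details from the authors' workflow.

```python
# Hedged sketch of patch-wise diffraction separation with a trained
# image-to-image generator; weights and tiling scheme are assumptions.
import numpy as np
import torch

PATCH = 128  # assumed patch size in (time, trace) samples

def separate_diffractions(section: np.ndarray,
                          generator: torch.nn.Module) -> np.ndarray:
    """Run a trained generator over non-overlapping patches of a 2D section."""
    out = np.zeros_like(section)  # uncovered borders stay zero in this sketch
    nt, nx = section.shape
    with torch.no_grad():
        for i in range(0, nt - PATCH + 1, PATCH):
            for j in range(0, nx - PATCH + 1, PATCH):
                patch = section[i:i+PATCH, j:j+PATCH]
                x = torch.from_numpy(patch).float()[None, None]  # (1,1,H,W)
                # Assumes the generator maps a patch to a same-sized patch
                # containing only the diffraction energy.
                out[i:i+PATCH, j:j+PATCH] = generator(x)[0, 0].numpy()
    return out  # diffraction-only wavefield, ready for migration
```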


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2250
Author(s):  
Leyuan Liu ◽  
Rubin Jiang ◽  
Jiao Huo ◽  
Jingying Chen

Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject as in the testing image. Second, six compact and lightweight difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and compares the difference between the deep feature pair. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized “self”: an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance, with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB × 6), which enables the SD-CNN to run on low-cost hardware.
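The self-difference idea can be made concrete with a short sketch: score the test image against each of the six synthesized expressions by the difference of their deep features, and predict the expression whose synthesized “self” matches best. The backbone and scoring head below are placeholders; the paper's actual DiffNet architecture is not reproduced here.

```python
# Hedged sketch of the self-difference comparison, not the paper's DiffNets.
import torch
import torch.nn as nn

class DiffNet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(  # placeholder feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, feat_dim),
        )
        self.score = nn.Linear(feat_dim, 1)  # "same expression" logit

    def forward(self, test_img, synth_img):
        # Classify on the *difference* of the two deep features, so the
        # subject-identity component largely cancels out.
        diff = self.backbone(test_img) - self.backbone(synth_img)
        return self.score(diff)

# Usage: predict the expression whose synthesized "self" scores highest.
diffnets = [DiffNet() for _ in range(6)]        # one per typical expression
test = torch.randn(1, 3, 64, 64)                # test face (assumed size)
synths = [torch.randn(1, 3, 64, 64) for _ in range(6)]  # cGAN outputs
logits = torch.stack([net(test, s) for net, s in zip(diffnets, synths)])
pred = logits.argmax().item()                   # index of predicted expression
```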


2021 ◽  
Author(s):  
Soheil Soltani ◽  
Ashkan Ojaghi ◽  
Hui Qiao ◽  
Nischita Kaza ◽  
Xinyang Li ◽  
...  

Identifying prostate cancer patients that are harboring aggressive forms of prostate cancer remains a significant clinical challenge. Here we develop an approach based on multispectral deep-ultraviolet (UV) microscopy that provides novel quantitative insight into the aggressiveness and grade of this disease, thus providing a new tool to help address this important challenge. We find that UV spectral signatures from endogenous molecules give rise to a phenotypical continuum that provides unique structural insight (i.e., molecular maps or “optical stains”) of thin tissue sections with subcellular (nanoscale) resolution. We show that this phenotypical continuum can also be applied as a surrogate biomarker of prostate cancer malignancy, where patients with the most aggressive tumors show a ubiquitous glandular phenotypical shift. In addition to providing several novel “optical stains” with contrast for disease, we also adapt a two-part Cycle-consistent Generative Adversarial Network to translate the label-free deep-UV images into virtual hematoxylin and eosin (H&E) stained images, thus providing multiple stains (including the gold-standard H&E) from the same unlabeled specimen. Agreement between the virtual H&E images and the H&E-stained tissue sections is evaluated by a panel of pathologists who find that the two modalities are in excellent agreement. This work has significant implications towards improving our ability to objectively quantify prostate cancer grade and aggressiveness, thus improving the management and clinical outcomes of prostate cancer patients. This same approach can also be applied broadly in other tumor types to achieve low-cost, stain-free, quantitative histopathological analysis.
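At the heart of this kind of unpaired translation is the cycle-consistency loss: a UV image translated to virtual H&E and back should reconstruct the original. The sketch below shows that loss in generic form; the generator names and tensor shapes are assumptions, and the paper's two-part network details are not reproduced.

```python
# Generic cycle-consistency loss for UV <-> H&E translation (a sketch).
import torch
import torch.nn as nn

def cycle_loss(uv, he, G_uv2he, G_he2uv, l1=nn.L1Loss()):
    # Translate each domain to the other and back; penalize the
    # reconstruction error so tissue content survives the translation.
    rec_uv = G_he2uv(G_uv2he(uv))
    rec_he = G_uv2he(G_he2uv(he))
    return l1(rec_uv, uv) + l1(rec_he, he)

# Toy usage with identity "generators" just to exercise the shapes;
# real CycleGAN generators are convolutional encoder-decoders.
G_uv2he, G_he2uv = nn.Identity(), nn.Identity()
uv = torch.randn(1, 3, 128, 128)  # multispectral UV stack (assumed 3 bands)
he = torch.randn(1, 3, 128, 128)  # RGB H&E image (assumed size)
loss = cycle_loss(uv, he, G_uv2he, G_he2uv)
```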


Geophysics ◽  
2021 ◽  
pp. 1-154
Author(s):  
Qing Wei ◽  
Xiangyang Li ◽  
Mingpeng Song

During acquisition, irregularly missing seismic data are commonly observed due to economic and environmental constraints. To improve accuracy in subsequent processing, the missing data should be interpolated. A conditional generative adversarial network (cGAN), consisting of two networks, a generator and a discriminator, is a deep learning model that can be used to interpolate the missing data. However, because a cGAN is typically dataset-oriented, the trained network is unable to interpolate a dataset from an area different from that of the training dataset. We design a cGAN based on Pix2Pix GAN to interpolate irregularly missing seismic data. A synthetic dataset synthesized from two models is used to train the network. Further, we add a Gaussian-noise layer in the discriminator to mitigate the vanishing-gradient problem, allowing us to train a more powerful generator. Two synthetic datasets synthesized from two new geological models and two field datasets are used to test the trained cGAN. The test results and the calculated recovered signal-to-noise ratios indicate that although the cGAN is trained using synthetic data, the network can reconstruct irregularly missing field seismic data with high accuracy thanks to the Gaussian-noise layer. We test the performance of cGANs trained with different patch sizes in the discriminator to determine the best structure, and we train the networks using different training datasets for different missing rates to identify the best training dataset. Compared with conventional methods, the cGAN-based interpolation method does not need different parameter selections for different datasets to obtain the best interpolated data. Furthermore, it is efficient: the cost lies in training, and after training, the processing time is negligible.
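The Gaussian-noise layer mentioned above is a simple but effective regularizer: adding noise to the discriminator's input keeps it from becoming too confident, so useful gradients keep reaching the generator. Below is a minimal sketch of such a layer inside a generic PatchGAN-style conditional discriminator; the channel counts, noise level, and layer layout are assumptions rather than the authors' exact network.

```python
# Hedged sketch: Gaussian-noise input layer in a PatchGAN-style
# conditional discriminator for seismic interpolation.
import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma
    def forward(self, x):
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x  # no noise at inference time

class PatchDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            GaussianNoise(0.1),  # the stabilizing noise layer
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )
    def forward(self, masked_in, candidate):
        # Condition on the decimated input, as in Pix2Pix: the pair
        # (masked section, candidate reconstruction) is judged jointly.
        return self.net(torch.cat([masked_in, candidate], dim=1))

d = PatchDiscriminator()
masked = torch.randn(1, 1, 128, 128)  # section with missing traces zeroed
full = torch.randn(1, 1, 128, 128)    # generator output or ground truth
print(d(masked, full).shape)          # (1, 1, 31, 31) patch logits
```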

