Cycle Generative Adversarial Network: Towards A Low-Cost Vegetation Index Estimation

Author(s):  
Patricia L. Suarez ◽  
Angel D. Sappa ◽  
Boris X. Vintimilla


Author(s):
Xinyi Li ◽  
Liqiong Chang ◽  
Fangfang Song ◽  
Ju Wang ◽  
Xiaojiang Chen ◽  
...  

This paper focuses on a fundamental question in Wi-Fi-based gesture recognition: "Can we use the knowledge learned from some users to perform gesture recognition for others?" This problem is also known as cross-target recognition. It arises in many practical deployments of Wi-Fi-based gesture recognition, where it is prohibitively expensive to collect training data from every single user. We present CrossGR, a low-cost cross-target gesture recognition system. As a departure from existing approaches, CrossGR does not require prior knowledge of the target user (such as who is currently performing a gesture). Instead, CrossGR employs a deep neural network to extract user-agnostic but gesture-related Wi-Fi signal characteristics to perform gesture recognition. To provide sufficient training data for an effective deep learning model, CrossGR employs a generative adversarial network to automatically generate abundant synthetic training data from a small set of real-world examples collected from a small number of users. This strategy allows CrossGR to minimize user involvement and the associated cost of collecting training examples for building an accurate gesture recognition system. We evaluate CrossGR by applying it to gesture recognition across 10 users and 15 gestures. Experimental results show that CrossGR achieves an accuracy of over 82.6% (up to 99.75%). We demonstrate that CrossGR delivers comparable recognition accuracy while using an order of magnitude fewer training samples collected from end-users than state-of-the-art recognition systems.
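The augmentation strategy described above follows the standard conditional-GAN recipe: a generator learns to produce gesture-conditioned signal features, while a discriminator learns to tell them apart from real captures. Below is a minimal PyTorch sketch of that recipe; the feature dimension, network sizes, and training loop are illustrative assumptions, not the CrossGR implementation.

```python
# Minimal conditional-GAN sketch for augmenting gesture training data.
# FEAT_DIM and the layer sizes are assumptions, not CrossGR's architecture.
import torch
import torch.nn as nn

FEAT_DIM = 128      # assumed length of a Wi-Fi signal feature vector
N_GESTURES = 15     # number of gestures, as in the paper's evaluation
LATENT = 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_GESTURES, LATENT)
        self.net = nn.Sequential(
            nn.Linear(LATENT * 2, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM), nn.Tanh())
    def forward(self, z, labels):
        # condition the noise vector on the gesture label
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_GESTURES, FEAT_DIM)
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM * 2, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, x, labels):
        return self.net(torch.cat([x, self.embed(labels)], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_x, real_y):
    """One adversarial step: D learns real vs. synthetic, G learns to fool D."""
    b = real_x.size(0)
    fake_x = G(torch.randn(b, LATENT), real_y)
    # discriminator update (detach fake_x so G is not updated here)
    opt_d.zero_grad()
    loss_d = bce(D(real_x, real_y), torch.ones(b, 1)) + \
             bce(D(fake_x.detach(), real_y), torch.zeros(b, 1))
    loss_d.backward(); opt_d.step()
    # generator update
    opt_g.zero_grad()
    loss_g = bce(D(fake_x, real_y), torch.ones(b, 1))
    loss_g.backward(); opt_g.step()
```

Once trained, the generator can be sampled with arbitrary gesture labels to enlarge the training set for the recognition network at essentially zero collection cost.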


Author(s):  
Anatoliy Parfenov ◽  
Peter Sychov

CAPTCHA recognition is certainly not a new research topic. Over the past decade, researchers have demonstrated various ways to automatically recognize text-based CAPTCHAs. However, such methods require substantial expert involvement and a laborious process of collecting and labeling data. This article presents a general, low-cost, but effective deep-learning approach to automatically solving text-based CAPTCHAs. The approach is built on a generative adversarial network architecture, which significantly reduces the number of real CAPTCHAs required.
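In a pipeline of this kind, the generator's output is labeled for free (the text it was rendered from is known), so a recognizer can be pre-trained almost entirely on synthetic CAPTCHAs and then fine-tuned on a small real set. A hedged PyTorch sketch of that two-stage loop follows; `generator.sample` and `recognizer` are hypothetical stand-ins, not the authors' code.

```python
# Sketch of the synthetic-pretrain / real-fine-tune pipeline implied by the
# abstract. `generator` and `recognizer` are hypothetical objects.
import torch

def pretrain_on_synthetic(recognizer, generator, steps, batch=64):
    opt = torch.optim.Adam(recognizer.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        # the generator yields CAPTCHA images together with the labels it
        # rendered them from, so labeling costs nothing
        images, labels = generator.sample(batch)
        opt.zero_grad()
        loss_fn(recognizer(images), labels).backward()
        opt.step()

def finetune_on_real(recognizer, real_loader, epochs=3):
    opt = torch.optim.Adam(recognizer.parameters(), lr=1e-4)  # lower LR
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in real_loader:  # only a small labeled set needed
            opt.zero_grad()
            loss_fn(recognizer(images), labels).backward()
            opt.step()
```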


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2250
Author(s):  
Leyuan Liu ◽  
Rubin Jiang ◽  
Jiao Huo ◽  
Jingying Chen

Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject in the testing image. Second, six compact and lightweight difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and measures the difference between the pair of deep features. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized “Self”—an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely-used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance, with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB ×6), which enables the SD-CNN to run on low-cost hardware.
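The classification rule the abstract describes, comparing the test image's deep features against each synthesized "self", can be sketched roughly as follows. This is an illustrative reading of the method; `backbone` and `diffnets` are assumed stand-ins for the paper's feature extractor and six DiffNets, not the released model.

```python
# Sketch of self-difference classification: the test image is matched
# against six GAN-synthesized "selves", one per candidate expression.
import torch

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify(test_img, synth_imgs, backbone, diffnets):
    """synth_imgs[i]: the same subject rendered with expression i (from the
    cGAN). Each DiffNet scores one expression by examining the deep-feature
    difference between the test image and its synthesized counterpart."""
    f_test = backbone(test_img.unsqueeze(0))
    scores = []
    for i, net in enumerate(diffnets):
        f_synth = backbone(synth_imgs[i].unsqueeze(0))
        # same expression => small, characteristic difference vector
        scores.append(net(f_test - f_synth))
    return EXPRESSIONS[int(torch.argmax(torch.cat(scores)))]
```

Because each DiffNet only has to judge a difference vector rather than a raw face, the online models can stay small, which is consistent with the 1.59 MB ×6 footprint reported above.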


2021 ◽  
Author(s):  
Soheil Soltani ◽  
Ashkan Ojaghi ◽  
Hui Qiao ◽  
Nischita Kaza ◽  
Xinyang Li ◽  
...  

Identifying prostate cancer patients who are harboring aggressive forms of prostate cancer remains a significant clinical challenge. Here we develop an approach based on multispectral deep-ultraviolet (UV) microscopy that provides novel quantitative insight into the aggressiveness and grade of this disease, thus providing a new tool to help address this important challenge. We find that UV spectral signatures from endogenous molecules give rise to a phenotypical continuum that provides unique structural insight (i.e., molecular maps or “optical stains”) of thin tissue sections with subcellular (nanoscale) resolution. We show that this phenotypical continuum can also be applied as a surrogate biomarker of prostate cancer malignancy, where patients with the most aggressive tumors show a ubiquitous glandular phenotypical shift. In addition to providing several novel “optical stains” with contrast for disease, we also adapt a two-part Cycle-consistent Generative Adversarial Network to translate the label-free deep-UV images into virtual hematoxylin and eosin (H&E) stained images, thus providing multiple stains (including the gold-standard H&E) from the same unlabeled specimen. The virtual H&E images and the H&E-stained tissue sections are evaluated by a panel of pathologists, who find the two modalities to be in excellent agreement. This work has significant implications for improving our ability to objectively quantify prostate cancer grade and aggressiveness, thus improving the management and clinical outcomes of prostate cancer patients. The same approach can also be applied broadly to other tumor types to achieve low-cost, stain-free, quantitative histopathological analysis.
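The virtual-staining component rests on the standard CycleGAN objective: two generators translate in opposite directions, and a cycle-consistency term forces a round trip to reproduce the input, which is what makes training possible without paired UV/H&E images. A minimal sketch of those losses is given below; the generator/discriminator names and the weight lambda_cyc are generic CycleGAN conventions, not the paper's exact configuration.

```python
# Generic CycleGAN losses for unpaired UV -> H&E translation (a sketch,
# not the paper's implementation).
import torch
import torch.nn.functional as F

def cycle_losses(uv_batch, he_batch, G_uv2he, G_he2uv, D_he, D_uv,
                 lambda_cyc=10.0):
    fake_he = G_uv2he(uv_batch)   # label-free UV image -> virtual H&E stain
    fake_uv = G_he2uv(he_batch)   # H&E image -> virtual UV
    # adversarial terms (least-squares GAN): generators try to make the
    # discriminators output "real" (1) on translated images
    pred_he, pred_uv = D_he(fake_he), D_uv(fake_uv)
    adv = F.mse_loss(pred_he, torch.ones_like(pred_he)) + \
          F.mse_loss(pred_uv, torch.ones_like(pred_uv))
    # cycle consistency: translating there and back must recover the input,
    # which substitutes for pixel-aligned paired training data
    cyc = F.l1_loss(G_he2uv(fake_he), uv_batch) + \
          F.l1_loss(G_uv2he(fake_uv), he_batch)
    return adv + lambda_cyc * cyc
```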


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6311
Author(s):  
Eoin Brophy ◽  
Maarten De Vos ◽  
Geraldine Boylan ◽  
Tomás Ward

Ischemic heart disease is the leading cause of mortality globally each year. This puts a massive strain not only on the lives of those affected but also on public healthcare systems. To understand the dynamics of the healthy and unhealthy heart, doctors commonly use an electrocardiogram (ECG) and blood pressure (BP) readings. These methods can be quite invasive, particularly when continuous arterial blood pressure (ABP) readings are taken, and are also costly. Using machine learning methods, we develop a framework capable of inferring ABP from a single optical photoplethysmogram (PPG) sensor alone. We train our framework across distributed models and data sources to mimic a large-scale distributed collaborative learning experiment that could be implemented across low-cost wearables. Our time-series-to-time-series generative adversarial network (T2TGAN) is capable of high-quality continuous ABP generation from a PPG signal, with a mean error of 2.95 mmHg and a standard deviation of 19.33 mmHg when estimating mean arterial pressure on a previously unseen, noisy, independent dataset. To our knowledge, this framework is the first example of a GAN capable of continuous ABP generation from an input PPG signal that also uses a federated learning methodology.
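The federated side of such a framework typically reduces to a weight-averaging step: each wearable trains locally on its own PPG/ABP data, and a server aggregates the models without ever seeing raw signals. A minimal sketch of plain federated averaging (FedAvg) is shown below as one plausible reading of "distributed models and data sources"; it is not the T2TGAN training code.

```python
# FedAvg sketch: average client model weights, weighted by local data size.
import copy
import torch

def fedavg(global_model, client_models, client_sizes):
    """Aggregate locally trained copies of a model into the global model.
    Raw patient signals never leave the clients; only weights are shared."""
    total = sum(client_sizes)
    avg_state = copy.deepcopy(global_model.state_dict())
    for key in avg_state:
        # weighted average of each parameter tensor across clients
        avg_state[key] = sum(
            m.state_dict()[key] * (n / total)
            for m, n in zip(client_models, client_sizes))
    global_model.load_state_dict(avg_state)
    return global_model
```

In each communication round, the server would broadcast `global_model`, let every client run a few local epochs, and then call `fedavg` on the returned copies.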


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7817
Author(s):  
Chi-En Huang ◽  
Yung-Hui Li ◽  
Muhammad Saqlain Aslam ◽  
Ching-Chun Chang

There exist many types of intelligent security sensors in the environment of the Internet of Things (IoT) and cloud computing. Among them, biometric sensors are one of the most important types. Biometric sensors capture the physiological or behavioral features of a person, which can be further processed with cloud computing to verify or identify the user. However, a low-resolution (LR) biometric image loses feature details and substantially reduces the recognition rate; this lack of resolution negatively affects the performance of image-based biometric technology. From a practical perspective, most IoT devices suffer from hardware constraints, and low-cost equipment may not meet the requirements for image resolution, because high-resolution (HR) images demand additional storage and high transmission bandwidth. Therefore, how to achieve high accuracy in a biometric system without expensive, high-end image sensors is an interesting and valuable problem in the field of intelligent security sensors. In this paper, we propose DDA-SRGAN, a generative adversarial network (GAN)-based super-resolution (SR) framework using a dual-dimension attention mechanism. The proposed model can be trained to discover the regions of interest (ROI) in the LR images automatically, without any prior knowledge. The experiments were performed on the CASIA-Thousand-v4 and CelebA datasets. The experimental results show that the proposed method learns the details of features in crucial regions and achieves better performance in most cases.
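A dual-dimension attention block of the kind the abstract names usually combines channel attention (re-weighting feature maps) with spatial attention (re-weighting locations, i.e., the ROI). The PyTorch sketch below illustrates one common construction of such a block; the layer sizes and pooling choices are assumptions, not the published DDA-SRGAN architecture.

```python
# Generic channel + spatial ("dual-dimension") attention block (a sketch).
import torch
import torch.nn as nn

class DualDimensionAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # channel attention: squeeze spatial dims, re-weight feature maps
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        # spatial attention: squeeze channels, re-weight locations (the ROI)
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                       # (B, C, H, W)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)               # emphasize key regions
```

Inserted into the generator of an SRGAN-style model, such a block lets the network concentrate reconstruction capacity on discriminative regions (e.g., the iris or facial landmarks) without being told where they are.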


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Qiyue Huang

In view of the growing depletion of traditional fossil fuels and their adverse impact on the natural environment, wind energy has gained increasing popularity across the globe. Characterized by wide distribution, low cost, and mature technology, it has achieved fast-growing installed capacity in recent years. However, wind power is volatile and random in nature, and the power ramping events caused by extreme weather threaten the safe, stable, and economic operation of the power grid. To address the problems of insufficient sample data and low prediction accuracy in existing ramping prediction methods, a new wind power prediction method that accounts for ramping events, based on a Generative Adversarial Network (GAN), is proposed. First, ramping events are identified and separated from the historical wind power database, and a feature set of historical ramping events is extracted according to waveform and meteorological factors. Taking this feature set as the input of the GAN, simulated ramping data are continuously produced through the adversarial training of the generator and discriminator, thus enriching the ramping database. The expanded ramping database is then used to predict ramping power with an LSTM model. An experiment based on a wind power dataset from an area of northwest China verifies the effectiveness and superiority of this method compared with traditional ones.
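The pipeline's first step, identifying ramping events in historical power data, is commonly done with a simple threshold rule: flag any window in which output swings by more than a set fraction of installed capacity. The sketch below illustrates such a rule; the window length and threshold are illustrative assumptions, as the abstract does not state the paper's exact criterion.

```python
# Illustrative ramp-event detector over a historical wind power series.
import numpy as np

def find_ramp_events(power, window=4, threshold=0.3, capacity=1.0):
    """Flag intervals where power changes by more than `threshold` of
    installed `capacity` within `window` time steps (e.g., 4 x 15 min)."""
    power = np.asarray(power, dtype=float)
    delta = power[window:] - power[:-window]      # change over each window
    events = np.flatnonzero(np.abs(delta) > threshold * capacity)
    return [(i, i + window, "up" if delta[i] > 0 else "down")
            for i in events]
```

The flagged segments would then supply the waveform and meteorological features fed to the GAN, and the GAN-expanded event set would train the LSTM predictor described above.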

