Semi-Supervised Methods to Identify Individual Crowns of Lowland Tropical Canopy Species Using Imaging Spectroscopy and LiDAR
2012, Vol. 4 (8), pp. 2457-2476
Author(s): Jean-Baptiste Féret, Gregory P. Asner
2006
Author(s): David D. Kohler, W. P. Bissett

2021, Vol. 14 (2), pp. 201-214
Author(s): Danilo Croce, Giuseppe Castellucci, Roberto Basili

In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP); this is mainly due to their ability to reach high performance by relying on very simple input representations, i.e., raw tokens. One of the drawbacks of deep architectures is the large amount of annotated data required for effective training. In Machine Learning this problem is usually mitigated by semi-supervised methods or, more recently in the context of deep architectures, by Transfer Learning. One recent and promising method for enabling semi-supervised learning in deep architectures is the Semi-Supervised Generative Adversarial Network (SS-GAN), formalized in the context of Computer Vision. In this paper, we adopt the SS-GAN framework to enable semi-supervised learning in the context of NLP. We demonstrate how an SS-GAN can boost the performance of simple architectures when operating on expressive, low-dimensional embeddings; these are derived by combining unsupervised approximations of linguistic Reproducing Kernel Hilbert Spaces with the so-called Universal Sentence Encoders. We experimentally evaluate the proposed approach on a semantic classification task, i.e., Question Classification, considering different sizes of training material and different numbers of target classes. By applying this adversarial scheme to a simple Multi-Layer Perceptron, a classifier trained on a subset derived from 1% of the original training material achieves 92% accuracy. Moreover, on a complex classification scheme, e.g., one involving 50 classes, the proposed method outperforms state-of-the-art alternatives such as BERT.
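Below is a minimal sketch of the K+1-class discriminator idea behind SS-GANs, assuming fixed, low-dimensional sentence embeddings as input; the layer sizes, class count, and names (Generator, Discriminator, discriminator_loss) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, HIDDEN, NUM_CLASSES = 512, 64, 6   # assumed sizes; NUM_CLASSES real labels plus one extra "fake" class

class Generator(nn.Module):
    """Maps random noise to a fake sentence embedding."""
    def __init__(self, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, HIDDEN), nn.LeakyReLU(0.2),
                                 nn.Linear(HIDDEN, EMB_DIM))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Simple MLP over embeddings with K real classes plus one 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, HIDDEN), nn.LeakyReLU(0.2),
                                 nn.Linear(HIDDEN, NUM_CLASSES + 1))  # index NUM_CLASSES = fake
    def forward(self, x):
        return self.net(x)

def discriminator_loss(D, x_lab, y_lab, x_unlab, x_fake):
    """Supervised loss on labeled embeddings plus unsupervised real/fake losses
    on unlabeled and generated embeddings (Salimans-style SS-GAN objective)."""
    sup = F.cross_entropy(D(x_lab)[:, :NUM_CLASSES], y_lab)       # labeled data: predict the true class
    p_fake_unlab = F.softmax(D(x_unlab), dim=1)[:, NUM_CLASSES]
    unsup_real = -torch.log(1.0 - p_fake_unlab + 1e-8).mean()     # unlabeled data: should look "real"
    fake_tgt = torch.full((x_fake.size(0),), NUM_CLASSES,
                          dtype=torch.long, device=x_fake.device)
    unsup_fake = F.cross_entropy(D(x_fake), fake_tgt)             # generated data: should look "fake"
    return sup + unsup_real + unsup_fake

As in a standard GAN, the generator is trained to make the discriminator assign its outputs to the real classes; the discriminator doubles as the final classifier, which is how the adversarial scheme lets unlabeled examples contribute to training.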


2021, Vol. 13 (2), pp. 292
Author(s): Megan Seeley, Gregory P. Asner

As humans continue to alter Earth systems, conservationists look to remote sensing to monitor, inventory, and understand ecosystems and ecosystem processes at large spatial scales. Multispectral remote sensing data are commonly integrated into conservation decision-making frameworks, yet imaging spectroscopy, or hyperspectral remote sensing, remains underutilized in conservation. The high spectral resolution of imaging spectrometers captures the chemistry of Earth surfaces, whereas multispectral satellites represent such surfaces only indirectly, through band ratios. Here, we present case studies in which imaging spectroscopy was used to inform and improve conservation decision-making, and we discuss potential future applications. The case studies span a broad array of conservation areas, including forest, dryland, and marine ecosystems, as well as urban applications and methane monitoring. Imaging spectroscopy technology is developing rapidly, especially with regard to satellite-based spectrometers. Improving on and expanding existing conservation applications of imaging spectroscopy, developing imaging spectroscopy data products for use by other researchers and decision-makers, and pioneering novel uses of the technology will greatly expand the toolset available to conservation decision-makers.
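As a concrete illustration of the band-ratio contrast mentioned above, the sketch below computes NDVI from two broad bands and contrasts it with the per-pixel spectrum an imaging spectrometer records; the reflectance values and band count are purely illustrative assumptions, not tied to any specific sensor.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: a band ratio built from two broad bands."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)

print(ndvi(0.45, 0.08))          # one index value per pixel, roughly 0.70 for these example reflectances
spectrum = np.random.rand(224)   # placeholder: an imaging spectrometer instead records hundreds of narrow bands per pixel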


IEEE Access, 2021, Vol. 9, pp. 35834-35845
Author(s): Limin Xia, Jiahui Zhu, Zhimin Yu

Technologies, 2020, Vol. 9 (1), pp. 2
Author(s): Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon

Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It adopts self-defined pseudo-labels as supervision and uses the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component of self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims to embed augmented versions of the same sample close to each other while pushing apart embeddings of different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the different architectures that have been proposed so far. Next, we present a performance comparison of different methods on multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of current methods and the need for further techniques and future directions to make meaningful progress.
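A minimal sketch of the contrastive objective described above, in the NT-Xent / InfoNCE style used by many of the surveyed methods; the temperature, batch size, and embedding dimension are illustrative assumptions rather than values from any particular paper.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: [N, D] embeddings of two augmented views of the same N samples.
    Each pair (z1[i], z2[i]) is a positive; every other embedding in the batch is a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # [2N, D]
    sim = torch.matmul(z, z.t()) / temperature      # pairwise cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))  # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)        # the positive of i is i +/- n
    return F.cross_entropy(sim, targets)

# Usage: encode two augmentations of the same batch, then pull the positives together.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)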


2021, Vol. 259, pp. 112405
Author(s): Martin van Leeuwen, Henry Aaron Frye, Adam Michael Wilson
