Use of Neural Network Supervised Learning to Enhance the Light Environment Adaptation Ability and Validity of Green BIM

Author(s):  
Shang-yuan Chen
Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 403
Author(s):  
Xun Zhang ◽  
Lanyan Yang ◽  
Bin Zhang ◽  
Ying Liu ◽  
Dong Jiang ◽  
...  

The problem of extracting meaningful data through graph analysis spans a range of different fields, such as social networks, knowledge graphs, citation networks, the World Wide Web, and so on. As increasingly structured data become available, the importance of being able to effectively mine and learn from such data continues to grow. In this paper, we propose the multi-scale aggregation graph neural network based on feature similarity (MAGN), a novel graph neural network defined in the vertex domain. Our model provides a simple and general semi-supervised learning method for graph-structured data, in which only a very small part of the data is labeled as the training set. We first construct a similarity matrix by calculating the similarity of original features between all adjacent node pairs, and then generate a set of feature extractors utilizing the similarity matrix to perform multi-scale feature propagation on graphs. The output of multi-scale feature propagation is finally aggregated by using the mean-pooling operation. Our method aims to improve the model representation ability via multi-scale neighborhood aggregation based on feature similarity. Extensive experimental evaluation on various open benchmarks shows the competitive performance of our method compared to a variety of popular architectures.
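
The abstract describes the pipeline only at a high level. The following Python sketch illustrates the general idea of similarity-weighted multi-scale propagation followed by mean pooling; the cosine similarity measure, the number of scales, and the row normalization used here are illustrative assumptions, not the authors' MAGN implementation.

```python
# Minimal sketch of similarity-weighted multi-scale propagation with mean pooling.
# All design choices below (cosine similarity, 3 propagation scales, row
# normalization) are assumptions for illustration only.
import numpy as np

def cosine_similarity_matrix(X, A):
    """Similarity of original features, kept only between adjacent node pairs."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T          # cosine similarity between all node pairs
    return S * A           # keep entries only where an edge exists

def multi_scale_aggregate(X, A, num_scales=3):
    """Propagate features over 1..num_scales hops, then mean-pool the scales."""
    S = cosine_similarity_matrix(X, A)
    P = S / (S.sum(axis=1, keepdims=True) + 1e-12)  # row-normalized propagation matrix
    outputs, H = [], X
    for _ in range(num_scales):
        H = P @ H          # one more hop of similarity-weighted propagation
        outputs.append(H)
    return np.mean(outputs, axis=0)   # mean-pooling over the scales

# Toy usage: 4 nodes with 5-dimensional features on a small undirected graph.
X = np.random.rand(4, 5)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(multi_scale_aggregate(X, A).shape)   # (4, 5)
```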


Plant Science ◽  
2022 ◽  
Vol 314 ◽  
pp. 111118
Author(s):  
Danyan Chen ◽  
Kaikai Yuan ◽  
Junhua Zhang ◽  
Zhisheng Wang ◽  
Zhangtong Sun ◽  
...  

2020 ◽  
Author(s):  
Jinxin Wei

Modeled on how children learn, an auto-encoder is designed that can be split into two parts, each of which works well on its own. The top half is an abstract network, trained by supervised learning, which can be used for classification and regression. The bottom half is a concrete network, obtained as the inverse function of the top half and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The intended functionality of the network is verified by testing on the MNIST dataset with a convolutional neural network. A round function is added between the abstract network and the concrete network in order to obtain a representative generation for each class. The generation ability can be further increased by adding jump connections and negative feedback. Finally, the characteristics of the network are discussed. The input can be changed into any form by the encoder and changed back by the decoder through the inverse function. The concrete network can be seen as a memory stored in its parameters; forgetting (Lethe) occurs when new knowledge is input and the training process changes those parameters.
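
To make the two-part design concrete, the sketch below shows a minimal split auto-encoder with a round function between the halves. It uses small fully connected layers and hypothetical class names (AbstractNet, ConcreteNet); the paper's actual architecture (a convolutional network on MNIST, with jump connections and negative feedback) is not reproduced here.

```python
# Minimal sketch of the two-part auto-encoder: a supervised "abstract" top half
# and a self-supervised "concrete" bottom half, with a round function between
# them. Layer sizes and class names are illustrative assumptions.
import torch
import torch.nn as nn

class AbstractNet(nn.Module):
    """Top half: maps inputs to class scores; trained with supervised learning."""
    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_classes))
    def forward(self, x):
        return self.net(x)

class ConcreteNet(nn.Module):
    """Bottom half: regenerates an input from a class code; trained self-supervised."""
    def __init__(self, num_classes=10, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_classes, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, code):
        return self.net(code)

abstract_net, concrete_net = AbstractNet(), ConcreteNet()
x = torch.rand(8, 784)                              # a batch of flattened images
logits = abstract_net(x)
code = torch.round(torch.softmax(logits, dim=1))    # round function: snap to a one-hot-like class code
recon = concrete_net(code)                          # generate a representative input for that class
print(recon.shape)                                  # torch.Size([8, 784])
```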


2021 ◽  
Author(s):  
Long Ngo Hoang Truong ◽  
Edward Clay ◽  
Omar E. Mora ◽  
Wen Cheng ◽  
Maninder Kaur ◽  
...  

Author(s):  
Shaolei Wang ◽  
Zhongyuan Wang ◽  
Wanxiang Che ◽  
Sendong Zhao ◽  
Ting Liu

Spoken language is fundamentally different from written language in that it contains frequent disfluencies, or parts of an utterance that are corrected by the speaker. Disfluency detection (removing these disfluencies) is desirable to clean the input for use in downstream NLP tasks. Most existing approaches to disfluency detection rely heavily on human-annotated data, which is scarce and expensive to obtain in practice. To tackle the training data bottleneck, in this work, we investigate methods for combining self-supervised learning and active learning for disfluency detection. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled data and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words and (ii) sentence classification to distinguish original sentences from grammatically incorrect sentences. We then combine these two tasks to jointly pre-train a neural network. The pre-trained neural network is then fine-tuned using human-annotated disfluency detection training data. The self-supervised learning method can capture task-specific knowledge for disfluency detection and achieves better performance than other supervised methods when fine-tuned on a small annotated dataset. However, because the pseudo training data are generated with simple heuristics and cannot fully cover all disfluency patterns, there is still a performance gap compared to supervised models trained on the full training dataset. We further explore how to bridge this gap by integrating active learning during the fine-tuning process. Active learning strives to reduce annotation costs by choosing the most critical examples to label and can address the weakness of self-supervised learning with a small annotated dataset. We show that by combining self-supervised learning with active learning, our model is able to match state-of-the-art performance with just about 10% of the original training data on both the commonly used English Switchboard test set and a set of in-house annotated Chinese data.
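
The pseudo-training-data construction can be sketched in a few lines of Python. The function below randomly inserts or deletes words and emits labels for both pre-training tasks; the corruption rates, the filler-word vocabulary, and the helper name make_pseudo_example are assumptions made for illustration, not the authors' exact procedure.

```python
# Minimal sketch: corrupt an unlabeled sentence by random insertions/deletions
# and emit labels for (i) the word-level tagging task (mark inserted words) and
# (ii) the sentence-level classification task (original vs. corrupted).
import random

def make_pseudo_example(tokens, vocab, p_insert=0.15, p_delete=0.1, seed=None):
    rng = random.Random(seed)
    out_tokens, tags, corrupted = [], [], False
    for tok in tokens:
        if rng.random() < p_insert:          # insert a random "noisy" word
            out_tokens.append(rng.choice(vocab))
            tags.append(1)                   # 1 = added word (tagging-task target)
            corrupted = True
        if rng.random() < p_delete:          # drop the original word
            corrupted = True
            continue
        out_tokens.append(tok)
        tags.append(0)                       # 0 = original word
    return out_tokens, tags, int(corrupted)  # last value feeds the sentence classifier

sentence = "i want to book a flight to boston".split()
filler_vocab = ["uh", "um", "well", "you", "know", "like"]
toks, tags, is_corrupted = make_pseudo_example(sentence, filler_vocab, seed=0)
print(list(zip(toks, tags)), is_corrupted)
```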

