Learning to Caricature via Semantic Shape Transform

Author(s):  
Wenqing Chu ◽  
Wei-Chih Hung ◽  
Yi-Hsuan Tsai ◽  
Yu-Ting Chang ◽  
Yijun Li ◽  
...  

Abstract: Caricature is an artistic drawing created to abstract or exaggerate the facial features of a person. Rendering visually pleasing caricatures is a difficult task that requires professional skills, and thus it is of great interest to design a method to automatically generate such drawings. To deal with large shape changes, we propose an algorithm based on a semantic shape transform to produce diverse and plausible shape exaggerations. Specifically, we predict pixel-wise semantic correspondences and perform image warping on the input photo to achieve dense shape transformation. We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures. In addition, our model allows users to manipulate the shape via the semantic map. We demonstrate the effectiveness of our approach on a large photograph-caricature benchmark dataset with comparisons to state-of-the-art methods.
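
The dense shape transformation step (predict per-pixel correspondences, then warp the input photo) can be illustrated with a short sketch. This is not the authors' implementation; the flow field produced by a correspondence predictor is assumed as given, and PyTorch's grid_sample is used for the warp.

```python
# A minimal sketch (not the paper's code) of warping a photo with a predicted
# dense correspondence field. The flow field is assumed to come from some
# correspondence network conditioned on the target semantic map.
import torch
import torch.nn.functional as F

def warp_with_correspondence(photo, flow):
    """photo: (N, C, H, W); flow: (N, 2, H, W) pixel-space offsets (dx, dy)."""
    n, _, h, w = photo.shape
    # Build the normalized base grid in [-1, 1] expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=photo.device),
        torch.linspace(-1, 1, w, device=photo.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and sample the warped image.
    offset = torch.stack((flow[:, 0] / (w / 2), flow[:, 1] / (h / 2)), dim=-1)
    return F.grid_sample(photo, base + offset, align_corners=True)
```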

2020 ◽  
Author(s):  
Yanhua Gao ◽  
Yuan Zhu ◽  
Bo Liu ◽  
Yue Hu ◽  
Youmin Guo

Objective: In transthoracic echocardiographic (TTE) examination, it is essential to identify cardiac views accurately. Computer-aided recognition is expected to improve the accuracy of the TTE examination. Methods: This paper proposes a new deep-learning method for automatic recognition of cardiac views, built on three strategies. First, a spatial transform network learns cardiac shape changes over the cardiac cycle, which reduces intra-class variability. Second, a channel attention mechanism adaptively recalibrates channel-wise feature responses. Finally, unlike conventional deep learning methods that learn from each input image individually, structured signals derived from a graph of similarities among images are transformed into graph-based image embeddings, which act as unsupervised regularization constraints to improve generalization accuracy. Results: The proposed method was trained and tested on 171,792 cardiac images from 584 subjects. Compared with the best previously reported result, the overall accuracy of the proposed method on cardiac view classification is 99.10% vs. 91.7%, and the mean AUC is 99.36%. Moreover, the overall accuracy is 98.15% and the mean AUC is 98.96% on an independent test set of 34,211 images from 100 subjects. Conclusion: The method achieves state-of-the-art results and is expected to serve as an automated tool for cardiac view recognition. This work confirms the potential of deep learning in ultrasound medicine.
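
Of the three strategies, the channel attention mechanism is the most self-contained to sketch. Below is a minimal squeeze-and-excitation style module; it is not the paper's code, and the reduction ratio and layer sizes are illustrative assumptions.

```python
# A minimal sketch of channel attention: squeeze each channel to a scalar by
# global average pooling, then excite with learned per-channel gates.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (N, C, H, W)
        weights = self.fc(x.mean(dim=(2, 3)))          # (N, C) channel gates
        return x * weights.unsqueeze(-1).unsqueeze(-1)  # recalibrated features
```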


2019 ◽  
Vol 7 ◽  
pp. 343-356 ◽  
Author(s):  
Rui Cai ◽  
Mirella Lapata

In this paper we focus on learning dependency-aware representations for semantic role labeling without recourse to an external parser. The backbone of our model is an LSTM-based semantic role labeler jointly trained with two auxiliary tasks: predicting the dependency label of a word and whether there exists an arc linking it to the predicate. The auxiliary tasks provide syntactic information that is specific to semantic role labeling and are learned from training data (dependency annotations) without relying on existing dependency parsers, which can be noisy (e.g., on out-of-domain data or infrequent constructions). Experimental results on the CoNLL-2009 benchmark dataset show that our model outperforms the state of the art in English and consistently improves performance in other languages, including Chinese, German, and Spanish.
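
The joint training setup described above can be sketched as a shared BiLSTM encoder feeding one main head (semantic roles) and two auxiliary heads (dependency label and arc-to-predicate). The sketch below is an illustration only, not the authors' model; the layer sizes and the auxiliary loss weight are placeholder assumptions.

```python
# A minimal sketch of multi-task training for SRL with two auxiliary tasks.
import torch
import torch.nn as nn

class JointSRL(nn.Module):
    def __init__(self, vocab, emb=100, hidden=300, roles=60, dep_labels=40):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.role_head = nn.Linear(2 * hidden, roles)       # main task
        self.dep_head = nn.Linear(2 * hidden, dep_labels)   # aux: dependency label
        self.arc_head = nn.Linear(2 * hidden, 2)             # aux: arc to predicate?

    def forward(self, tokens):  # tokens: (B, T)
        h, _ = self.lstm(self.embed(tokens))
        return self.role_head(h), self.dep_head(h), self.arc_head(h)

def joint_loss(outputs, targets, aux_weight=0.5):
    """Cross-entropy on the main task plus down-weighted auxiliary losses."""
    ce = nn.CrossEntropyLoss()
    role, dep, arc = outputs
    role_y, dep_y, arc_y = targets  # each (B, T) label tensors
    main = ce(role.flatten(0, 1), role_y.flatten())
    aux = ce(dep.flatten(0, 1), dep_y.flatten()) + ce(arc.flatten(0, 1), arc_y.flatten())
    return main + aux_weight * aux
```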


2020 ◽  
Vol 34 (05) ◽  
pp. 8799-8806
Author(s):  
Yuming Shang ◽  
He-Yan Huang ◽  
Xian-Ling Mao ◽  
Xin Sun ◽  
Wei Wei

The noisy labeling problem has been one of the major obstacles for distantly supervised relation extraction. Existing approaches usually assume that noisy sentences are useless and will harm the model's performance. Therefore, they mainly alleviate this problem by reducing the influence of noisy sentences, for example by applying bag-level selective attention or removing noisy sentences from sentence-bags. However, the underlying cause of the noisy labeling problem is not a lack of useful information, but missing relation labels. Intuitively, if we can allocate credible labels to noisy sentences, they will be transformed into useful training data and benefit the model's performance. Thus, in this paper, we propose a novel method for distantly supervised relation extraction, which employs unsupervised deep clustering to generate reliable labels for noisy sentences. Specifically, our model contains three modules: a sentence encoder, a noise detector, and a label generator. The sentence encoder obtains feature representations, the noise detector identifies noisy sentences within sentence-bags, and the label generator produces high-confidence relation labels for the noisy sentences. Extensive experimental results demonstrate that our model outperforms state-of-the-art baselines on a popular benchmark dataset and can indeed alleviate the noisy labeling problem.
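
The relabeling idea can be sketched as follows: sentences flagged by a noise detector are clustered on their encoded features, and each cluster is assigned a single label. This is only an illustrative approximation of the paper's three-module design; the clustering method (k-means here), the noise threshold, and the label-assignment rule are all assumptions.

```python
# A minimal sketch of relabeling noisy sentences via unsupervised clustering.
import numpy as np
from sklearn.cluster import KMeans

def relabel_noisy(features, noise_scores, bag_labels, threshold=0.5, k=10):
    """features: (N, D) sentence encodings; noise_scores, bag_labels: length N."""
    noisy = np.asarray(noise_scores) > threshold
    labels = np.array(bag_labels)
    if noisy.sum() >= k:
        clusters = KMeans(n_clusters=k, n_init=10).fit_predict(features[noisy])
        noisy_idx = np.where(noisy)[0]
        for c in range(k):
            members = noisy_idx[clusters == c]
            # Assign the whole cluster its most common existing label
            # (a simple stand-in for the paper's label generator).
            values, counts = np.unique(labels[members], return_counts=True)
            labels[members] = values[counts.argmax()]
    return labels
```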


Author(s):  
T. A. Welton

Various authors have emphasized the spatial information resident in an electron micrograph taken with adequately coherent radiation. In view of the completion of at least one such instrument, this opportunity is taken to summarize the state of the art of processing such micrographs. We use the usual symbols for the aberration coefficients, and supplement these with ξ and δ for the transverse coherence length and the fractional energy spread, respectively. We also assume a weak, biologically interesting sample, with principal interest lying in the molecular skeleton remaining after obvious hydrogen loss and other radiation damage has occurred.


2003 ◽  
Vol 48 (6) ◽  
pp. 826-829 ◽  
Author(s):  
Eric Amsel

1968 ◽  
Vol 13 (9) ◽  
pp. 479-480
Author(s):  
Lewis Petrinovich

1984 ◽  
Vol 29 (5) ◽  
pp. 426-428
Author(s):  
Anthony R. D'Augelli

1991 ◽  
Vol 36 (2) ◽  
pp. 140-140
Author(s):  
John A. Corson
