structured output
Recently Published Documents

TOTAL DOCUMENTS: 121 (FIVE YEARS: 25)
H-INDEX: 18 (FIVE YEARS: 3)

Author(s): Waleed Mustafa, Yunwen Lei, Antoine Ledent, Marius Kloft

In machine learning we often encounter structured output prediction problems (SOPPs), i.e., problems where the output space admits a rich internal structure. Application domains where SOPPs naturally occur include natural language processing, speech recognition, and computer vision. Typical SOPPs have an extremely large label set, which grows exponentially with the size of the output. Existing generalization analyses imply bounds with at least a square-root dependency on the cardinality d of the label set, which can be vacuous in practice. In this paper, we significantly improve the state of the art by developing novel high-probability bounds with a logarithmic dependency on d. Furthermore, we leverage the lens of algorithmic stability to develop generalization bounds in expectation without any dependency on d. Our results therefore build a solid theoretical foundation for learning in large-scale SOPPs. Finally, we extend our results to learning with weakly dependent data.
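The scale problem these bounds address can be made concrete with a toy calculation (a sketch, not from the paper; `label_set_size` is a hypothetical helper): in sequence labeling, the output space consists of every labeling of a length-n sequence, so its cardinality d grows exponentially with n, and a sqrt(d) bound is vacuous where a log(d) bound is not.

```python
import math

def label_set_size(num_tags: int, seq_len: int) -> int:
    """Cardinality d of the structured output space for sequence labeling:
    every position can take any tag, so d = num_tags ** seq_len."""
    return num_tags ** seq_len

# A modest tagging task: 10 tags, sequences of length 20.
d = label_set_size(num_tags=10, seq_len=20)  # d = 10^20

# A generalization bound scaling like sqrt(d) is astronomically large,
# while a log(d) bound grows only linearly in the sequence length.
print(math.sqrt(d))  # ≈ 1.0e10
print(math.log(d))   # ≈ 46.05
```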


2021
Author(s): Yifan Liu, Hao Chen, Yu Chen, Wei Yin, Chunhua Shen
Keyword(s):


2021, Vol 12 (1), pp. 270-292
Author(s): Najat Alabdullah

This research paper presents a quasi-experimental empirical study investigating the effects of structured input and structured output tasks on the acquisition of English causative forms. The research is framed within VanPatten's (1996) input processing theory; the grammatical form chosen for this investigation is affected by a processing strategy called the First Noun Principle. Three features make this study significant: the participants are young learners, their L1 is Arabic, and the instrumentation is at discourse level; studies of the effectiveness of structured input practice sharing these characteristics are in the minority. The study's main questions are: (i) What are the short-term effects of structured input and structured output on the acquisition of English causative forms as measured with discourse-level interpretation tasks? (ii) What are the short-term effects of structured input and structured output on learners' ability to acquire English causative forms as measured with discourse-level production tasks? Participants were school-age learners (aged 12-13) with Arabic as an L1 who studied English as a second language in Kuwait. A pre- and post-test procedure was adopted, and two instructional groups were created: (i) structured input; (ii) structured output. Discourse-level tasks were used to assess the effectiveness of the two instructional treatments, and results were analyzed using descriptive statistics and ANOVA. The main findings support the view that discourse-level structured input tasks are a useful pedagogical intervention in helping young L2 learners with Arabic as an L1 to process, interpret, and produce accurate English causative forms. The main findings have theoretical and pedagogical implications for language learning and teaching.
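As a sketch of the analysis step named above (the abstract only states that ANOVA was used; the scores below are invented for illustration), a one-way ANOVA F-statistic for two instructional groups can be computed in pure Python:

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F-statistic for a list of groups of scores."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (variation of group means around the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (variation of scores around their group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented post-test scores for the two instructional groups:
structured_input  = [14, 15, 13, 16, 15]
structured_output = [11, 12, 10, 13, 12]
print(one_way_anova_f([structured_input, structured_output]))
```

A large F (relative to the critical value for the chosen alpha and degrees of freedom) indicates a reliable difference between the groups' mean scores.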


2021, Vol 2021, pp. 1-12
Author(s): Kaiyun Yang, Xuedong Wu, Jingxiang Xu

The structured output tracking algorithm is a visual target tracking algorithm with excellent overall performance in recent years. However, its classifier accumulates erroneous information, resulting in target loss or tracking failure, when the target is occluded or its scale changes during tracking. In this work, a real-time structured output tracker with scale adaptation is proposed: (1) target position prediction is added to the tracking process to improve real-time performance; (2) an adaptive target-scale discrimination scheme is introduced into the structured support vector machine to improve overall tracking accuracy; and (3) a Kalman filter is used to handle occlusion during continuous tracking. Extensive evaluations on the OTB-2015 benchmark dataset with 100 sequences show that the proposed tracking algorithm runs at a highly efficient 84 fps and performs favorably against other tracking algorithms.
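A minimal sketch of the Kalman-filter idea used for occlusion handling (assuming a constant-velocity motion model; this is not the authors' code, and the class and noise parameters are hypothetical): while the target is visible, detections update the filter; during occlusion, the update step is skipped and the tracker coasts on the prediction alone.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of the target centre."""

    def __init__(self, pos, q=1e-3, r=1.0):
        self.x = [pos, 0.0]                # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def predict(self, dt=1.0):
        # x' = F x with F = [[1, dt], [0, 1]];  P' = F P F^T + Q
        p, v = self.x
        self.x = [p + v * dt, v]
        P = self.P
        self.P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
                   P[0][1] + dt * P[1][1]],
                  [P[1][0] + dt * P[1][1],
                   P[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        # Measurement is the detected position, so H = [1, 0].
        s = self.P[0][0] + self.r          # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s  # Kalman gain
        y = z - self.x[0]                  # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        self.P = [[(1 - k0) * self.P[0][0], (1 - k0) * self.P[0][1]],
                  [self.P[1][0] - k1 * self.P[0][0],
                   self.P[1][1] - k1 * self.P[0][1]]]

kf = Kalman1D(pos=100.0)
for z in [101.0, 102.2, 103.1]:  # detections while the target is visible
    kf.predict()
    kf.update(z)
predicted = kf.predict()         # target occluded: coast on prediction alone
```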


Electronics, 2020, Vol 9 (9), pp. 1508
Author(s): Kun Zhang, Yuanjie Zheng, Xiaobo Deng, Weikuan Jia, Jian Lian, ...

The goal of few-shot learning is to learn quickly from a low-data regime. Structured output tasks like segmentation are challenging for few-shot learning because their outputs are high-dimensional and statistically dependent. For this problem, we propose improved guided networks and combine them with a fully connected conditional random field (CRF). The guided network extracts task representations from annotated support images through feature fusion to do fast, accurate inference on new unannotated query images. By bringing together few-shot learning methods and fully connected CRFs, our method achieves accurate object segmentation by overcoming the poor localization properties of deep convolutional neural networks, and can quickly update to new tasks, without further optimization, when faced with new data. Our guided network achieves leading accuracy in terms of annotation volume and time.
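One common way to extract a task representation from annotated support images, plausibly similar in spirit to the feature fusion described above (the function below is a hypothetical sketch, not the authors' implementation), is masked average pooling: average the support feature vectors only over annotated foreground positions.

```python
def masked_average_pool(features, mask):
    """features: H x W grid of C-dim feature vectors; mask: H x W binary annotation.
    Returns the C-dim mean of feature vectors at foreground (mask == 1) positions."""
    C = len(features[0][0])
    total = [0.0] * C
    count = 0
    for row_f, row_m in zip(features, mask):
        for vec, m in zip(row_f, row_m):
            if m:
                total = [t + v for t, v in zip(total, vec)]
                count += 1
    return [t / count for t in total]

# Tiny 2x2 "feature map" with 2 channels; the left column is annotated foreground.
feats = [[[1.0, 0.0], [3.0, 2.0]],
         [[5.0, 4.0], [7.0, 6.0]]]
mask = [[1, 0],
        [1, 0]]
print(masked_average_pool(feats, mask))  # → [3.0, 2.0]
```

The pooled vector can then be fused with query features (e.g., by concatenation or channel-wise comparison) to guide per-pixel inference on the query image.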


Author(s): Quan Guo, Hossein Rajaby Faghihi, Yue Zhang, Andrzej Uszok, Parisa Kordjamshidi

Structured learning algorithms usually involve an inference phase that selects the best global assignment of output variables based on the local scores of all possible assignments. We extend deep neural networks with structured learning to combine the power of learned representations with domain knowledge in the form of output constraints during training. Introducing a non-differentiable inference module into gradient-based training is a critical challenge. Compared to conventional loss functions that penalize every local error independently, we propose an inference-masked loss that takes into account the effect of inference and does not penalize local errors that can be corrected by the inference. We empirically show that the inference-masked loss, combined with the negative log-likelihood loss, improves performance on different tasks, namely entity relation recognition on the CoNLL04 and ACE2005 corpora, and spatial role labeling on the CLEF 2017 mSpRL dataset. We show the proposed approach helps to achieve better generalizability, particularly in the low-data regime.
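A minimal sketch of the inference-masked idea (illustrative only; the names and probabilities are invented, and the real loss operates on network scores during training): zero out the local negative log-likelihood at positions whose labels the constrained inference step already corrects, so the gradient does not fight decisions the inference will fix anyway.

```python
import math

def inference_masked_nll(probs, gold, inferred):
    """probs[i][y]: local probability of label y at position i;
    gold[i]: gold label; inferred[i]: label chosen by constrained inference.
    Local errors that inference corrects contribute no loss."""
    loss = 0.0
    for p, g, z in zip(probs, gold, inferred):
        if z == g:
            continue  # inference already yields the gold label: no penalty
        loss += -math.log(p[g])  # ordinary NLL for uncorrected positions
    return loss

probs = [{"A": 0.9, "B": 0.1},   # local model confident and right
         {"A": 0.2, "B": 0.8}]   # local model wrong...
gold     = ["A", "A"]
inferred = ["A", "A"]            # ...but the constraints correct it
print(inference_masked_nll(probs, gold, inferred))  # → 0.0
```

With conventional NLL the second position would still be penalized (loss of -log 0.2) even though the final structured prediction is already correct.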

