i-Vectors in Language Modeling: An Efficient Way of Domain Adaptation for Feed-Forward Models

Author(s):  
Karel Beneš ◽  
Santosh Kesiraju ◽  
Lukáš Burget
2018 ◽  
Vol 32 (15) ◽  
pp. 5041-5052
Author(s):  
Georgios N. Kouziokas ◽  
Alexander Chatzigeorgiou ◽  
Konstantinos Perakis

Author(s):  
Jarmo Nurmi ◽  
Jouni Mattila

Hydraulic manipulators on mobile machines, whose hydraulic actuators are usually controlled by mobile hydraulic valves, are being considered for robotic closed-loop control. A feed-forward-based strategy combined with position and velocity feedback has been found to be an effective method for the motion control of pressure-compensated mobile hydraulic valves that have a significant dead zone. Although the feed-forward model can be identified manually, doing so for each valve-actuator pair is time-consuming and error-prone. For this practical reason, we propose an automated feed-forward learning method based on velocity and position feedback. We present experimental results for a heavy-duty hydraulic manipulator on a forest forwarder to demonstrate the effectiveness of the proposed method. These results motivate the automated identification of velocity feed-forward models for the motion control of heavy-duty hydraulic manipulators driven by pressure-compensated mobile hydraulic valves with a significant input dead zone.
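
As a concrete illustration (not the paper's actual identification procedure), the sketch below assumes a piecewise-affine inverse valve model, u = dz + k·v for each direction of motion, and fits its dead-zone offset and gain by least squares from logged command-velocity pairs; all function names, the model structure, and the threshold `v_eps` are hypothetical.

```python
import numpy as np

def identify_feedforward(commands, velocities, v_eps=1e-3):
    """Estimate a piecewise-affine inverse valve model u = dz + k*v
    from logged (valve command, actuator velocity) pairs, fitted
    separately for each direction of motion. A generic least-squares
    identification sketch, not the paper's exact learning method."""
    params = {}
    for sign, mask in (("pos", velocities > v_eps),
                       ("neg", velocities < -v_eps)):
        v, u = velocities[mask], commands[mask]
        # Fit u = dz + k * v in the moving region of this direction;
        # dz approximates the edge of the input dead zone.
        A = np.column_stack([np.ones_like(v), v])
        (dz, k), *_ = np.linalg.lstsq(A, u, rcond=None)
        params[sign] = (dz, k)
    return params

def feedforward_command(v_ref, params):
    """Dead-zone-compensating velocity feed-forward: map the desired
    actuator velocity to a valve command via the inverse model."""
    if v_ref > 0.0:
        dz, k = params["pos"]
    elif v_ref < 0.0:
        dz, k = params["neg"]
    else:
        return 0.0
    return dz + k * v_ref
```

In closed loop, such a feed-forward term would typically be summed with a position-feedback correction, which matches the combined feed-forward-plus-feedback strategy the abstract describes.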


2020 ◽  
Vol 34 (05) ◽  
pp. 9114-9121
Author(s):  
David Vilares ◽  
Michalina Strzyz ◽  
Anders Søgaard ◽  
Carlos Gómez-Rodríguez

Recent analyses suggest that encoders pretrained for language modeling capture certain morpho-syntactic structure. However, probing frameworks for word vectors still do not report results on standard setups such as constituent and dependency parsing. This paper addresses that gap and performs full parsing (on English) relying only on pretraining architectures, with no decoding. We first cast constituent and dependency parsing as sequence tagging. We then use a single feed-forward layer to directly map word vectors to labels that encode a linearized tree. This setup is used to: (i) see how far we can get at syntax modelling with just pretrained encoders, and (ii) shed some light on the syntax-sensitivity of different word vectors (by freezing the weights of the pretraining network during training). For evaluation, we use bracketing F1-score and LAS, and analyze in-depth differences across representations for span lengths and dependency displacements. The overall results surpass existing sequence-tagging parsers on the PTB (93.5% F1) and end-to-end EN-EWT UD (78.8% LAS).
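
As a rough sketch of this setup (the paper's exact tree-linearization label scheme is not reproduced here), the following assumes a HuggingFace-style pretrained encoder whose weights are frozen, with a single linear layer trained to emit one tree-encoding label per token; `bert-base-cased` and `num_labels` are placeholder choices, and subword-to-word alignment is omitted for brevity.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class SequenceTaggingParser(nn.Module):
    """Frozen pretrained encoder plus one feed-forward layer that maps
    each contextual word vector to a label encoding a linearized tree."""
    def __init__(self, encoder_name: str, num_labels: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Freeze the encoder so training probes its syntax-sensitivity
        # rather than fine-tuning it.
        for p in self.encoder.parameters():
            p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)  # the only trained layer

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            hidden = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, num_labels)

# Hypothetical usage: one logit vector per (sub)token.
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = SequenceTaggingParser("bert-base-cased", num_labels=500)
batch = tok(["The cat sat on the mat ."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

Because there is no decoder, the predicted label sequence itself must be deterministically mapped back to a constituent or dependency tree, which is what makes the approach a direct probe of the pretrained representations.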


2013 ◽  
Vol 13 (9) ◽  
pp. 1047-1047
Author(s):  
M. C. Potter ◽  
C. E. Hagmann ◽  
B. Wyble

2021 ◽  
Author(s):  
Constantinos Karouzos ◽  
Georgios Paraskevopoulos ◽  
Alexandros Potamianos

Author(s):  
Marco Iosa ◽  
Leonardo Gizzi ◽  
Federica Tamburella ◽  
Nadia Dominici
