Evolving graph convolutional networks for neural architecture search

Author(s):  
George Kyriakides ◽  
Konstantinos Margaritis
2020 ◽  
Vol 9 (5) ◽  
pp. 2082-2089
Author(s):  
Fredy H. Martínez S. ◽  
Faiber Robayo Betancourt ◽  
Mario Arbulú

Sign languages (or signed languages) are languages that use visual techniques, primarily hand gestures, to convey information and enable communication with deaf people. These languages are traditionally learned only by those who need them, which makes communication between deaf and hearing people difficult. To address this problem we propose an autonomous model based on convolutional networks that translates Colombian Sign Language (CSL) into standard Spanish text. The scheme uses characteristic images of each static sign of the language, drawn from a set of 24,000 images (1,000 images per category, 24 categories), to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover multiple viewing angles. The performance evaluation showed that the system is capable of recognizing all 24 signs used, with an 88% recognition rate.
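The evaluation described above can be sketched as a small Python example. This is a hedged illustration of the protocol only (24 sign categories, a held-out test split, and an overall recognition rate), not the authors' pipeline: the split size and error process below are hypothetical, and random predictions stand in for the trained NASNet model.

```python
import numpy as np

# Hypothetical evaluation sketch: 24 sign categories with a held-out
# test split of 50 images per sign (assumed, not from the paper).
rng = np.random.default_rng(1)
num_classes = 24
per_class_test = 50
y_true = np.repeat(np.arange(num_classes), per_class_test)

# Simulate a classifier with roughly 12% errors in place of NASNet.
y_pred = y_true.copy()
flip = rng.random(y_true.size) > 0.88
y_pred[flip] = rng.integers(0, num_classes, flip.sum())

# Overall recognition rate, as reported in the abstract (~88%).
recognition_rate = (y_pred == y_true).mean()
print(f"recognition rate: {recognition_rate:.2%}")
```

In practice the same accuracy computation applies unchanged once `y_pred` comes from the real model's argmax outputs on the test images.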


2020 ◽  
Vol 34 (05) ◽  
pp. 9000-9007
Author(s):  
Sowmya S Sundaram ◽  
Deepak P ◽  
Savitha Sam Abraham

We consider the task of learning distributed representations for arithmetic word problems. We outline the characteristics of the domain of arithmetic word problems that make generic text-embedding methods inadequate, necessitating a specialized representation-learning method to facilitate retrieval across a wide range of use cases within online learning platforms. Our contribution is twofold: first, we propose several 'operators' that distil knowledge of the domain of arithmetic word problems and their schemas into word-problem transformations. Second, we propose a novel neural architecture that combines LSTMs with graph convolutional networks, leveraging word problems and their operator-transformed versions to learn distributed representations for word problems. While our target is to ensure that the distributed representations are schema-aligned, we do not make use of schema labels in the learning process, thus yielding an unsupervised representation-learning method. Through an evaluation on retrieval over a publicly available corpus of word problems, we show that our framework consistently improves upon contemporary generic text embeddings in terms of schema alignment.
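The graph-convolutional component mentioned above can be illustrated with a minimal sketch of one GCN propagation step, H' = ReLU(D̂⁻¹ᐟ² (A + I) D̂⁻¹ᐟ² H W). This is not the authors' architecture; the toy graph (a word problem linked to three operator-transformed versions), feature dimensions, and weights are all hypothetical.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                   # add self-loops
    deg = a_hat.sum(axis=1)                   # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalization
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU

# Toy graph: node 0 is a word problem, nodes 1-3 are its
# operator-transformed versions (hypothetical structure).
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 3))   # 3-dim input features per node
w = rng.standard_normal((3, 2))   # learned projection to 2 dims
out = gcn_layer(adj, h, w)
print(out.shape)  # (4, 2)
```

In a full model, `h` would come from an LSTM encoding of each problem's text, and several such layers would be stacked before the representations are read off.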


1992 ◽  
Author(s):  
William Ross ◽  
Ennio Mingolla

Author(s):  
Hanna Mazzawi ◽  
Xavi Gonzalvo ◽  
Aleks Kracun ◽  
Prashant Sridhar ◽  
Niranjan Subrahmanya ◽  
...  
