Cross-device matching approaches: word embedding and supervised learning
2021
Author(s): Frank Yeong-Sung Lin, Chiu-Han Hsiao, Si-Yuan Zhang, Yi-Ping Rung, Yu-Xuan Chen
2017 ◽ Vol 43 (5) ◽ pp. 330-340
Author(s): Deokseong Seo, Kyoung Hyun Mo, Jaesun Park, Gichang Lee, Pilsung Kang

Petir ◽ 2021 ◽ Vol 14 (2) ◽ pp. 247-257
Author(s): Meredita Susanty, Sahrul Sukardi

Named-Entity Recognition (NER) extracts information from text by identifying entities such as person names, organizations, locations, times, and other categories. Recently, machine-learning approaches, particularly deep learning, have been widely used to recognize entity patterns in sentences. Embedding, the process of converting text into numeric vectors, translates high-dimensional representations into a relatively low-dimensional space; embeddings make it easier to apply machine learning to large inputs such as sparse vectors representing words. The embedding process can be performed with a supervised learning method, which requires a large labeled data set, or with an unsupervised learning approach. This study compares the two embedding methods: a trainable embedding layer (supervised learning) and pre-trained word embeddings (unsupervised learning). The trainable embedding layer uses the embedding layer provided by the Keras library, while the pre-trained word embeddings use word2vec, GloVe, and fastText; in both cases the NER model is built on a BiLSTM architecture. The results show that GloVe outperformed the other embedding techniques, with a micro-averaged F1 score of 76.48.
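As a rough illustration of the two ideas in the abstract, the sketch below shows (a) a randomly initialized embedding matrix used as a token-to-vector lookup, which is what a trainable embedding layer amounts to before backpropagation updates it, and (b) a micro-averaged F1 computation over per-token entity tags. The toy vocabulary, embedding dimension, and tag sequences are hypothetical examples, not data from the study.

```python
import numpy as np

# Hypothetical toy vocabulary; a real NER corpus has thousands of tokens.
vocab = {"<pad>": 0, "John": 1, "lives": 2, "in": 3, "Jakarta": 4}
embedding_dim = 8

# A trainable embedding layer is essentially a weight matrix with one row
# per token. Keras initializes it randomly and updates it during supervised
# training; here it stays fixed because this sketch has no training loop.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), embedding_dim))

def embed(tokens):
    """Map a token sequence to a (seq_len, embedding_dim) array of vectors."""
    return embedding_matrix[[vocab[t] for t in tokens]]

def micro_f1(y_true, y_pred, outside="O"):
    """Micro-averaged F1 over per-token entity tags, ignoring the 'O' tag."""
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        if p != outside and p == t:
            tp += 1          # entity tag predicted correctly
        elif p != outside and p != t:
            fp += 1          # spurious or wrong entity tag
        if t != outside and p != t:
            fn += 1          # true entity tag missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

vectors = embed(["John", "lives", "in", "Jakarta"])
print(vectors.shape)  # (4, 8)

# One spurious entity tag: precision 2/3, recall 1.0, so F1 = 0.8
print(micro_f1(["B-PER", "O", "O", "B-LOC"],
               ["B-PER", "O", "B-LOC", "B-LOC"]))
```

A pre-trained embedding (word2vec, GloVe, or fastText) would replace the random matrix with vectors learned from a large unlabeled corpus, which is the comparison the study makes.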


2018 ◽ Vol 2018 (15) ◽ pp. 132-1-1323
Author(s): Shijie Zhang, Zhengtian Song, G. M. Dilshan P. Godaliyadda, Dong Hye Ye, Atanu Sengupta, ...
