Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements

Author(s): Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, ...

Author(s): Ruihua Ji, Junyu Pei, Wenhua Yang, Juan Zhai, Minxue Pan, ...

2021, Vol ahead-of-print (ahead-of-print)
Author(s): Qianwen Yang, Xiang Gong

Purpose
The engagement–addiction dilemma has been commonly observed in the information technology (IT) industry, yet the issue has received limited research attention in the information systems (IS) discipline. Drawing on the stimulus–organism–response (SOR) framework, this study explores the engagement–addiction dilemma in the use of mobile games and highlights the impacts of game design features, namely mobile user interface and mobile game affordance.

Design/methodology/approach
The research model was empirically validated using longitudinal survey data from 410 mobile game users in China.

Findings
The empirical results offer several key findings. First, mobile user interface and mobile game affordance positively affect telepresence and social presence, which in turn lead to meaningful engagement and mobile game addiction. Second, a high-quality mobile user interface positively moderates the effects of mobile game affordance on telepresence and social presence.

Originality/value
This study contributes to the literature by theorizing and empirically testing the impacts of game design features on the engagement–addiction dilemma.
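The moderation result reported in the Findings can be made concrete with a small worked example. Below is a minimal, hypothetical sketch in Python using statsmodels that tests an interaction (moderation) effect of user-interface quality on the affordance–telepresence relationship; the variable names, the simulated data, and the OLS specification are illustrative assumptions, not the authors' measurement model or dataset.

```python
# Hypothetical illustration of a moderation (interaction) test; the variables
# and data below are simulated and do not come from the study's survey.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 410  # matches the reported sample size, purely for flavor

df = pd.DataFrame({
    "affordance": rng.normal(size=n),   # perceived mobile game affordance
    "ui_quality": rng.normal(size=n),   # perceived mobile user interface quality
})
# Simulate telepresence so that the affordance slope grows with ui_quality.
df["telepresence"] = (0.4 * df["affordance"]
                      + 0.3 * df["ui_quality"]
                      + 0.2 * df["affordance"] * df["ui_quality"]
                      + rng.normal(scale=0.5, size=n))

# "affordance * ui_quality" expands to both main effects plus their interaction;
# the affordance:ui_quality coefficient captures the moderation effect.
model = smf.ols("telepresence ~ affordance * ui_quality", data=df).fit()
print(model.params)
```

In the actual study, such an effect would more likely be estimated within a structural equation model over latent constructs, but the interaction-term logic is the same.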


2019
Author(s): Leonardo Nozaki, Luciana Zaina

The use of User Interface Design Patterns (UIDP) is regarded as good practice in the development of interactive software. However, when applying these patterns, the interface designer can introduce accessibility problems into the software. The goal of this work is to present the evaluation process of a set of recommendations for using UIDPs in the development of mobile applications in a way that avoids introducing accessibility problems.


Author(s): Md. Asifuzzaman Jishan, Khan Raqib Mahmud, Abul Kalam Al Azad

We present a learning model that generates natural language descriptions of images. The model exploits the connections between natural language and visual data by producing text-line-based content from a given image. Our Hybrid Recurrent Neural Network model combines Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bi-directional Recurrent Neural Network (BRNN) components. We conducted experiments on three benchmark datasets: Flickr8K, Flickr30K, and MS COCO. Our hybrid model uses the LSTM to encode text lines or sentences independently of object location and the BRNN for word representation, which reduces computational complexity without compromising the accuracy of the descriptor. The model achieves improved accuracy in retrieving natural language descriptions on these datasets.
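As a rough illustration of the hybrid architecture sketched above, here is a minimal PyTorch example that combines a CNN image encoder, a bi-directional RNN for word representation, and an LSTM sentence encoder, and scores image–caption compatibility for retrieval. The class name, the ResNet-18 backbone, the GRU-based BRNN, the cosine-similarity scoring, and all hyperparameters are assumptions for illustration; they are not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class HybridCaptionScorer(nn.Module):
    """Scores image-caption compatibility (retrieval-style) with CNN + BRNN + LSTM."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: ResNet-18 backbone with random weights (kept local so the
        # example runs without downloading pretrained weights).
        resnet = models.resnet18()
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])  # drop the fc layer
        self.img_proj = nn.Linear(512, hidden_dim)

        # Bi-directional GRU over word embeddings: context-aware word representations.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.brnn = nn.GRU(embed_dim, hidden_dim // 2,
                           bidirectional=True, batch_first=True)

        # LSTM encodes the text line into a single sentence vector,
        # independent of where objects appear in the image.
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, images, captions):
        # images: (B, 3, 224, 224); captions: (B, T) integer token ids
        img_vec = self.img_proj(self.cnn(images).flatten(1))   # (B, hidden_dim)

        word_ctx, _ = self.brnn(self.embed(captions))          # (B, T, hidden_dim)
        _, (sent_vec, _) = self.lstm(word_ctx)                 # h_n: (1, B, hidden_dim)
        sent_vec = sent_vec.squeeze(0)                         # (B, hidden_dim)

        # Cosine similarity between image and sentence embeddings serves as the
        # image-caption compatibility score used for retrieval.
        return F.cosine_similarity(img_vec, sent_vec, dim=-1)  # (B,)

# Example forward pass on random data to show the expected shapes.
model = HybridCaptionScorer(vocab_size=10000)
scores = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(scores.shape)  # torch.Size([2])
```

Training such a scorer would typically use a ranking loss over matching and mismatching image–caption pairs, which is the usual setup for the retrieval results the abstract refers to.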

