Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation

Author(s): Wanrong Zhu, Xin Wang, Tsu-Jui Fu, An Yan, Pradyumna Narayana, ...
2019 · Vol 3 (2) · pp. 27
Author(s): Siti Sarah Fitriani, Nira Erdiana, Iskandar Abdul Samad

Visualisation has been used for decades as a strategy to help readers construct meaning from reading passages. Teachers across the globe have introduced visualisation mostly to primary-school students reading in their native language, who use the strategy to understand texts in their own language. Little is known about how the strategy works for university students learning a foreign language. Visualisation can be done internally (by creating mental imagery) or externally (by drawing a visual representation). The products of visualising texts with both models can be further investigated to determine whether the represented meaning matches the meaning written in the text. This study therefore aims to explore meaning by analysing the visual representations drawn by 26 English Education Department students of Syiah Kuala University after they read a narrative text. The exploration was conducted by examining the image-word relations in the drawings. To do so, we consulted Chan and Unsworth (2011), Chan (2010) and Unsworth and Chan (2009) on image-language interaction in multimodal texts. The analysis found that equivalence, additive and interdependent relations were most often involved in the students' visual representations, and that these relations helped considerably in representing meaning. Meanwhile, the other three relations, namely word-specific, picture-specific and parallel, were rarely used. In addition, most students created their representations in a form of design appropriate for representing a narrative text. Further discussion of the relations among image-word relations, types of design and students' comprehension is also presented in this paper.


2019
Author(s): Utsav Krishnan, Akshal Sharma, Pratik Chattopadhyay

2021 · Vol 11 (15) · pp. 7034
Author(s): Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions such as rain, fog, mist, and snow. Images captured under inclement conditions degrade the performance of vision systems, so these systems need to detect, recognize, and remove the noise caused by rain, snow, and mist to boost the performance of their image-processing algorithms. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network was composed of two parts: an attentive-recurrent network and a contextual autoencoder. The attentive-recurrent network generated an attention map to detect raindrops, and the contextual autoencoder then produced the de-rained image. We increased the number of attentive-recurrent network layers to mitigate gradient sparsity, making generation more stable without preventing the network from converging. The experimental results confirmed that the extended ATTGAN could effectively remove various types of raindrops from images.
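As a concrete illustration of the two-part generator the abstract describes, below is a minimal PyTorch sketch of an attentive-recurrent network feeding a contextual autoencoder. The module names, layer widths, conv-GRU gating (a lightweight stand-in for the ConvLSTM cell commonly used in attentive GANs), and the number of recurrent steps are all illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class AttentiveRecurrent(nn.Module):
    """Recurrently refines a single-channel attention map that highlights
    likely raindrop regions (illustrative stand-in for the paper's network)."""

    def __init__(self, channels: int = 32, steps: int = 4):
        super().__init__()
        self.steps = steps  # deeper stack = more refinement passes
        self.conv_in = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Conv-GRU style gating instead of a full ConvLSTM cell.
        self.gate = nn.Conv2d(channels * 2, channels, 3, padding=1)
        self.cand = nn.Conv2d(channels * 2, channels, 3, padding=1)
        self.to_attn = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, image):
        b, _, h, w = image.shape
        attn = torch.full((b, 1, h, w), 0.5, device=image.device)
        hidden = torch.zeros(b, self.gate.out_channels, h, w, device=image.device)
        maps = []
        for _ in range(self.steps):
            feat = self.conv_in(torch.cat([image, attn], dim=1))
            z = torch.sigmoid(self.gate(torch.cat([feat, hidden], dim=1)))
            cand = torch.tanh(self.cand(torch.cat([feat, hidden], dim=1)))
            hidden = (1 - z) * hidden + z * cand  # gated state update
            attn = torch.sigmoid(self.to_attn(hidden))
            maps.append(attn)
        return maps


class ContextualAutoencoder(nn.Module):
    """Encoder-decoder mapping (rainy image, final attention map) to a
    de-rained image."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),   # downsample
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),  # wider context
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),  # upsample
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, image, attn):
        return torch.tanh(self.net(torch.cat([image, attn], dim=1)))


class Generator(nn.Module):
    """Attention map first, then reconstruction conditioned on it."""

    def __init__(self, steps: int = 4):
        super().__init__()
        self.attentive = AttentiveRecurrent(steps=steps)
        self.autoencoder = ContextualAutoencoder()

    def forward(self, rainy):
        maps = self.attentive(rainy)
        return self.autoencoder(rainy, maps[-1]), maps


if __name__ == "__main__":
    rainy = torch.randn(1, 3, 128, 128)          # dummy rainy image
    derained, maps = Generator(steps=6)(rainy)   # extended recurrent depth
    print(derained.shape, len(maps))             # torch.Size([1, 3, 128, 128]) 6
```

Raising `steps` mirrors the extension of the attentive-recurrent stack described above; in actual training, the intermediate attention maps would typically be supervised against ground-truth raindrop masks, with adversarial and reconstruction losses applied to the autoencoder output.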


Author(s): Xide Xia, Tianfan Xue, Wei-sheng Lai, Zheng Sun, Abby Chang, ...
