Visual Analysis of Multimodal Movie Network Data Based on the Double-Layered View

2015 ◽  
Vol 2015 ◽  
pp. 1-16 ◽  
Author(s):  
Ying Tang ◽  
Jia Yu ◽  
Chen Li ◽  
Jing Fan

Multimodal visualization of network data considers the various types, or modes, of nodes and visualizes each node according to its mode. Compared to traditional network visualization, in which all nodes share the same mode, this approach treats different modes of entities in mode-specific ways and presents the relations between them more clearly. In this paper, we apply the method to movie network data, a typical multimodal graph that contains nodes of different types and the connections between them. We use an improved force-directed layout algorithm to present movie persons in the foreground and a density map to present films in the background. By combining the two layers, the movie network data are presented clearly in a single picture. User interactions are provided, including toggling detailed pie charts, zooming, and panning. We apply our visualization method to Chinese movie data from the Douban website. To verify the effectiveness of the method, we designed and conducted a user study and analyzed the resulting statistics.
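A minimal sketch of the double-layer idea follows, assuming synthetic data and the networkx/scipy/matplotlib stack rather than the authors' implementation: person nodes are placed by a force-directed layout in the foreground, while a kernel-density map of the film positions forms the background.

```python
# Minimal sketch (not the authors' implementation): a two-layer movie-network view,
# with person nodes laid out by a force-directed algorithm in the foreground and a
# kernel-density "film" map rendered as the background. Data here are synthetic.
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy bipartite data: persons connected to films they appeared in.
G = nx.Graph()
persons = [f"person_{i}" for i in range(15)]
films = [f"film_{j}" for j in range(6)]
for p in persons:
    for f in rng.choice(films, size=2, replace=False):
        G.add_edge(p, f)

# Foreground: force-directed layout of the full graph.
pos = nx.spring_layout(G, seed=42)

# Background: density map estimated from film positions only.
film_xy = np.array([pos[f] for f in films]).T
kde = gaussian_kde(film_xy)
xs, ys = np.mgrid[-1.2:1.2:200j, -1.2:1.2:200j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(density.T, origin="lower", extent=(-1.2, 1.2, -1.2, 1.2), cmap="Blues")
nx.draw_networkx_nodes(G, pos, nodelist=persons, node_size=60, node_color="orange", ax=ax)
nx.draw_networkx_edges(G, pos, alpha=0.3, ax=ax)
ax.set_axis_off()
plt.show()
```

In the paper the foreground uses an improved force-directed algorithm; spring_layout merely stands in for it here.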

PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251057
Author(s):  
Miquel Mascaró ◽  
Francisco J. Serón ◽  
Francisco J. Perales ◽  
Javier Varona ◽  
Ramon Mas

Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for the generation of facial expressions associated with laughter and smiling in order to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is presented, which catalogues the different types of laughs classified and generated in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character's appearance.
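As a toy sketch of the general idea, not the paper's model: a laughter intensity envelope drives a few hypothetical facial blendshape weights over time; the envelope shape, blendshape names, and gains are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's model): drive hypothetical facial
# blendshape weights for a smile/laugh from a time-varying intensity envelope.
import numpy as np

def laughter_envelope(t, onset=0.5, peak=1.5, decay=3.0):
    """Simple attack/decay intensity curve in [0, 1] (an assumed parameterisation)."""
    attack = np.clip((t - onset) / (peak - onset), 0.0, 1.0)
    release = np.exp(-np.clip(t - peak, 0.0, None) / (decay - peak))
    return attack * release

def blendshape_weights(intensity):
    """Map intensity to a few hypothetical blendshapes; the gains are made up."""
    return {
        "lip_corner_puller": 0.9 * intensity,                       # smile
        "jaw_drop": 0.6 * np.maximum(intensity - 0.5, 0.0) * 2.0,   # open-mouth laugh
        "eye_squint": 0.4 * intensity,
    }

for ti in np.linspace(0.0, 4.0, 9):
    w = blendshape_weights(laughter_envelope(ti))
    print(f"t={ti:.1f}s", {k: round(float(v), 2) for k, v in w.items()})
```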


2021 ◽  
Vol 12 (4) ◽  
pp. 1-28
Author(s):  
Guodao Sun ◽  
Hao Wu ◽  
Lin Zhu ◽  
Chaoqing Xu ◽  
Haoran Liang ◽  
...  

With the rapid development of the mobile Internet, the popularity of video capture devices has brought a surge in multimedia video resources. Using machine learning methods combined with well-designed features, we can automatically obtain video summaries that alleviate video resource consumption and retrieval issues. However, there is always a gap between the summaries produced by a model and those annotated by users. How to help users understand this difference, provide insights for improving the model, and enhance trust in the model remains challenging. To address these challenges, we propose VSumVis, a visual analysis system designed under a user-centered methodology that supports multi-feature examination and multi-level exploration, helping users explore and analyze video content as well as the intrinsic relationships within our video summarization model. The system contains multiple coordinated views, i.e., a video view, a projection view, a detail view, and a sequential frames view. A multi-level analysis process that integrates video events and frames is presented through cluster and node visualizations in our system. Temporal patterns concerning the difference between the manual annotation score and the saliency score produced by our model are further investigated and distinguished with the sequential frames view. Moreover, we propose a set of rich user interactions that enable an in-depth, multi-faceted analysis of the features in our video summarization model. We conducted case studies and interviews with domain experts to provide anecdotal evidence of the effectiveness of our approach. Quantitative feedback from a user study confirms the usefulness of our visual system for exploring the video summarization model.
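As a rough illustration of the kind of comparison the sequential frames view supports, the sketch below uses synthetic per-frame scores (the `manual` and `saliency` arrays and the window size are assumptions) to smooth the per-frame difference between annotation and model saliency and to report the most divergent frames.

```python
# Minimal sketch (assumed data, not the VSumVis pipeline): locate temporal spans
# where a model's per-frame saliency scores diverge most from manual annotations.
import numpy as np

rng = np.random.default_rng(1)
n_frames = 300
manual = np.clip(rng.normal(0.5, 0.2, n_frames), 0, 1)              # user-annotated importance
saliency = np.clip(manual + rng.normal(0.0, 0.15, n_frames), 0, 1)  # model output (toy)

diff = np.abs(saliency - manual)

# Smooth the per-frame difference over a sliding window so that sustained
# disagreements stand out rather than single noisy frames.
window = 15
kernel = np.ones(window) / window
smoothed = np.convolve(diff, kernel, mode="same")

# Report the three frames with the largest smoothed disagreement
# (candidate spans to inspect in a detail view).
top = np.argsort(smoothed)[::-1][:3]
for frame in sorted(top):
    print(f"frame {frame}: mean |saliency - manual| over window = {smoothed[frame]:.3f}")
```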


Information ◽  
2019 ◽  
Vol 10 (12) ◽  
pp. 366 ◽  
Author(s):  
Roberto Yuri da Silva Franco ◽  
Rodrigo Santos do Amor Divino Lima ◽  
Rafael do Monte Paixão ◽  
Carlos Gustavo Resque dos Santos ◽  
Bianchi Serique Meiguins

This paper presents UXmood, a tool that provides quantitative and qualitative information to assist researchers and practitioners in the evaluation of user experience and usability. The tool combines data from video, audio, interaction logs, and eye trackers, presenting them in a configurable web dashboard. UXmood works analogously to a media player, in which evaluators can review the entire user interaction process, fast-forwarding irrelevant sections and rewinding specific interactions to replay them if necessary. In addition, sentiment analysis techniques are applied to video, audio, and transcribed text content to obtain insights into the user experience of participants. The main motivations for developing UXmood are to support the joint analysis of usability and user experience, to use sentiment analysis to support qualitative analysis, to synchronize different types of data in the same dashboard, and to allow the analysis of user interactions from any device with a web browser. We conducted a user study to assess the data communication efficiency of the visualizations, which provided insights into how to improve the dashboard.
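A minimal sketch of how such a dashboard back end might align multimodal records on a shared timeline and attach a toy lexicon-based sentiment score to transcribed speech; the `Event` data model, the lexicon, and the field names are hypothetical, not UXmood's API.

```python
# Illustrative sketch (hypothetical data model, not the UXmood implementation):
# align multimodal records on a shared timeline and attach a toy lexicon-based
# sentiment score to transcribed speech, as a dashboard back end might do.
from dataclasses import dataclass

POSITIVE = {"easy", "clear", "nice", "good"}
NEGATIVE = {"confusing", "slow", "lost", "bad"}

def sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

@dataclass
class Event:
    t: float          # seconds from session start
    source: str       # "eye", "log", or "speech"
    payload: str

events = [
    Event(2.1, "log", "click:search_button"),
    Event(2.4, "eye", "fixation:results_list"),
    Event(3.0, "speech", "this is a bit confusing"),
    Event(7.5, "speech", "ok now it is clear and easy"),
]

# "Seek" to a time window, as a media-player-style review would.
def window(events, start, end):
    return [e for e in sorted(events, key=lambda e: e.t) if start <= e.t <= end]

for e in window(events, 0.0, 5.0):
    extra = f" (sentiment={sentiment(e.payload):+d})" if e.source == "speech" else ""
    print(f"{e.t:5.1f}s  {e.source:6s} {e.payload}{extra}")
```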


2018 ◽  
Vol 2 (2) ◽  
pp. 70-82 ◽  
Author(s):  
Binglu Wang ◽  
Yi Bu ◽  
Win-bin Huang

In the field of scientometrics, the principal purpose of author co-citation analysis (ACA) is to map knowledge domains by quantifying the relationship between co-cited author pairs. However, traditional ACA has been criticized because its input, a simple count of authors' co-citation frequencies, is insufficiently informative. To address this issue, this paper introduces a new method, Document- and Keyword-Based Author Co-Citation Analysis (DKACA), which reconstructs the raw co-citation matrices using document-unit counts and the keywords of references. Building on traditional ACA, DKACA counts co-citation pairs by document units instead of authors from a global network perspective. Moreover, by incorporating keywords from cited papers, DKACA captures the semantic similarity between co-cited papers. For validation, we used network visualization and MDS measurement to evaluate the effectiveness of DKACA. Results suggest that the proposed DKACA method not only reveals previously unknown insights but also improves the performance and accuracy of knowledge domain mapping, providing a new basis for further studies.
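The sketch below illustrates the general idea with assumed data structures and an illustrative weighting, not necessarily DKACA's exact formulation: co-citations are counted per citing document, and each co-cited author pair is weighted by the keyword overlap (Jaccard similarity) of the cited references.

```python
# Minimal sketch of the general idea (assumed data and an illustrative weighting,
# not necessarily DKACA's exact formulation): count author co-citations per citing
# document and weight each co-cited pair by keyword overlap (Jaccard similarity).
from itertools import combinations
from collections import defaultdict

# Each citing document lists cited references as (author, keywords-of-reference).
citing_docs = [
    [("White", {"scientometrics", "mapping"}), ("Small", {"co-citation", "mapping"})],
    [("White", {"scientometrics", "ACA"}), ("Small", {"co-citation"}), ("Chen", {"visualization"})],
    [("Chen", {"visualization", "mapping"}), ("Small", {"mapping"})],
]

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

cocitation = defaultdict(float)
for doc in citing_docs:
    # Document-unit counting: each citing document contributes once per co-cited pair here.
    for (a1, k1), (a2, k2) in combinations(doc, 2):
        pair = tuple(sorted((a1, a2)))
        cocitation[pair] += 1.0 + jaccard(k1, k2)  # keyword-aware weight

for pair, weight in sorted(cocitation.items(), key=lambda x: -x[1]):
    print(pair, round(weight, 2))
```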


Author(s):  
Bernardo Breve ◽  
Stefano Cirillo ◽  
Mariano Cuofano ◽  
Domenico Desiato

Gestural expressiveness plays a fundamental role in interaction with people, environments, animals, things, and so on. Thus, several emerging application domains could exploit the interpretation of movements to support their critical design processes. To this end, new ways of expressing people's perceptions, such as music, could aid their interpretation. In this paper, we investigate the user's perception associated with the interpretation of sounds, highlighting how sounds can be exploited to help users adapt to a specific environment. We present a novel algorithm for mapping human movements into MIDI music. The algorithm has been implemented in a system that integrates a module for real-time movement tracking with a sample-based synthesizer that uses different types of filters to modulate frequencies. The system has been evaluated through a user study in which several users participated in a room experience, yielding significant results about their perceptions of the environment in which they were immersed.
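As a rough illustration of movement-to-MIDI mapping, not the paper's algorithm: the sketch maps a normalized hand height to a pitch on a pentatonic scale and movement speed to MIDI velocity; the scale, value ranges, and the `movement_to_midi` helper are assumptions.

```python
# Illustrative sketch (assumed mapping, not the paper's algorithm): convert simple
# movement features into MIDI note numbers and velocities on a pentatonic scale.
PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within an octave

def movement_to_midi(height: float, speed: float, base_note: int = 48) -> tuple[int, int]:
    """height in [0, 1] selects pitch; speed in [0, 1] selects velocity (loudness)."""
    height = min(max(height, 0.0), 1.0)
    speed = min(max(speed, 0.0), 1.0)
    step = int(height * (len(PENTATONIC) * 3 - 1))   # span three octaves
    octave, degree = divmod(step, len(PENTATONIC))
    note = base_note + 12 * octave + PENTATONIC[degree]
    velocity = int(30 + speed * 97)                  # MIDI velocity range 1-127
    return note, velocity

# Example: a hand rising quickly produces a higher, louder note.
for height, speed in [(0.1, 0.2), (0.5, 0.5), (0.9, 0.9)]:
    print(movement_to_midi(height, speed))
```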


2021 ◽  
Vol 10 (2) ◽  
pp. 97
Author(s):  
Jaeyoung Song ◽  
Kiyun Yu

This paper presents a new framework to classify floor plan elements and represent them in a vector format. Unlike existing approaches, which use image-based learning frameworks as the first step to segment the image pixels, we first convert the input floor plan image into vector data and then apply a graph neural network. Our framework consists of three steps: (1) image pre-processing and vectorization of the floor plan image; (2) conversion to a region adjacency graph; and (3) application of a graph neural network to the converted floor plan graphs. Our approach captures different types of indoor elements, including basic elements such as walls, doors, and symbols, as well as spatial elements such as rooms and corridors. The proposed method can also detect element shapes. Experimental results show that our framework classifies indoor elements with an F1 score of 95%, with scale and rotation invariance. Furthermore, we propose a new graph neural network model that takes the distance between nodes into account, a valuable feature of spatial network data.
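A minimal sketch of step (2), using synthetic regions rather than the paper's pipeline: vectorized regions become nodes of a region adjacency graph, and centroid distances are attached as edge features that a distance-aware GNN could then consume.

```python
# Minimal sketch (synthetic regions, not the paper's pipeline): build a region
# adjacency graph from vectorized floor-plan regions and attach centroid distances
# as edge features, which a distance-aware GNN could then consume.
import math
import networkx as nx

# Hypothetical vectorized regions: id, label, centroid (x, y), and adjacent region ids.
regions = {
    0: {"label": "wall",     "centroid": (0.0, 5.0), "adjacent": [1, 2]},
    1: {"label": "room",     "centroid": (3.0, 3.0), "adjacent": [0, 2, 3]},
    2: {"label": "corridor", "centroid": (6.0, 5.0), "adjacent": [0, 1, 3]},
    3: {"label": "door",     "centroid": (5.0, 4.0), "adjacent": [1, 2]},
}

G = nx.Graph()
for rid, r in regions.items():
    G.add_node(rid, label=r["label"], centroid=r["centroid"])

for rid, r in regions.items():
    for nid in r["adjacent"]:
        (x1, y1), (x2, y2) = r["centroid"], regions[nid]["centroid"]
        G.add_edge(rid, nid, distance=math.hypot(x2 - x1, y2 - y1))

for u, v, d in G.edges(data=True):
    print(f"{regions[u]['label']} -- {regions[v]['label']}: distance {d['distance']:.2f}")
```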


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 721
Author(s):  
Ao Feng ◽  
Hongxiang Li ◽  
Zixi Liu ◽  
Yuanjiang Luo ◽  
Haibo Pu ◽  
...  

The thousand-grain weight is an index of size, fullness, and quality in crop seed detection and an important basis for field yield prediction. Determining the thousand-grain weight of rice requires accurately counting rice grains. We collected a total of 5670 images of three different types of rice seeds of different qualities to construct a model. Because different types of rice have different shapes, this study convolved an adaptive Gaussian kernel with the rice coordinate function to obtain a more accurate density map, which served as an important basis for the subsequent experiments. A Multi-Column Convolutional Neural Network (MCNN) was used to extract features of rice at different scales, and these features were fused by a fusion network to learn the mapping from the original image features to the density map features. A prior step for estimating the density level of the image was added to the original algorithm, which weakened the effect of grain adhesion on the counting results. Extensive comparison experiments show that the proposed method is more accurate than the original MCNN algorithm.
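A minimal sketch of a geometry-adaptive Gaussian density map, in the spirit of the description above but not the paper's exact code: each grain's kernel width is scaled by the mean distance to its nearest neighbours, so the resulting map integrates to roughly the grain count (the `k`, `beta`, and image-size values are assumptions).

```python
# Minimal sketch (not the paper's exact code): build a geometry-adaptive Gaussian
# density map from grain coordinates, with each kernel's sigma scaled by the mean
# distance to the k nearest neighbouring grains; the map sums to ~the grain count.
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import gaussian_filter

def adaptive_density_map(points, shape, k=3, beta=0.3):
    """points: (N, 2) array of (row, col) grain centres; shape: (H, W) of the image."""
    density = np.zeros(shape, dtype=np.float64)
    if len(points) == 0:
        return density
    tree = cKDTree(points)
    # Distances to the k nearest neighbours (excluding the point itself).
    dists, _ = tree.query(points, k=min(k + 1, len(points)))
    for p, d in zip(points, dists):
        sigma = beta * np.mean(d[1:]) if len(d) > 1 else 5.0
        delta = np.zeros(shape, dtype=np.float64)
        r, c = int(round(p[0])), int(round(p[1]))
        delta[min(max(r, 0), shape[0] - 1), min(max(c, 0), shape[1] - 1)] = 1.0
        density += gaussian_filter(delta, sigma)
    return density

rng = np.random.default_rng(2)
grains = rng.uniform(0, 128, size=(40, 2))
dmap = adaptive_density_map(grains, shape=(128, 128))
print(f"estimated count from density map: {dmap.sum():.2f}")  # close to 40
```

A density map built this way is the regression target that a counting network such as an MCNN is trained to reproduce.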


2017 ◽  
Vol 11 (01) ◽  
pp. 65-84 ◽  
Author(s):  
Denny Stohr ◽  
Iva Toteva ◽  
Stefan Wilk ◽  
Wolfgang Effelsberg ◽  
Ralf Steinmetz

Instant sharing of user-generated video recordings has become a widely used service on platforms such as YouNow, Facebook.Live or uStream. Yet providing such services with a high QoE for viewers is still challenging, given that mobile upload speed and capacity are limited and that the recording quality on mobile devices greatly depends on the users' capabilities. One proposed solution to these issues is video composition, which switches between multiple recorded video streams, selecting the best source at any given time to compose a live video with better overall quality for viewers. Previous approaches have required an in-depth visual analysis of the video streams, which usually limited the scalability of these systems. In contrast, our work performs stream selection solely on the basis of context information, derived from video- and service-quality aspects of sensor and network measurements. The implemented monitoring service for context-aware upload of video streams is evaluated under different network conditions and with diverse user behavior, including camera shaking and user mobility. We evaluated the system's performance in two studies. First, a user study shows that our proposed system achieves both more efficient video upload and better QoE for viewers. Second, by examining the overall delay of switching between streams based on sensor readings, we show that a composition view change can be achieved in approximately four seconds.
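As a rough illustration of context-only stream selection, not the paper's selection logic: each candidate stream is scored from measured bandwidth, sensor-derived shake, and resolution; the metrics, normalisation constants, and weights below are assumptions.

```python
# Illustrative sketch (assumed metrics and weights, not the paper's selection logic):
# rank candidate streams from context information only, with no visual analysis.
from dataclasses import dataclass

@dataclass
class StreamContext:
    name: str
    bandwidth_kbps: float   # measured uplink throughput
    shake: float            # accelerometer-derived shake level in [0, 1]
    resolution_p: int       # vertical resolution, e.g. 720

def score(ctx: StreamContext) -> float:
    # Normalise each aspect to [0, 1]; the weights below are illustrative.
    bw = min(ctx.bandwidth_kbps / 4000.0, 1.0)
    stability = 1.0 - ctx.shake
    res = min(ctx.resolution_p / 1080.0, 1.0)
    return 0.4 * bw + 0.35 * stability + 0.25 * res

streams = [
    StreamContext("phone_A", bandwidth_kbps=3500, shake=0.1, resolution_p=1080),
    StreamContext("phone_B", bandwidth_kbps=1200, shake=0.05, resolution_p=720),
    StreamContext("phone_C", bandwidth_kbps=4200, shake=0.6, resolution_p=1080),
]

best = max(streams, key=score)
print(f"selected source: {best.name} (score {score(best):.2f})")
```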


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
V. V. Singh ◽  
Abubakkar Idris Mohhammad ◽  
Kabiru Hamisu Ibrahim ◽  
Ibrahim Yusuf

Purpose – This paper analyzes a complex system consisting of n identical units in a k-out-of-n: G configuration via a new method that has not been studied by previous researchers. The computed results give stronger support for repairable-system performability analysis.
Design/methodology/approach – The authors analyze a complex system consisting of n identical units in a k-out-of-n: G configuration via a new method that has not been studied by previous researchers. The supplementary variable technique is employed to analyze the performance of the system.
Findings – Reliability measures have been computed for different types of configuration; the results generalize those for purely series and purely parallel configurations.
Research limitations/implications – This research may benefit industrial systems in which a k-out-of-n-type configuration exists.
Practical implications – Uncertain, as this is a theoretical assessment.
Social implications – This research may not have social implications.
Originality/value – This work is solely the authors' and has not been submitted to any other journal.
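For background, the snippet below only illustrates the static k-out-of-n:G reliability formula for independent, identical units and how it reduces to the purely series (k = n) and purely parallel (k = 1) cases; it does not reproduce the paper's supplementary variable analysis of the repairable case.

```python
# Background illustration only: the static reliability of a k-out-of-n:G system of
# independent, identical units (the paper itself analyses the repairable case with
# the supplementary variable technique, which this snippet does not reproduce).
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """System works if at least k of the n units work; p is the unit reliability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p, n = 0.9, 5
print(f"2-out-of-5:G : {k_out_of_n_reliability(2, n, p):.4f}")
print(f"series  (k=n): {k_out_of_n_reliability(n, n, p):.4f}")   # equals p**n
print(f"parallel(k=1): {k_out_of_n_reliability(1, n, p):.4f}")   # equals 1-(1-p)**n
```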


2021 ◽  
Vol 16 (1) ◽  
pp. 21
Author(s):  
Chung-Yi Hou ◽  
Matthew S. Mayernik

For research data repositories, web interfaces are usually the primary, if not the only, way that data users interact with repository systems. Data users often search, discover, understand, access, and sometimes use data directly through repository web interfaces. Given that sub-par user interfaces can reduce users' ability to locate, obtain, and use data, it is important to consider how repositories' web interfaces can be evaluated and improved in order to ensure useful and successful user interactions. This paper discusses how usability assessment techniques are being applied to improve the functioning of data repository interfaces at the National Center for Atmospheric Research (NCAR). At NCAR, a new suite of data system tools is being developed, collectively called the NCAR Digital Asset Services Hub (DASH). Usability evaluation techniques have been used throughout the NCAR DASH design and implementation cycles in order to ensure that the systems work well together for the intended user base. By applying user studies, paper prototyping, competitive analysis, journey mapping, and heuristic evaluation, the NCAR DASH Search and Repository experiences provide examples of how data systems can benefit from usability principles and techniques. Integrating usability principles and techniques into repository system design and implementation workflows helps to optimize the systems' overall user experience.

