Interactive Feature
Recently Published Documents


TOTAL DOCUMENTS: 76 (FIVE YEARS: 19)

H-INDEX: 9 (FIVE YEARS: 2)

2021 ◽  
Vol 12 ◽  
Author(s):  
Weiwei Cai ◽  
Ming Gao ◽  
Runmin Liu ◽  
Jie Mao

Understanding human emotions and psychology is a critical step toward realizing artificial intelligence, and correct recognition of facial expressions is essential for judging emotions. However, the differences caused by changes in facial expression are very subtle, and different expression features are hard to distinguish, making it difficult for computers to recognize human facial emotions accurately. Therefore, this paper proposes a novel multi-layer interactive feature fusion network model with an angular distance loss. First, a multi-layer, multi-scale module is designed to extract global and local features of facial emotions and to capture feature relationships between different scales, improving the model's ability to discriminate subtle features of facial emotions. Second, a hierarchical interactive feature fusion module is designed to address the loss of useful feature information caused by the layer-by-layer convolution and pooling of convolutional neural networks. In addition, an attention mechanism is applied between convolutional layers at different levels, improving the network's discriminative ability by increasing the saliency of informative features and suppressing irrelevant information. Finally, we use an angular distance loss function to improve the proposed model's inter-class feature separation and intra-class feature clustering, addressing the large intra-class differences and high inter-class similarity in facial emotion recognition. We conducted comparison and ablation experiments on the FER2013 dataset. The results show that the proposed MIFAD-Net outperforms the compared methods by 1.02–4.53% and is strongly competitive.
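The angular-distance idea behind such losses can be sketched in a few lines. The snippet below is a generic ArcFace-style additive angular margin in plain Python, with illustrative margin and scale values; it is not the paper's MIFAD-Net implementation.

```python
import math

def angular_margin_logits(embedding, class_centers, target, margin=0.5, scale=30.0):
    """Additive angular margin: widen the angle to the target class center,
    which forces tighter intra-class clustering and wider inter-class gaps."""
    def cos_sim(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    logits = []
    for i, center in enumerate(class_centers):
        cos = cos_sim(embedding, center)
        if i == target:
            theta = math.acos(max(-1.0, min(1.0, cos)))
            cos = math.cos(theta + margin)  # penalize the target angle
        logits.append(scale * cos)
    return logits

def cross_entropy(logits, target):
    """Softmax cross-entropy over the margin-adjusted logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[target] / sum(exps))
```

Because the margin shrinks the target logit, the loss stays positive until the embedding is well separated from the other class centers even after the angular penalty.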


2021 ◽  
Author(s):  
Jonathon Fleming ◽  
Skylar W Marvel ◽  
Alison A Motsinger-Reif ◽  
David M Reif

Background: Presenting a comprehensive picture of geographic data comprising multiple factors is an inherently integrative undertaking. Visualizing such data in an interactive form is essential for public sharing and geographic information systems (GIS) analysis. The Toxicological Prioritization Index (ToxPi) framework has been used as an integrative model layered atop geospatial data, and its deployment within the dynamic ArcGIS universe would open up powerful new avenues for sophisticated, interactive GIS analysis. Objective: We propose an actively developed suite of software, the ToxPi*GIS Toolkit, for creating, viewing, sharing, and analyzing interactive ToxPi figures in ArcGIS. Methods: The ToxPi*GIS Toolkit is a collection of methods for creating interactive feature layers that contain ToxPi diagrams. It currently includes an ArcGIS Toolbox (ToxPiToolbox.tbx) for drawing geographically located ToxPi diagrams onto a feature layer, a collection of modular Python scripts that create predesigned layer files containing ToxPi feature layers from the command line, and a collection of Python routines for useful data manipulation and preprocessing. We present workflows documenting ToxPi feature layer creation, sharing, and embedding for both novice and advanced users looking for additional customizability. Results: Map visualizations created with the ToxPi*GIS Toolkit can be made freely available on public URLs, allowing users without ArcGIS Pro access or expertise to view and interact with them. Novice users with ArcGIS Pro access can create de novo custom maps, and advanced users can exploit additional customization options. The ArcGIS Toolbox provides a simple means for generating ToxPi feature layers. We illustrate its usage with current COVID-19 data to compare drivers of pandemic vulnerability in counties across the United States. 
Significance: Development of new features, which will advance the interests of the scientific community in many fields, is ongoing for the ToxPi*GIS Toolkit, which can be accessed from www.toxpi.org.
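For readers unfamiliar with the underlying model, a ToxPi-style overall score is, roughly, a weighted combination of normalized data slices per location. The minimal plain-Python sketch below, with made-up example numbers, illustrates that idea; it is not the Toolkit's code.

```python
def normalize(values):
    """Min-max normalize one slice's values across all locations to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def toxpi_scores(records, weights):
    """Overall score per location: weighted mean of normalized slice values.
    `records` has one row per location and one column per data slice."""
    columns = [normalize(col) for col in zip(*records)]
    total = sum(weights)
    return [sum(w * v for w, v in zip(weights, row)) / total
            for row in zip(*columns)]
```

Each resulting score falls in [0, 1], which is what makes the per-county diagrams directly comparable when drawn onto a feature layer.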


2021 ◽  
Vol 8 (8) ◽  
pp. 58-62
Author(s):  
Yaser Mohammad Mohammad Al Sawy ◽  
Hisham Saad Zaghloul

The study aimed to link geographic information systems (GIS) with library and information science. Spatial and geographic information is processed in machine-readable cataloging (MARC) fields and represented under Resource Description and Access (RDA) as an internationally agreed scheme, and such information interests a wide range of users across many fields. Given the lack of previous studies on this topic, careful planning is needed to equip libraries and information centers at a high level, so that they can handle information sources and represent geographic data correctly through GIS. Applying the standards of an analytical and applied approach, the study reviewed all the fields suitable for representing data geographically, together with their appropriate subfields. It concluded that field 651 in particular can be used, with its hyperlink feature activated, to display links to drawings, maps, data, and associated vital statistics. Field 651 thus becomes an interactive feature for displaying bibliographic, geographic, and informational data, linking to pages on the Internet and to full-text as well as abstract databases, and this innovative addition makes field 651 both a descriptive field and a tool for geographic and informational representation at the same time.
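As a rough illustration of the proposal, a 651 field carrying a hyperlink might be composed as below. The subfield layout ($a geographic heading, $2 thesaurus source, $1 Real World Object URI) and the helper function are illustrative assumptions, not a prescribed encoding.

```python
def marc_651(place, uri=None, source=None):
    """Compose a display line for MARC field 651
    (Subject Added Entry - Geographic Name).
    $a carries the heading; $2 the source vocabulary (second
    indicator 7); $1 a Real World Object URI for the hyperlink."""
    ind2 = "7" if source else "4"  # 4 = source not specified
    parts = ["651 _" + ind2, "$a " + place + "."]
    if source:
        parts.append("$2 " + source)
    if uri:
        parts.append("$1 " + uri)
    return " ".join(parts)
```

Calling `marc_651("Egypt", uri="http://www.wikidata.org/entity/Q79", source="lcsh")` yields a single line that a display layer could render with the URI as a clickable link to maps and statistics, in the spirit of the interactive use of field 651 described above.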


Author(s):  
Norman Walsh ◽  
C. M. Sperberg-McQueen

One of the most obvious differences between documents physically printed on pages of paper and documents displayed on electronic devices is that the latter can be interactive in ways that the former cannot. More than 50 years ago, this is what convinced Ted Nelson and others that, used well, computers would dramatically change our relationship with text. What kinds of interactivity are possible, and to what extent interactivity adds value to a document, are challenging questions that require careful analysis. Deciding that some specific interactive feature would add value immediately raises a new challenge: how is that feature going to be realized? In this paper, we look at three different technologies that can be used to add interactivity to a document presented on the web: “plain old JavaScript”, Saxon-JS, and XForms. We examine a specific feature and compare similar implementations across these three platforms.


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253442
Author(s):  
JohnMark Taylor ◽  
Yaoda Xu

To interact with real-world objects, any effective visual system must jointly code the unique features defining each object. Despite decades of neuroscience research, we still lack a firm grasp on how the primate brain binds visual features. Here we apply a novel network-based stimulus-rich representational similarity approach to study color and form binding in five convolutional neural networks (CNNs) with varying architecture, depth, and presence/absence of recurrent processing. All CNNs showed near-orthogonal color and form processing in early layers, but increasingly interactive feature coding in higher layers, with this effect being much stronger for networks trained for object classification than untrained networks. These results characterize for the first time how multiple basic visual features are coded together in CNNs. The approach developed here can be easily implemented to characterize whether a similar coding scheme may serve as a viable solution to the binding problem in the primate brain.
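The representational similarity approach can be sketched in a few lines: build a dissimilarity matrix over stimulus conditions from each layer's activation patterns, then correlate those matrices across layers or networks. The plain-Python helpers below are a generic illustration of the technique, not the authors' pipeline.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rdm(patterns):
    """Representational dissimilarity matrix:
    1 - Pearson r between each pair of condition patterns."""
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (second-order similarity)."""
    def flat(m):
        return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]
    return pearson(flat(rdm_a), flat(rdm_b))
```

Comparing an RDM built from color-varying stimuli against one built from form-varying stimuli is one way to quantify how orthogonal or interactive the two feature codes are at a given layer.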


2021 ◽  
Vol 8 ◽  
Author(s):  
Bo Wang ◽  
Jingyi Yang ◽  
Hong Peng ◽  
Jingyang Ai ◽  
Lihua An ◽  
...  

Automatic segmentation of brain tumors from multi-modality magnetic resonance (MR) image data has the potential to enable preoperative planning and intraoperative volume measurement. Recent advances in deep convolutional neural networks have opened up an opportunity to segment brain tumor areas end-to-end. However, the medical image data available for brain tumor segmentation are relatively scarce and the appearance of brain tumors varies widely, so it is difficult to find a learnable pattern that directly describes tumor regions. In this paper, we propose a novel cross-modality interactive feature learning framework to segment brain tumors from multi-modality data. The core idea is that multi-modality MR data contain rich patterns of normal brain regions, which can be easily captured and potentially used to detect non-normal regions, i.e., brain tumor regions. The proposed framework consists of two modules: a cross-modality feature extraction module and an attention-guided feature fusion module, which explore the rich patterns across modalities and guide the interaction and fusion of the rich features from the different modalities. Comprehensive experiments conducted on the BraTS 2018 benchmark show that the proposed cross-modality feature learning framework effectively improves brain tumor segmentation performance compared with baseline and state-of-the-art methods.
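The attention-guided fusion idea can be illustrated with a minimal sketch: per-modality feature vectors are combined using softmax attention weights. The scalar gate below (mean activation per modality) is a stand-in assumption for the learned attention module in the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(modality_feats):
    """Fuse per-modality feature vectors with scalar attention weights.
    The gate here scores each modality by its mean activation; a trained
    network would learn this scoring instead."""
    scores = [sum(f) / len(f) for f in modality_feats]
    weights = softmax(scores)
    dim = len(modality_feats[0])
    return [sum(w * f[d] for w, f in zip(weights, modality_feats))
            for d in range(dim)]
```

The fused vector is a convex combination of the modality features, so a modality with stronger responses (e.g., a contrast-enhanced MR sequence near a tumor) dominates the fused representation without the others being discarded.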


Author(s):  
Stephanie Patrick

Across four seasons of her hit Netflix comedy, Kimmy Schmidt emerged as a strong, female survivor of sexual violence. However, Unbreakable Kimmy Schmidt would often walk a fine line between post-feminist and feminist understandings of rape and gendered violence, while reinforcing harmful racial tropes rooted in ‘white feminism’. In 2020, Netflix brought Kimmy back for her ‘biggest adventure yet’ in Kimmy vs the Reverend, but, this time, the viewer had the power, as the tagline read, to ‘decide what happens’, via Netflix’s interactive feature. The article argues that Netflix’s interactivity is employed in potentially transformative ways, providing a call-to-action to fans and implicating the audience as both spectators of and witnesses to the injustices of systemic violence against women. However, the 2020 film's investment in, and deployment of, white feminist politics mirrors a broader media erasure of the experiences of racialised women, while closing down the interactive potential of identification across difference.

