Pose Generation for Social Robots in Conversational Group Formations

2022 ◽  
Vol 8 ◽  
Author(s):  
Marynel Vázquez ◽  
Alexander Lew ◽  
Eden Gorevoy ◽  
Joe Connolly

We study two approaches for predicting an appropriate pose for a robot to take part in group formations typical of social human conversations, subject to the physical layout of the surrounding environment. One method is model-based and explicitly encodes key geometric aspects of conversational formations. The other is data-driven: it implicitly models key properties of spatial arrangements using graph neural networks and an adversarial training regimen. We evaluate the proposed approaches through quantitative metrics designed for this problem domain and via a human experiment. Our results suggest that the proposed methods are effective at reasoning about the environment layout and conversational group formations. They can also be used repeatedly to simulate conversational spatial arrangements, despite being designed to output a single pose at a time. However, the methods showed different strengths. For example, the geometric approach was more successful at avoiding poses in non-free areas of the environment, but the data-driven method was better at capturing the variability of conversational spatial formations. We discuss ways to address open challenges for the pose generation problem and other interesting avenues for future work.
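As a hedged illustration of the model-based route, the circular geometry of conversational F-formations can be encoded directly: participants (and the robot) stand on a circle around the group's center, oriented inward. The fixed radius and the function name below are illustrative assumptions, not the paper's implementation.

```python
import math

def candidate_pose(group_center, radius, angle):
    """Place a robot on a circle of the given radius around the group's
    center (a simple geometric model of conversational F-formations),
    oriented to face that center. Returns (x, y, heading)."""
    cx, cy = group_center
    x = cx + radius * math.cos(angle)
    y = cy + radius * math.sin(angle)
    heading = math.atan2(cy - y, cx - x)  # face the group center
    return x, y, heading
```

A layout-aware variant would additionally reject candidates whose (x, y) falls in non-free space, which is the failure mode the geometric approach handled best above.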

2020 ◽  
Vol 53 (5) ◽  
pp. 420-425
Author(s):  
Hu Tian ◽  
Bowei Ye ◽  
Xiaolong Zheng ◽  
Desheng Dash Wu

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Qi Wang ◽  
Longfei Zhang

Directly manipulating the atomic structure to achieve a specific property is a long-standing pursuit in materials science. However, hindered by the disordered, non-prototypical glass structure and the complex interplay between structure and property, such inverse design is dauntingly hard for glasses. Here, combining two cutting-edge techniques, graph neural networks and swap Monte Carlo, we develop a data-driven, property-oriented inverse design route that improves the plastic resistance of Cu-Zr metallic glasses in a controllable way. Swap Monte Carlo, as a sampler, effectively explores the glass landscape, and graph neural networks, with high regression accuracy in predicting plastic resistance, serve as a decider to guide the search in configuration space. Via an unconventional strengthening mechanism, a geometrically ultra-stable yet energetically meta-stable state is unraveled, contrary to the common belief that the higher the energy, the lower the plastic resistance. This demonstrates a vast configuration space that is easily overlooked by conventional atomistic simulations. The data-driven techniques, structural search methods and optimization algorithms consolidate to form a toolbox, paving a new way to the design of glassy materials.
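The sampler/decider loop described above can be sketched as follows. The GNN decider is stood in for by an arbitrary scoring callable, and acceptance is simplified to greedy improvement; the actual swap Monte Carlo sampler uses a temperature-dependent acceptance rule, so this is only a structural sketch.

```python
import random

def swap_search(config, predict_resistance, n_steps=1000, seed=0):
    """Property-oriented search sketch: propose swaps of two atoms
    (the swap Monte Carlo sampler) and accept a swap only if the
    predictor (the GNN 'decider') reports improved plastic resistance.
    'config' is a list of species labels; names are illustrative."""
    rng = random.Random(seed)
    best = list(config)
    best_score = predict_resistance(best)
    for _ in range(n_steps):
        i, j = rng.sample(range(len(best)), 2)
        if best[i] == best[j]:
            continue  # swapping identical species changes nothing
        trial = list(best)
        trial[i], trial[j] = trial[j], trial[i]
        score = predict_resistance(trial)
        if score > best_score:  # greedy stand-in for MC acceptance
            best, best_score = trial, score
    return best, best_score
```

Because swaps preserve composition, the search explores only chemically valid configurations, which is what makes the sampler effective on the glass landscape.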


2020 ◽  
Author(s):  
Santiago Papini ◽  
Mikael Rubin ◽  
Michael J Telch ◽  
Jasper A. J. Smits

Background. The application of psychopathological symptom networks requires reconciliation of the observed cross-sample heterogeneity. We leveraged the largest sample yet used in a PTSD network analysis (N = 28,594) to examine the impact of criteria-based and data-driven sampling approaches on the heterogeneity and interpretability of networks. Methods. Severity and diagnostic criteria identified four overlapping subsamples, and cluster analysis identified three distinct data-derived profiles. Networks estimated on each subsample were compared to a respective benchmark network at the symptom-relation level by calculating sensitivity, specificity, correlation, and density of the edges. Negative edges were assessed for Berkson’s bias, a source of error that can be induced by thresholding samples on severity. Results. Criteria-based networks showed reduced sensitivity, specificity, and density, but edges remained highly correlated, and a meaningfully higher proportion of negative edges was not observed relative to the benchmark network of all cases. Among the data-derived profile networks, the Low Severity network had the highest proportion of negative edges not present in the benchmark network of symptomatic cases. The High Severity profile also showed a higher proportion of negative edges, whereas the Medium Severity profile did not. Conclusion. Although the networks showed differences, Berkson’s bias did not appear to be a meaningful source of systematic error. These results can guide expectations about the generalizability of symptom networks across samples that vary in their ranges of severity. Future work should continue to explore whether network heterogeneity reflects meaningful and interpretable differences in the symptom relations from which networks are composed.
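The edge-level comparison against a benchmark network (sensitivity, specificity, correlation, and density of edges) can be sketched as below. Treating an edge as "present" when its weight is non-negligible is an assumption; the study's networks come from a regularized estimator where absent edges are exactly zero.

```python
def edge_metrics(est, bench, tol=1e-8):
    """Compare an estimated symptom network to a benchmark at the
    edge level. Inputs are symmetric weighted adjacency matrices
    (lists of lists); only the upper triangle of unique edges is used."""
    n = len(bench)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    e = [est[i][j] for i, j in pairs]
    b = [bench[i][j] for i, j in pairs]
    e_on = [abs(x) > tol for x in e]
    b_on = [abs(x) > tol for x in b]
    tp = sum(1 for a, c in zip(e_on, b_on) if a and c)
    tn = sum(1 for a, c in zip(e_on, b_on) if not a and not c)
    # Pearson correlation of edge weights, computed by hand
    me, mb = sum(e) / len(e), sum(b) / len(b)
    cov = sum((x - me) * (y - mb) for x, y in zip(e, b))
    sd = (sum((x - me) ** 2 for x in e) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return {
        "sensitivity": tp / max(sum(b_on), 1),
        "specificity": tn / max(len(b) - sum(b_on), 1),
        "correlation": cov / sd if sd else 0.0,
        "density": sum(e_on) / len(e),
    }
```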


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8009
Author(s):  
Abdulmajid Murad ◽  
Frank Alexander Kraemer ◽  
Kerstin Bach ◽  
Gavin Taylor

Data-driven forecasts of air quality have recently achieved more accurate short-term predictions. However, despite their success, most current data-driven solutions lack proper quantification of model uncertainty to communicate how much to trust the forecasts. Recently, several practical tools to estimate uncertainty have been developed in probabilistic deep learning. However, there have not been empirical applications and extensive comparisons of these tools in the domain of air quality forecasts. Therefore, this work applies state-of-the-art techniques of uncertainty quantification in a real-world setting of air quality forecasts. Through extensive experiments, we describe training probabilistic models and evaluate their predictive uncertainties based on empirical performance, reliability of confidence estimates, and practical applicability. We also propose improving these models using “free” adversarial training and exploiting the temporal and spatial correlation inherent in air quality data. Our experiments demonstrate that the proposed models perform better than previous work in quantifying uncertainty in data-driven air quality forecasts. Overall, Bayesian neural networks provide a more reliable uncertainty estimate but can be challenging to implement and scale. Other scalable methods, such as deep ensembles, Monte Carlo (MC) dropout, and stochastic weight averaging-Gaussian (SWAG), can perform well if applied correctly, but with different tradeoffs and slight variations in performance metrics. Finally, our results show the practical impact of uncertainty estimation and demonstrate that, indeed, probabilistic models are more suitable for making informed decisions.
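Of the scalable methods compared, deep ensembles are the simplest to sketch: the ensemble mean is the forecast and the spread across members is the uncertainty estimate. The callables below stand in for independently trained networks; this is a structural illustration, not the paper's models.

```python
import statistics

def ensemble_forecast(members, x):
    """Deep-ensemble style uncertainty sketch: each member is any
    callable mapping an input to a prediction (in practice, a network
    trained from a different random initialization). Returns the mean
    prediction and the sample standard deviation as the uncertainty."""
    preds = [m(x) for m in members]
    return statistics.mean(preds), statistics.stdev(preds)
```

MC dropout has the same shape: the "members" become repeated stochastic forward passes through one network with dropout left active at test time.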


2021 ◽  
Author(s):  
Yi-Fan Li ◽  
Bo Dong ◽  
Latifur Khan ◽  
Bhavani Thuraisingham ◽  
Patrick T. Brandt ◽  
...  

2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Corene Matyas ◽  
Jingyin Tang

Although tropical cyclone (TC) rain fields assume varying spatial configurations, many studies only use areal coverage to compare TCs. To provide additional spatial information, this study calculates metrics of closure, or the tangential completeness of reflectivity regions surrounding the circulation center, and dispersion, or the spread of reflectivity outwards from the storm center. Two hurricanes that encountered different conditions after landfall are compared. Humberto (2007) experienced rapid intensification (RI), stronger vertical wind shear, and more moisture than Jeanne (2004), which was more intense, weakened gradually, and became extratropical. A GIS framework was used to convert radar reflectivity regions into polygons and measure their area, closure, and dispersion. Closure corresponded most closely to storm intensity, as the eye became exposed when both TCs weakened to tropical storm intensity. Dispersion increased by 10 km·hr⁻¹ as both TCs developed precipitation along frontal boundaries. As closure tended to change earlier than dispersion and area, closure may be most sensitive to subtle changes in environmental conditions, particularly as the storm’s core experiences the entrainment of dry air and erodes. Displacement provided a combined radial and tangential component to the location of the rainfall regions to confirm placement along the frontal boundaries. Examining area alone cannot reveal these patterns. The spatial metrics reveal changes in TC structure, such as the lag between onset of RI and maximum closure, which should be generalizable to TCs experiencing similar conditions. Future work will calculate these metrics for additional TCs to quantify structural changes in response to their surrounding environment.
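A minimal sketch of the closure idea: partition the angles around the circulation center into sectors and report the fraction of sectors containing at least one reflectivity point. The sector count and the point-based representation are illustrative simplifications; the study works with GIS polygons.

```python
import math

def closure(center, echo_points, n_sectors=36):
    """Closure sketch: the fraction of angular sectors around the
    storm center that contain reflectivity, i.e. how tangentially
    complete the rain field is (1.0 = fully closed, eye surrounded)."""
    cx, cy = center
    width = 2 * math.pi / n_sectors
    occupied = set()
    for x, y in echo_points:
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        occupied.add(int(theta / width))
    return len(occupied) / n_sectors
```

Dispersion would be the complementary radial statistic: a mean (or quantile) of the distances from the center to the same reflectivity regions.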


2020 ◽  
Vol 34 (04) ◽  
pp. 3438-3445 ◽  
Author(s):  
Deli Chen ◽  
Yankai Lin ◽  
Wei Li ◽  
Peng Li ◽  
Jie Zhou ◽  
...  

Graph Neural Networks (GNNs) have achieved promising performance on a wide range of graph-based tasks. Despite their success, one severe limitation of GNNs is the over-smoothing issue (indistinguishable representations of nodes in different classes). In this work, we present a systematic and quantitative study of the over-smoothing issue in GNNs. First, we introduce two quantitative metrics, MAD and MADGap, to measure the smoothness and over-smoothness of graph node representations, respectively. Then, we verify that smoothing is intrinsic to GNNs and that the critical factor leading to over-smoothness is the low information-to-noise ratio of the messages received by the nodes, which is partially determined by the graph topology. Finally, we propose two methods to alleviate the over-smoothing issue from a topological view: (1) MADReg, which adds a MADGap-based regularizer to the training objective; and (2) AdaEdge, which optimizes the graph topology based on the model predictions. Extensive experiments on 7 widely used graph datasets with 10 typical GNN models show that the two proposed methods are effective at relieving the over-smoothing issue, thus improving the performance of various GNN models.
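MAD, as described above, is the mean pairwise cosine distance among node representations (MADGap is then the difference between the MAD of remote and neighboring node pairs). A pure-Python sketch of the unmasked quantity; the paper computes masked variants over target node sets:

```python
def mad(reps):
    """Mean Average Distance (MAD): average cosine distance over all
    unique pairs of node representation vectors. Values near 0 mean
    the representations have smoothed together."""
    def cos_dist(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return 1.0 - dot / (nu * nv)
    n = len(reps)
    dists = [cos_dist(reps[i], reps[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)
```

Tracking this value layer by layer is how the smoothing trend becomes visible: it drops toward zero as representations of different classes collapse together.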


Author(s):  
Evaggelia Pitoura ◽  
Kostas Stefanidis ◽  
Georgia Koutrika

We increasingly depend on a variety of data-driven algorithmic systems to assist us in many aspects of life. Search engines and recommender systems, among others, are used as sources of information and to help us make all sorts of decisions, from selecting restaurants and books to choosing friends and careers. This has given rise to important concerns regarding the fairness of such systems. In this work, we aim to present a toolkit of definitions, models and methods used for ensuring fairness in rankings and recommendations. Our objectives are threefold: (a) to provide a solid framework on a novel, quickly evolving and impactful domain, (b) to present related methods and put them into perspective and (c) to highlight open challenges and research paths for future work.
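Many definitions in this space build on position-discounted exposure: items higher in a ranking receive more attention, and fairness notions compare how that attention is distributed across groups. The logarithmic discount below is a common convention, not a formula taken from this survey.

```python
import math

def group_exposure(ranking, groups):
    """Normalized, position-discounted exposure per group. 'ranking'
    is an ordered list of item ids (best first); 'groups' maps each
    id to its group label. Discount is 1/log2(position + 1)."""
    exposure = {}
    for pos, item in enumerate(ranking, start=1):
        g = groups[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(pos + 1)
    total = sum(exposure.values())
    return {g: v / total for g, v in exposure.items()}
```

A fairness constraint might then require each group's exposure share to be close to, say, its share of the candidate pool or of the relevance mass.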


Author(s):  
Kaidi Xu ◽  
Hongge Chen ◽  
Sijia Liu ◽  
Pin-Yu Chen ◽  
Tsui-Wei Weng ◽  
...  

Graph neural networks (GNNs), which apply deep neural networks to graph data, have achieved significant performance on the task of semi-supervised node classification. However, only a few works have addressed the adversarial robustness of GNNs. In this paper, we first present a novel gradient-based attack method that eases the difficulty of tackling discrete graph data. Compared to current adversarial attacks on GNNs, our results show that by perturbing only a small number of edges, through additions and deletions, our optimization-based attack can cause a noticeable decrease in classification performance. Moreover, leveraging our gradient-based attack, we propose the first optimization-based adversarial training for GNNs. Our method yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
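A brute-force stand-in for the attack idea: score each candidate edge flip (addition or deletion) by the increase it causes in a surrogate loss, then apply the best flips within a budget. The paper's contribution is precisely to avoid this enumeration by relaxing the discrete flip variables and using gradients; the names below are illustrative.

```python
def edge_flip_attack(adj, loss_fn, budget=1):
    """Enumerate every single edge flip on an undirected 0/1 adjacency
    matrix, rank flips by the loss increase each would cause, and apply
    the top 'budget' flips in place. Exponentially cheaper gradient
    relaxations replace this enumeration in practice."""
    n = len(adj)
    base = loss_fn(adj)
    flips = []
    for i in range(n):
        for j in range(i + 1, n):
            trial = [row[:] for row in adj]
            trial[i][j] = trial[j][i] = 1 - trial[i][j]
            flips.append((loss_fn(trial) - base, i, j))
    flips.sort(reverse=True)  # largest loss increase first
    for gain, i, j in flips[:budget]:
        adj[i][j] = adj[j][i] = 1 - adj[i][j]
    return adj
```

Adversarial training then retrains the model on graphs perturbed this way, which is what yields the robustness gains reported above.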


2020 ◽  
Author(s):  
Daniel Bennett

We introduce an unobtrusive, computational method for measuring readiness-to-hand and task-engagement during interaction. "Readiness-to-hand" is an influential concept describing fluid, intuitive tool use, with attention on the task rather than the tool; it has long been significant in HCI research, most recently via metrics of tool-embodiment and immersion. We build on prior work in cognitive science which relates readiness-to-hand and task engagement to multifractality: a measure of complexity in behaviour. We conduct a replication study (N=28) and two new experiments (N=44, N=30), which show that multifractality correlates with task-engagement and other features of readiness-to-hand overlooked in previous measures, including familiarity with the task. This is the first evaluation of multifractal measures of behaviour in HCI. Since multifractality occurs in a wide range of behaviours and input signals, we support future work by sharing scripts and data (https://osf.io/2hm9u/), and by introducing a new data-driven approach to parameter selection.

