multiple granularities
Recently Published Documents

TOTAL DOCUMENTS: 52 (five years: 15)
H-INDEX: 9 (five years: 1)

Author(s): Ronghan Li, Lifang Wang, Shengli Wang, Zejun Jiang

The multi-hop machine reading comprehension (MRC) task aims to enable models to answer a compound question by following bridging information across documents. Existing methods that use graph neural networks to represent multiple granularities, such as entities and sentences in documents, update all nodes synchronously, ignoring the fact that multi-hop reasoning follows a logical order across granular levels. In this paper, we introduce an Asynchronous Multi-grained Graph Network (AMGN) for multi-hop MRC. First, we construct a multi-grained graph containing entity and sentence nodes. In particular, we use independent parameters to represent relationship groups defined according to the level of granularity. Second, we propose an asynchronous update mechanism based on multi-grained relationships to mimic human multi-hop reading logic. In addition, we present a question reformulation mechanism that updates the latent representation of the compound question with the updated graph nodes. We evaluate the proposed model on the HotpotQA dataset and achieve highly competitive performance in the distractor setting compared with other published models. Further analysis shows that the asynchronous update mechanism can effectively form interpretable reasoning chains at different granularity levels.
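As a rough illustration of the asynchronous idea, the sketch below updates a toy two-level graph level by level, so sentence nodes see already-updated entity states. The node names, the mean-based message function, and the 0.5 mixing weight are all illustrative assumptions, not the AMGN architecture:

```python
# Hedged sketch: asynchronous update over a two-level graph.
# Everything here (averaging messages, fixed mixing weight) is a
# stand-in for the learned message passing described in the paper.

def aggregate(node, edges, states):
    # mean of neighbor states (placeholder for a learned message function)
    msgs = [states[n] for n in edges.get(node, [])]
    return sum(msgs) / len(msgs) if msgs else states[node]

def async_update(states, edges, order):
    # update each granularity level in sequence, so later levels
    # (e.g. sentences) see the already-updated states of earlier
    # levels (e.g. entities) -- the "asynchronous" part
    new_states = dict(states)
    for level in order:
        for node in level:
            new_states[node] = 0.5 * new_states[node] + \
                               0.5 * aggregate(node, edges, new_states)
    return new_states

states = {"e1": 1.0, "e2": 3.0, "s1": 0.0}
edges = {"e1": ["e2"], "e2": ["e1"], "s1": ["e1", "e2"]}
updated = async_update(states, edges, [["e1", "e2"], ["s1"]])
```

A synchronous scheme would instead compute all messages from the old states at once; the sequencing above is what lets reasoning at the sentence level build on refreshed entity representations.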


2021, Vol. 2 (3), pp. 1-23
Author(s): Josephine Lamp, Carlos E. Rubio-Medrano, Ziming Zhao, Gail-Joon Ahn

No longer merely prophesied about, cyber-attacks on Energy Delivery Systems (EDS) (e.g., the power grid and the gas and oil industries) are now very real dangers that result in non-trivial economic losses and inconveniences to modern societies. In such a context, risk analysis has been proposed as a valuable way to identify, analyze, and mitigate potential vulnerabilities, threats, and attack vectors. However, performing risk analysis for EDS is difficult due to their innate structural diversity and interdependencies, along with an ever-increasing threatscape. Therefore, there is a need for a methodology to evaluate the current system state, identify vulnerabilities, and quantify risk at multiple granularities in a collaborative manner among the different actors in the context of EDS. With this in mind, this article presents ExSol, a collaborative, real-time risk assessment ecosystem that features an approach for modeling real-life EDS infrastructures, an ontology traversal technique that retrieves well-defined security requirements from reputable documents on cyber-protection for EDS infrastructures, and a methodology for calculating risk both for a single asset and for an entire system. Moreover, we provide experimental evidence involving a series of attack scenarios in both simulated and real-world EDS environments, which ultimately encourages the adoption of ExSol in practice.
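To make the two granularities of risk concrete, here is a minimal sketch of per-asset and system-level scoring. The likelihood-times-impact formula and the max-based roll-up are generic placeholders, not the actual ExSol methodology:

```python
# Hedged sketch: generic risk aggregation at two granularities.
# The scoring formula and roll-up rule are illustrative assumptions.

def asset_risk(likelihood, impact, unmet_requirements, total_requirements):
    # scale a base likelihood * impact score by the fraction of
    # security requirements the asset fails to satisfy
    coverage_gap = unmet_requirements / total_requirements
    return likelihood * impact * coverage_gap

def system_risk(asset_risks):
    # conservative roll-up: the system is at least as risky as its
    # riskiest asset, nudged upward by the average of the others
    worst = max(asset_risks)
    rest = [r for r in asset_risks if r != worst] or [0.0]
    return min(1.0, worst + 0.1 * sum(rest) / len(rest))

risks = [asset_risk(0.8, 0.9, 3, 10),   # exposed asset, 3 of 10 unmet
         asset_risk(0.4, 0.5, 1, 10)]   # better-protected asset
total = system_risk(risks)
```

The useful property of a two-level scheme like this is that hardening one asset (reducing its unmet requirements) visibly lowers both its own score and the system score, which is what makes per-asset and whole-system views complementary.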


2021, Vol. 7, pp. e548
Author(s): Martin Grambow, Christoph Laaber, Philipp Leitner, David Bermbach

Performance problems in applications should ideally be detected as soon as they occur, i.e., directly when the causing code modification is added to the code repository. To this end, complex and cost-intensive application benchmarks or lightweight but less relevant microbenchmarks can be added to existing build pipelines to ensure performance goals. In this paper, we show how the practical relevance of microbenchmark suites can be improved and verified based on the application flow during an application benchmark run. We propose an approach to determine the overlap of common function calls between application benchmarks and microbenchmarks, describe a method that identifies redundant microbenchmarks, and present a recommendation algorithm that reveals relevant functions not yet covered by microbenchmarks. A microbenchmark suite optimized in this way can easily test all functions determined to be relevant by application benchmarks after every code change, thus significantly reducing the risk of undetected performance problems. Our evaluation using two time series databases shows that, depending on the specific application scenario, application benchmarks cover different functions of the system under test. Their respective microbenchmark suites cover between 35.62% and 66.29% of the functions called during the application benchmark, offering substantial room for improvement. Through two use cases, removing redundancies in the microbenchmark suite and recommending as-yet-uncovered functions, we decrease the total number of microbenchmarks and increase the practical relevance of both suites. Removing redundancies can significantly reduce the number of microbenchmarks (and thus the execution time) to ~10% and ~23% of the original microbenchmark suites, whereas recommendation identifies up to 26 and 14 new, uncovered functions to benchmark to improve the relevance.
By utilizing the differences and synergies of application benchmarks and microbenchmarks, our approach potentially enables effective software performance assurance with performance tests of multiple granularities.
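The three analyses described above, coverage overlap, redundancy detection, and recommendation, can be sketched with plain set operations over per-benchmark call sets. The function names and the greedy redundancy rule are illustrative, not the paper's exact algorithms:

```python
# Hedged sketch: set-based coverage analysis between an application
# benchmark and a microbenchmark suite. Call sets are toy examples.

def coverage(app_calls, micro_suite):
    # fraction of functions hit by the application benchmark that at
    # least one microbenchmark also exercises
    covered = set().union(*micro_suite.values()) & app_calls
    return len(covered) / len(app_calls)

def redundant(micro_suite):
    # a microbenchmark is redundant if the rest of the (kept) suite
    # already covers every function it calls (greedy, order-dependent)
    out = []
    for name in list(micro_suite):
        others = [c for n, c in micro_suite.items()
                  if n != name and n not in out]
        rest = set().union(*others) if others else set()
        if micro_suite[name] <= rest:
            out.append(name)
    return out

def recommend(app_calls, micro_suite):
    # functions the application benchmark reaches but no microbenchmark does
    return app_calls - set().union(*micro_suite.values())

app_calls = {"read", "write", "flush", "compact"}
micro_suite = {"m1": {"read", "write"}, "m2": {"read"}, "m3": {"encode"}}
```

In this toy suite, half of the application-exercised functions are covered, `m2` adds nothing beyond `m1`, and `flush` and `compact` would be recommended as new microbenchmark targets.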


2021, Vol. 1775 (1), pp. 012013
Author(s): Qian Kong, Chongfu Zhang, Zichuan Yi, Miao Yu, Limengnan Zhou, ...

Author(s): Jingjing Song, Huili Dou, Xiansheng Rao, Xiaojing Luo, Xuan Yan

As a feature selection technique in rough set theory, attribute reduction has been extensively explored from various viewpoints, especially that of granularity, and multi-granularity attribute reduction has attracted much attention. Nevertheless, multiple granularities must be considered simultaneously when evaluating the significance of a candidate attribute during the computation of a reduct, which may lead to long running times when searching for a reduct. To alleviate this problem, this paper proposes an acceleration strategy for neighborhood-based multi-granularity attribute reduction that aims to improve the computational efficiency of the reduct search. Our approach is realized through the positive approximation mechanism: qualified attributes are searched for by evaluating candidate attributes over a gradually reduced sample space rather than over all samples. Experimental results on 12 UCI data sets demonstrate that the acceleration strategy outperforms the naive approach of deriving a multi-granularity reduct in terms of elapsed time while producing the same reducts.
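A minimal sketch of the shrinking-sample idea: a forward greedy reduct search that drops samples once they are consistently classified, so later significance checks run over fewer samples. The toy equality-based consistency measure stands in for the neighborhood model and is not the authors' implementation:

```python
# Hedged sketch: greedy attribute reduction with a shrinking sample
# pool (the intuition behind positive approximation). Data and the
# consistency measure are illustrative toys.

def consistent(samples, attrs):
    # a sample is consistent if no other sample agrees with it on all
    # selected attributes yet carries a different label
    flags = []
    for i, (x, y) in enumerate(samples):
        clash = any(all(x[a] == x2[a] for a in attrs) and y != y2
                    for j, (x2, y2) in enumerate(samples) if j != i)
        flags.append(not clash)
    return flags

def reduct(samples, all_attrs):
    chosen, pool = [], list(samples)
    while pool:
        # pick the attribute that makes the most remaining samples consistent
        best = max((a for a in all_attrs if a not in chosen),
                   key=lambda a: sum(consistent(pool, chosen + [a])),
                   default=None)
        if best is None:
            break
        chosen.append(best)
        flags = consistent(pool, chosen)
        # shrink: consistent samples never need to be re-checked
        pool = [s for s, f in zip(pool, flags) if not f]
    return chosen

samples = [({"a": 0, "b": 0}, 0),
           ({"a": 0, "b": 1}, 1),
           ({"a": 1, "b": 0}, 1)]
```

The speed-up comes purely from the shrinking pool: each `consistent` call in later rounds scans only the samples that are still ambiguous, without changing which attributes end up in the reduct.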


2020
Author(s): Anna Letizia Allegra Mascaro, Egidio Falotico, Spase Petkoski, Maria Pasquini, Lorenzo Vannucci, ...

ABSTRACT
Being able to replicate real experiments with computational simulations is a unique opportunity to refine and validate models with experimental data and to redesign the experiments based on simulations. However, since it is technically demanding to model all components of an experiment, traditional modeling approaches reduce the experimental setup as much as possible. In this study, our goal is to replicate all the relevant features of an experiment on motor control and motor rehabilitation after stroke. To this end, we propose an approach that allows continuous integration of new experimental data into a computational modeling framework. First, our results show that we could reproduce the experimental object displacement with high accuracy via the simulated embodiment in the virtual world by feeding a spinal cord model with experimental recordings of cortical activity. Second, by using computational models of multiple granularities, our preliminary results show the possibility of simulating several features of the post-stroke brain, from local alterations in neuronal activity to long-range connectivity remodeling. Finally, strategies are proposed to merge the two pipelines. We further suggest that additional models could be integrated into the framework thanks to its versatility, thus allowing many researchers to achieve continuously improved experimental design.

