Cemetery Organization, Brood Sorting, Data Analysis, and Graph Partitioning

Author(s):  
Eric Bonabeau ◽  
Marco Dorigo ◽  
Guy Theraulaz

In the previous two chapters, foraging and division of labor were shown to be useful metaphors for designing optimization and resource allocation algorithms. In this chapter, we will see that the clustering and sorting behavior of ants has stimulated researchers to design new algorithms for data analysis and graph partitioning. Several species of ants cluster corpses to form a “cemetery,” or sort their larvae into several piles. This behavior is still not fully understood, but a simple model, in which agents move randomly in space and pick up and deposit items on the basis of local information, may account for some of the characteristic features of clustering and sorting in ants. The model can also be applied to data analysis and graph partitioning: objects with different attributes or the nodes of a graph can be considered items to be sorted. Objects placed next to each other by the sorting algorithm have similar attributes, and nodes placed next to each other are tightly connected in the graph. The sorting takes place in a two-dimensional space, thereby offering a low-dimensional representation of the objects or of the graph. Distributed clustering, and more recently sorting, by a swarm of robots have served as benchmarks for swarm-based robotics. In all cases, the robots exhibit extremely simple behavior, act on the basis of purely local information, and communicate only indirectly, except for collision avoidance.

In several species of ants, workers have been reported to form piles of corpses, literally cemeteries, to clean up their nests. Chretien [72] performed experiments with the ant Lasius niger to study the organization of cemeteries. Other experiments on the ant Pheidole pallidula are reported in Deneubourg et al. [88], and many other species organize cemeteries as well. Figure 4.1 shows the dynamics of cemetery organization in another ant, Messor sancta. If corpses, or, more precisely, sufficiently large parts of corpses, are randomly distributed in space at the beginning of the experiment, the workers form cemetery clusters within a few hours.
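The pick-up and deposit rules described above can be sketched in a few lines: an unladen agent picks up an item with a probability that decreases with the local item density f, and a laden agent drops its item with a probability that increases with f. A minimal sketch in this spirit, where the grid size and the constants K1 and K2 are illustrative assumptions rather than values from the chapter:

```python
import random

# Illustrative parameters (assumptions, not values from the chapter).
K1, K2 = 0.1, 0.3
SIZE = 20

def local_density(grid, x, y):
    """Fraction of the 8 neighbouring cells that hold an item."""
    n = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0) and grid[(x + dx) % SIZE][(y + dy) % SIZE]:
                n += 1
    return n / 8.0

def p_pick(f):
    """Pick-up probability: high when an item is isolated (f small)."""
    return (K1 / (K1 + f)) ** 2

def p_drop(f):
    """Deposit probability: high near an existing cluster (f large)."""
    return (f / (K2 + f)) ** 2

def step(grid, ants, rng):
    """One update: each ant moves randomly, then may pick up or drop an item."""
    for ant in ants:
        ant["x"] = (ant["x"] + rng.choice((-1, 0, 1))) % SIZE
        ant["y"] = (ant["y"] + rng.choice((-1, 0, 1))) % SIZE
        x, y = ant["x"], ant["y"]
        f = local_density(grid, x, y)
        if ant["load"] is None and grid[x][y] and rng.random() < p_pick(f):
            ant["load"], grid[x][y] = grid[x][y], 0
        elif ant["load"] is not None and not grid[x][y] and rng.random() < p_drop(f):
            grid[x][y], ant["load"] = ant["load"], None
```

Iterating `step` with a handful of agents over a sparsely seeded grid gradually merges small heaps into fewer, larger clusters, mirroring the cemetery dynamics of figure 4.1.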

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Hirokazu Tanaka

Abstract
EEG is known to contain considerable inter-trial and inter-subject variability, which poses a challenge for any group-level EEG analysis. A true experimental effect must be reproducible even with variability across trials, sessions, and subjects. Extracting components that are reproducible across trials and subjects benefits both the understanding of common mechanisms in the neural processing of cognitive functions and the building of robust brain-computer interfaces. This study extends our previous method (task-related component analysis, TRCA) by maximizing not only trial-by-trial reproducibility within single subjects but also similarity across a group of subjects, hence referred to as group TRCA (gTRCA). The problem of maximizing the reproducibility of time series across trials and subjects is formulated as a generalized eigenvalue problem. We applied gTRCA to EEG data recorded from 35 subjects during a steady-state visual-evoked potential (SSVEP) experiment. The results revealed that: (1) the group-representative data computed by gTRCA showed higher and more consistent spectral peaks than other conventional methods; (2) scalp maps obtained by gTRCA consistently placed estimated source locations within the occipital lobe; and (3) the high-dimensional features extracted by gTRCA are consistently mapped to a low-dimensional space. We conclude that gTRCA offers a framework for group-level EEG data analysis and brain-computer interfaces that complements grand averaging.
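The abstract states that maximizing reproducibility reduces to a generalized eigenvalue problem, A w = λ B w. A minimal sketch of what that means for the 2×2 symmetric case, solving det(A − λB) = 0 directly; the matrices here are illustrative stand-ins, not actual EEG covariances:

```python
# Solve the 2x2 generalized eigenvalue problem det(A - lam*B) = 0.
# In gTRCA terms, A would capture cross-trial/cross-subject covariance and
# B the total covariance; here both are toy matrices for illustration.
def generalized_eig_2x2(A, B):
    """Return the two generalized eigenvalues, sorted ascending."""
    # Expand det(A - lam*B) into a quadratic a*lam^2 + b*lam + c.
    a = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    b = -(A[0][0] * B[1][1] + A[1][1] * B[0][0]
          - A[0][1] * B[1][0] - A[1][0] * B[0][1])
    c = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = (b * b - 4 * a * c) ** 0.5
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
```

The eigenvector for the largest λ gives the spatial filter w whose projected time series is maximally reproducible; in practice one would use a library routine (e.g., `scipy.linalg.eigh(A, B)`) rather than this closed form.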


Author(s):  
Muhammad Amjad

Advances in manifold learning have proven to be of great benefit in reducing the dimensionality of large, complex datasets. Elements in an intricate dataset typically live in a high-dimensional space, as the number of individual features or independent variables is extensive. However, these elements can often be described by a low-dimensional manifold with well-defined parameters. By constructing such a low-dimensional manifold embedded in the high-dimensional feature space, the dataset can be simplified for easier interpretation. Despite this dimensionality reduction, the data lose no essential information; rather, the information is filtered in the hope of elucidating the relevant structure. This paper explores the importance of this method of data analysis, its applications, and its extensions into topological data analysis.


NeuroImage ◽  
2021 ◽  
pp. 118200
Author(s):  
Sayan Ghosal ◽  
Qiang Chen ◽  
Giulio Pergola ◽  
Aaron L. Goldman ◽  
William Ulrich ◽  
...  

2021 ◽  
pp. 104973232110024
Author(s):  
Heather Burgess ◽  
Kate Jongbloed ◽  
Anna Vorobyova ◽  
Sean Grieve ◽  
Sharyle Lyndon ◽  
...  

Community-based participatory research (CBPR) has a long history within HIV research, yet little work has focused on facilitating team-based data analysis within CBPR. Our team adapted Thorne’s interpretive description (ID) for CBPR analysis, using a color-coded “sticky notes” system to conduct data fragmentation and synthesis. Sticky notes were used to record, visualize, and communicate emerging insights over the course of 11 in-person participatory sessions. Data fragmentation strategies were employed in an iterative four-step process that was reached by consensus. During synthesis, the team created and recreated mind maps of the 969 sticky notes, from which we developed categories and themes through discussion. Flexibility, trust, and discussion were key components that facilitated the evolution of the final process. An interactive, team-based approach was central to data co-creation and capacity building, whereas the “sticky notes” system provided a framework for identifying and sorting data.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4454 ◽  
Author(s):  
Marek Piorecky ◽  
Vlastimil Koudelka ◽  
Jan Strobl ◽  
Martin Brunovsky ◽  
Vladimir Krajca

Simultaneous recordings of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) are at the forefront of technologies of interest to physicians and scientists because they combine the benefits of both modalities: the better time resolution of (hd)EEG and the better spatial resolution of fMRI. However, EEG measured in the scanner is corrupted by an electromagnetic field induced in the leads as a result of gradient switching, slight head movements, and vibrations, and by changes in the measured potential due to the Hall phenomenon. The aim of this study is to design and test a methodology for inspecting hidden EEG structures with respect to artifacts. We propose a top-down strategy to obtain additional information that is not visible in a single recording. A time-domain independent component analysis (ICA) algorithm was employed to obtain independent components and spatial weights. A nonlinear dimension-reduction technique, t-distributed stochastic neighbor embedding (t-SNE), was used to create a low-dimensional space, which was then partitioned using density-based spatial clustering of applications with noise (DBSCAN). The relationships between the discovered data structure and the criteria used were investigated. As a result, we were able to extract information from the data structure regarding electrooculographic, electrocardiographic, electromyographic, and gradient artifacts. This new methodology could facilitate the identification of artifacts and their residues in simultaneous EEG-fMRI.
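The final stage of the pipeline described above, partitioning the low-dimensional embedding with DBSCAN, can be sketched in pure Python on 2-D points. The `eps` and `min_pts` values below are illustrative; real inputs would be t-SNE coordinates of the independent components:

```python
# Minimal DBSCAN sketch on 2-D points; label -1 marks noise.
def dbscan(points, eps, min_pts):
    labels = [None] * len(points)

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisionally noise (may become border)
            continue
        cluster += 1                  # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise reached from a core -> border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # only core points expand the cluster
                queue.extend(jn)
    return labels
```

Components whose embedded coordinates fall in the same dense region receive the same label, while isolated components are flagged as noise, which is what makes the method useful for separating artifact families.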


2021 ◽  
Vol 12 (5) ◽  
pp. 1-25
Author(s):  
Shengwei Ji ◽  
Chenyang Bu ◽  
Lei Li ◽  
Xindong Wu

Graph edge partitioning, which is essential for the efficiency of distributed graph computation systems, divides a graph into several balanced partitions of bounded size so as to minimize the number of vertices that are cut. Existing graph partitioning models can be classified into two categories: offline and streaming graph partitioning models. The former requires global graph information during the partitioning, which is expensive in terms of time and memory for large-scale graphs. The latter creates partitions based solely on the received graph information. However, the streaming model may result in a lower partitioning quality compared with the offline model. Therefore, this study introduces a Local Graph Edge Partitioning model, which considers only local information (i.e., a portion of a graph instead of the entire graph) during the partitioning. Considering only the local graph information is meaningful because acquiring complete information for large-scale graphs is expensive. Based on the Local Graph Edge Partitioning model, two local graph edge partitioning algorithms—Two-stage Local Partitioning and Adaptive Local Partitioning—are proposed. Experimental results obtained on 14 real-world graphs demonstrate that the proposed algorithms outperform rival algorithms in most tested cases. Furthermore, the proposed algorithms are proven to significantly improve the efficiency of the real graph computation system GraphX.
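The flavor of streaming edge partitioning the abstract contrasts against can be sketched with a generic greedy heuristic: assign each arriving edge to the partition that already holds its endpoints (to limit vertex replication), subject to a balance cap. This is a common degree/locality scoring rule, not the paper's Two-stage or Adaptive Local Partitioning algorithms, whose details the abstract does not give:

```python
# Greedy streaming edge partitioner (generic heuristic sketch).
# capacity must be at least ceil(len(edges) / k), or assignment can fail.
def partition_edges(edges, k, capacity):
    """Assign each edge to one of k partitions, preferring partitions that
    already replicate one of its endpoints; ties broken by lower load."""
    parts = [set() for _ in range(k)]   # vertex replicas per partition
    loads = [0] * k                     # edges per partition
    assignment = []
    for u, v in edges:
        best, best_score = None, None
        for p in range(k):
            if loads[p] >= capacity:    # balance constraint
                continue
            # +1 per endpoint already present, minus a load penalty.
            score = (u in parts[p]) + (v in parts[p]) - loads[p] / capacity
            if best is None or score > best_score:
                best, best_score = p, score
        parts[best].add(u)
        parts[best].add(v)
        loads[best] += 1
        assignment.append(best)
    return assignment, loads
```

Because each decision uses only the partial state built so far, the scheme is "local" in the abstract's sense: no global view of the graph is ever required.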


2018 ◽  
Vol 37 (10) ◽  
pp. 1233-1252 ◽  
Author(s):  
Jonathan Hoff ◽  
Alireza Ramezani ◽  
Soon-Jo Chung ◽  
Seth Hutchinson

In this article, we present methods to optimize the design and flight characteristics of a biologically inspired bat-like robot. In previous work, we designed the topological structure of this robot's wing kinematics; here we present methods to optimize the geometry of this structure and to compute actuator trajectories such that its wingbeat pattern closely matches that of its biological counterparts. Our approach is motivated by recent studies of biological bat flight showing that the salient aspects of wing motion can be accurately represented in a low-dimensional space. Although bats have over 40 degrees of freedom (DoFs), our robot possesses several biologically meaningful morphing specializations. We use principal component analysis (PCA) to characterize the two most dominant modes of biological bat flight kinematics, and we optimize our robot's parametric kinematics to mimic them. The method yields a robot that is reduced from five degrees of actuation (DoAs) to just three, and that actively folds its wings within a wingbeat period. As a result of mimicking these synergies, the robot produces an average net lift improvement of 89% over the same robot with wings that cannot fold.
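The core PCA step, finding the dominant mode of a set of kinematic samples, has a closed form in two dimensions: the first principal axis is the leading eigenvector of the 2×2 covariance matrix. A minimal sketch (real bat kinematics would be far higher-dimensional, so this is purely illustrative):

```python
import math

# First principal axis of 2-D data via the closed-form eigenvector angle
# of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]].
def first_principal_axis(data):
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    # Angle of the dominant eigenvector of the covariance matrix.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (math.cos(theta), math.sin(theta))
```

Projecting each wingbeat sample onto the first few such axes gives the low-dimensional "synergy" coordinates that the robot's three actuated degrees of freedom are optimized to reproduce.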


2014 ◽  
Vol 30 (2) ◽  
pp. 463-475 ◽  
Author(s):  
Masaki Mitsuhiro ◽  
Hiroshi Yadohisa

Author(s):  
Lars Kegel ◽  
Claudio Hartmann ◽  
Maik Thiele ◽  
Wolfgang Lehner

Abstract
Processing and analyzing time series datasets have become a central issue in many domains, requiring data management systems to support time series as a native data type. A core access primitive for time series is matching, which requires efficient algorithms on top of appropriate representations; the symbolic aggregate approximation (SAX) represents the current state of the art. This technique reduces a time series to a low-dimensional space by segmenting it and discretizing each segment into a small symbolic alphabet. Unfortunately, SAX ignores deterministic behavior of time series such as cyclically repeating patterns or a trend component affecting all segments, which can lead to sub-optimal representation accuracy. We therefore introduce a novel season- and trend-aware symbolic approximation and demonstrate improved representation accuracy without increasing the memory footprint. Most importantly, our techniques also enable more efficient time series matching, providing a match up to three orders of magnitude faster than SAX.
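The baseline SAX transform the abstract builds on is compact enough to sketch directly: z-normalize the series, average it over equal-width segments (piecewise aggregate approximation), then map each segment mean to a letter using standard-normal breakpoints. Below, the breakpoints are the quartiles of N(0, 1), which are the standard choice for a 4-letter alphabet:

```python
import statistics

# Quartiles of the standard normal distribution: equiprobable 4-letter alphabet.
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
ALPHABET = "abcd"

def sax(series, n_segments):
    """Return the SAX word of a series (assumes len(series) % n_segments == 0)."""
    mean = statistics.fmean(series)
    std = statistics.pstdev(series)
    z = [(x - mean) / std for x in series]          # z-normalize
    seg = len(z) // n_segments
    word = []
    for i in range(n_segments):
        avg = sum(z[i * seg:(i + 1) * seg]) / seg   # PAA: segment mean
        letter = sum(avg > b for b in BREAKPOINTS)  # count breakpoints below avg
        word.append(ALPHABET[letter])
    return "".join(word)
```

A season- or trend-aware variant, as proposed in the paper, would subtract the deterministic component before this discretization so that every segment's symbol budget is spent on the residual signal.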


2020 ◽  
Author(s):  
Jessica Dafflon ◽  
Pedro F. Da Costa ◽  
František Váša ◽  
Ricardo Pio Monti ◽  
Danilo Bzdok ◽  
...  

Abstract
For most neuroimaging questions, the huge range of possible analytic choices means that conclusions from any single analytic approach may be misleading. Examples of such choices include the motion-regression approach used and the smoothing and threshold factors applied during the processing pipeline. Although it is possible to perform a multiverse analysis that evaluates all possible analytic choices, this can be computationally challenging, and repeated sequential analyses on the same data can compromise inferential and predictive power. Here, we establish how active learning on a low-dimensional space that captures the inter-relationships between analysis approaches can be used to efficiently approximate the whole multiverse of analyses. This approach retains the benefits of a multiverse analysis without the accompanying cost to statistical power, computational resources, and the integrity of inferences. We illustrate it with a functional MRI dataset of functional connectivity across adolescence, demonstrating how a multiverse of graph-theoretic and simple pre-processing steps can be efficiently navigated using active learning. Our study shows how this approach can identify the subset of analysis techniques (i.e., pipelines) that best predict participants' ages, as well as allowing the performance of different approaches to be quantified.
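The navigation idea, evaluating only a few pipelines and using their positions in a low-dimensional embedding to decide what to try next, can be illustrated with a simple space-filling acquisition rule: always evaluate the pipeline farthest from everything evaluated so far. This is a generic stand-in; the paper's actual acquisition function is not specified in the abstract:

```python
# Farthest-point sampling over pipelines embedded as 2-D points:
# a toy acquisition rule for active exploration of the "multiverse".
def farthest_point_sampling(points, n_evals):
    """Return indices of pipelines to evaluate, starting from index 0."""
    evaluated = [0]
    while len(evaluated) < n_evals:
        def min_dist(i):
            xi, yi = points[i]
            return min((xi - points[j][0]) ** 2 + (yi - points[j][1]) ** 2
                       for j in evaluated)
        candidates = [i for i in range(len(points)) if i not in evaluated]
        # Pick the candidate whose nearest evaluated neighbour is farthest away.
        evaluated.append(max(candidates, key=min_dist))
    return evaluated
```

Each newly evaluated pipeline's result would then update a surrogate model over the embedding, so that the full multiverse can be approximated from a small, well-spread subset of runs.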

