Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

2016 ◽  
Vol 94 (2) ◽  
Author(s):  
Ulisse Ferrari


2018 ◽
Author(s):  
Trang-Anh Nghiem ◽  
Bartosz Telenczuk ◽  
Olivier Marre ◽  
Alain Destexhe ◽  
Ulisse Ferrari

Maximum Entropy models can be inferred from large data sets to uncover how local interactions generate collective dynamics. Here, we employ such models to investigate the characteristics of neurons recorded by multielectrode arrays in the cortex of humans and monkeys across states of wakefulness and sleep. Taking advantage of the separation into excitatory and inhibitory cell types, we construct a model that includes this distinction. By comparing the performance of Maximum Entropy models at predicting neural activity in wakefulness and deep sleep, we identify the dominant interactions between neurons in each brain state. We find that during wakefulness the dominant functional interactions are pairwise, while during sleep interactions are population-wide. In particular, inhibitory neurons are shown to be strongly tuned to the inhibitory population. This shows that Maximum Entropy models can be useful for analyzing data sets with excitatory and inhibitory neurons, and can reveal the role of inhibitory neurons in organizing coherent dynamics in the cerebral cortex.
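
The two entries above fit pairwise Maximum Entropy (Ising-like) models to recorded spike trains. As a minimal illustration of the idea, and not the authors' algorithm, the sketch below fits fields and couplings by gradient ascent so that the model's means and pairwise correlations match the data; it enumerates all 2^N states exactly, so it is only feasible for small toy populations, and the function name fit_pairwise_maxent is our own.

    # Minimal sketch: fit a pairwise maximum entropy model P(s) ~ exp(h·s + s·J·s/2)
    # to binary spike patterns by matching empirical means and pairwise correlations.
    # Exact enumeration of all 2^N states restricts this to small N (toy illustration).
    import itertools
    import numpy as np

    def fit_pairwise_maxent(spikes, lr=0.1, n_iter=2000):
        """spikes: (n_samples, N) binary array. Returns fields h and couplings J."""
        spikes = np.asarray(spikes, dtype=float)
        n_samples, N = spikes.shape
        mean_emp = spikes.mean(axis=0)                   # <s_i> in the data
        corr_emp = spikes.T @ spikes / n_samples         # <s_i s_j> in the data
        states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
        h, J = np.zeros(N), np.zeros((N, N))
        for _ in range(n_iter):
            energies = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
            p = np.exp(energies - energies.max())
            p /= p.sum()
            mean_mod = p @ states                        # <s_i> under the model
            corr_mod = states.T @ (states * p[:, None])  # <s_i s_j> under the model
            h += lr * (mean_emp - mean_mod)              # ascend the log-likelihood
            J += lr * (corr_emp - corr_mod)
            np.fill_diagonal(J, 0.0)
        return h, J

For realistic population sizes the exact sum over states is intractable, which is why the first entry above develops a fast, data-driven inference and sampling scheme instead of brute-force enumeration.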


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and tightening input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can quickly identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was 2× faster than using a single compression method uniformly.
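
As a rough sketch of the region-based idea, and not the authors' pipeline, the code below scores each block of a simulation field with a simple importance metric (mean gradient magnitude) and compresses unimportant blocks lossily by downcasting precision while keeping important blocks lossless; the metric, the threshold, and names such as compress_field are illustrative assumptions.

    # Minimal sketch of importance-guided, per-region compression: score each block
    # of a 3D field, keep high-importance blocks lossless, downcast the rest before
    # compressing. Assumes the field dimensions are multiples of the block size.
    import zlib
    import numpy as np

    def block_importance(region):
        """Mean gradient magnitude as a crude stand-in for an importance metric."""
        grads = np.gradient(region.astype(np.float64))
        return float(np.mean(np.sqrt(sum(g * g for g in grads))))

    def compress_field(field, block=32, threshold=0.05):
        """Split a 3D float32 field into blocks and compress each one independently."""
        chunks = []
        for x in range(0, field.shape[0], block):
            for y in range(0, field.shape[1], block):
                for z in range(0, field.shape[2], block):
                    region = field[x:x+block, y:y+block, z:z+block]
                    if block_importance(region) >= threshold:
                        payload = region.tobytes()                     # lossless path
                    else:
                        payload = region.astype(np.float16).tobytes()  # lossy downcast
                    chunks.append(((x, y, z), zlib.compress(payload)))
        return chunks

Decompression would reverse the process block by block; the reported speedups come from matching the compressor to each region rather than applying one method uniformly, which this toy threshold only gestures at.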


2020 ◽  
Vol 45 (s1) ◽  
pp. 535-559
Author(s):  
Christian Pentzold ◽  
Lena Fölsche

Our article examines how journalistic reports and online comments have made sense of computational politics. It treats the discourse around data-driven campaigns as its object of analysis and codifies four main perspectives that have structured the debates about the use of large data sets and data analytics in elections. We study American, British, and German sources on the 2016 United States presidential election, the 2017 United Kingdom general election, and the 2017 German federal election. There, groups of speakers maneuvered between enthusiastic, skeptical, agnostic, or admonitory stances and so cannot be clearly mapped onto these four discursive positions. Alongside these inconsistent accounts, public sensemaking was marked by an atmosphere of speculation about the substance and effects of computational politics. We conclude that this equivocality helped journalists and commentators to sideline prior reporting on the issue in order to repeatedly rediscover practices they had already covered.


2015 ◽  
Vol 639 ◽  
pp. 21-30 ◽  
Author(s):  
Stephan Purr ◽  
Josef Meinhardt ◽  
Arnulf Lipp ◽  
Axel Werner ◽  
Martin Ostermair ◽  
...  

Data-driven quality evaluation in the stamping process of car body parts is quite promising because dependencies in the process have not yet been sufficiently researched. However, applying data mining methods to the process in stamping plants would require a large number of sample data sets. Today, acquiring these data represents a major challenge, because the necessary data are inadequately measured, recorded, or stored. Thus, the preconditions for sample data acquisition must first be created before any correlations can be investigated. In addition, the process conditions change over time due to wear mechanisms, so results do not remain valid and continuous data acquisition is required. In this publication, the current situation in stamping plants regarding process robustness will first be discussed, and the need for data-driven methods will be shown. Subsequently, the state of technology regarding the collection of sample data sets for quality analysis in the production of car body parts will be reviewed. Finally, an overview will be provided of how this data collection was implemented at BMW and what potential can be expected.
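
Purely to illustrate the acquisition problem described above, and not BMW's actual implementation, the sketch below appends a timestamped record per press stroke and restricts later analysis to a recent window, since tool wear makes older samples unrepresentative; the field names (press_force_kN, draw_depth_mm) and the one-week window are hypothetical.

    # Illustrative only: continuously log per-stroke process records and keep a
    # recent window for model building, because wear shifts the process over time.
    import csv
    import time
    from pathlib import Path

    LOG = Path("stamping_samples.csv")
    FIELDS = ["timestamp", "part_id", "press_force_kN", "draw_depth_mm", "quality_ok"]

    def record_stroke(part_id, press_force_kN, draw_depth_mm, quality_ok):
        """Append one sample; write the header when the file is first created."""
        new_file = not LOG.exists()
        with LOG.open("a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({"timestamp": time.time(), "part_id": part_id,
                             "press_force_kN": press_force_kN,
                             "draw_depth_mm": draw_depth_mm,
                             "quality_ok": quality_ok})

    def recent_samples(max_age_s=7 * 24 * 3600):
        """Return only samples young enough to reflect the current tool condition."""
        cutoff = time.time() - max_age_s
        with LOG.open() as f:
            return [row for row in csv.DictReader(f) if float(row["timestamp"]) >= cutoff]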

