Coping with demand volatility in retail pharmacies with the aid of big data exploration

2018 ◽  
Vol 98 ◽  
pp. 343-354 ◽  
Author(s):  
Christos I. Papanagnou ◽  
Omeiza Matthews-Amune
2018 ◽  
Vol 618 ◽  
pp. A13 ◽  
Author(s):  
Maarten A. Breddels ◽  
Jovan Veljanoski

We present a new Python library, called vaex, intended to handle extremely large tabular datasets such as astronomical catalogues like the Gaia catalogue, N-body simulations, or other datasets which can be structured in rows and columns. Fast computation of statistics on regular N-dimensional grids allows analysis and visualization on the order of a billion rows per second on a high-end desktop computer. We use streaming algorithms, memory-mapped files, and a zero-memory-copy policy to allow exploration of datasets larger than memory, i.e. out-of-core algorithms. Vaex allows arbitrary (mathematical) transformations using normal Python expressions and (a subset of) numpy functions, which are "lazily" evaluated and computed when needed in small chunks, avoiding wasted memory. Boolean expressions (which are also lazily evaluated) can be used to explore subsets of the data, which we call selections. Vaex uses a DataFrame API similar to that of Pandas, a very popular library, which eases migration from Pandas. Visualization is one of the key points of vaex, and is done using binned statistics in 1d (e.g. histograms), in 2d (e.g. 2d histograms with colourmapping) and in 3d (using volume rendering). Vaex is split into several packages: vaex-core for the computational part, vaex-viz for visualization mostly based on matplotlib, vaex-jupyter for visualization in the Jupyter notebook/lab based on IPyWidgets, vaex-server for the (optional) client-server communication, vaex-ui for the Qt-based interface, vaex-hdf5 for HDF5-based memory-mapped storage, and vaex-astro for astronomy-related selections, transformations, and memory-mapped (column-based) FITS storage.
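The streaming, chunked computation of binned statistics that the abstract describes can be illustrated with a small conceptual sketch. This is not vaex's actual implementation, only a plain-numpy illustration of the idea: a 2d histogram is accumulated over fixed-size chunks, so only one chunk ever needs to be resident in memory at a time, which is what makes out-of-core operation on memory-mapped columns possible. The function name `chunked_histogram2d` and all parameter choices are illustrative assumptions, not part of the vaex API.

```python
import numpy as np

def chunked_histogram2d(x, y, bins=64, limits=((-5, 5), (-5, 5)), chunk=100_000):
    """Accumulate a 2d histogram chunk by chunk, the way a streaming
    algorithm would process a column larger than memory."""
    grid = np.zeros((bins, bins), dtype=np.int64)
    for start in range(0, len(x), chunk):
        # Each chunk contributes an independent partial histogram; summing
        # partial grids over disjoint chunks gives the exact full result.
        h, _, _ = np.histogram2d(x[start:start + chunk],
                                 y[start:start + chunk],
                                 bins=bins, range=limits)
        grid += h.astype(np.int64)
    return grid

rng = np.random.default_rng(0)
x = rng.normal(size=300_000)
y = rng.normal(size=300_000)
grid = chunked_histogram2d(x, y)
# Points falling outside the fixed limits are dropped by the binning.
assert grid.sum() <= 300_000
```

Because the bin edges are fixed up front, the chunked accumulation is exactly equal to a single-pass histogram over the whole array; the same grid can then be fed to any plotting backend (e.g. matplotlib's `imshow`) for the colourmapped 2d views the abstract mentions.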


2020 ◽  
Vol 102 ◽  
pp. 84-94 ◽  
Author(s):  
Michele Ianni ◽  
Elio Masciari ◽  
Giuseppe M. Mazzeo ◽  
Mario Mezzanzanica ◽  
Carlo Zaniolo
Keyword(s):  
Big Data ◽  

Author(s):  
Fernando Almeida ◽  
Pavel Kovalevski ◽  
Dovydas Sakalauskas

2017 ◽  
Author(s):  
Michael P. Milham ◽  
R. Cameron Craddock ◽  
Arno Klein

Despite decades of research, visions of transforming neuropsychiatry through the development of brain imaging-based 'growth charts' or 'lab tests' have remained out of reach. In recent years, there has been renewed enthusiasm about the prospect of achieving clinically useful tools capable of aiding the diagnosis and management of neuropsychiatric disorders. The present work explores the basis for this enthusiasm. We assert that no single advance currently has the potential to drive the field of clinical brain imaging forward. Instead, there has been a constellation of advances that, if combined, could lead to the identification of objective brain imaging-based markers of illness. In particular, we focus on advances that are helping to: 1) elucidate the research agenda for biological psychiatry (e.g., neuroscience focus, precision medicine), 2) shift research models for clinical brain imaging (e.g., big data exploration, standardization), 3) break down research silos (e.g., open science, calls for reproducibility and transparency), and 4) improve imaging technologies and methods. While an arduous road remains ahead, these advances are repositioning the brain imaging community for long-term success.

