A Study on the Application of Convolutional Neural Networks to Fall Detection Evaluated with Multiple Public Datasets

Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1466 ◽  
Author(s):  
Eduardo Casilari ◽  
Raúl Lora-Rivera ◽  
Francisco García-Lagos

Due to the repercussions of falls on both the health and self-sufficiency of older people and on the financial sustainability of healthcare systems, the study of wearable fall detection systems (FDSs) has gained considerable attention in recent years. The core of an FDS is the algorithm that discriminates falls from conventional Activities of Daily Living (ADLs). This work presents and evaluates a convolutional deep neural network applied to identifying fall patterns based on the measurements collected by a transportable tri-axial accelerometer. In contrast with most works in the related literature, the evaluation is performed against a wide set of public data repositories containing the traces obtained from diverse groups of volunteers during the execution of ADLs and mimicked falls. Although the method can yield very good results when it is hyper-parameterized for a certain dataset, the global evaluation with the other repositories highlights the difficulty of extrapolating to other testbeds a network architecture that was configured and optimized for a particular dataset.
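As a rough illustration of the preprocessing such a network typically relies on, the sketch below segments a tri-axial accelerometer trace into fixed-length overlapping windows and computes the peak acceleration magnitude per window. The window length, overlap, and toy trace are illustrative assumptions, not the paper's actual hyper-parameters.

```python
# Sketch: segmenting a tri-axial accelerometer trace into overlapping
# windows, the usual step before feeding samples to a 1D CNN.
import math

def magnitude(sample):
    """Euclidean norm of one (x, y, z) acceleration sample, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def segment(trace, window=128, step=64):
    """Split a list of (x, y, z) samples into overlapping windows."""
    return [trace[i:i + window]
            for i in range(0, len(trace) - window + 1, step)]

# Toy trace: rest at ~1 g, a short simulated impact, then rest again.
trace = [(0.0, 0.0, 1.0)] * 150 + [(1.5, -2.0, 3.0)] * 20 + [(0.0, 0.0, 1.0)] * 130
windows = segment(trace)
peaks = [max(magnitude(s) for s in w) for w in windows]
print(len(windows), round(max(peaks), 3))
```

A classifier would then consume each window (or features derived from it, such as these per-window peaks) rather than the raw continuous stream.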

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1831
Author(s):  
Armando Collado-Villaverde ◽  
Mario Cobos ◽  
Pablo Muñoz ◽  
David F. Barrero

People’s life expectancy is increasing, resulting in a growing elderly population. That population is subject to dependency issues, with falls being particularly problematic due to the associated health complications. Some projects try to enhance the independence of elderly people by monitoring their status, typically by means of wearable devices. These devices often feature Machine Learning (ML) algorithms for fall detection using accelerometers. However, the software deployed often lacks reliable data for training the models. To overcome this issue, we have developed a publicly available fall simulator capable of recreating accelerometer samples of two of the most common types of falls: syncope and forward. The simulated samples resemble real falls recorded with actual accelerometers, so that they can later be used as input for ML applications. To validate our approach, we used different classifiers over both the simulated falls and data from two public datasets based on real recordings. Our tests show that the fall simulator generates accelerometer data of a fall with high accuracy, allowing the creation of larger datasets for training fall detection software in wearable devices.
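To give a flavour of what such a simulator produces, the sketch below generates a fall-like acceleration-magnitude profile: a free-fall dip toward 0 g, a brief impact spike, then rest at 1 g. The phase durations and amplitudes are illustrative assumptions, not the simulator's actual model.

```python
# Sketch: generating a synthetic fall-like accelerometer magnitude profile.
def simulate_fall(rate_hz=50, freefall_s=0.4, impact_g=4.0):
    pre = [1.0] * rate_hz                      # 1 s standing (~1 g)
    fall = [0.1] * int(freefall_s * rate_hz)   # free-fall phase (~0 g)
    impact = [impact_g, impact_g / 2]          # short impact spike
    post = [1.0] * rate_hz                     # 1 s lying still (~1 g)
    return pre + fall + impact + post

signal = simulate_fall()
print(len(signal), max(signal), min(signal))
```

Real simulators shape these phases against recorded falls; the point here is only the characteristic dip-then-spike signature that classifiers learn to detect.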


Sensors ◽  
2017 ◽  
Vol 17 (7) ◽  
pp. 1513 ◽  
Author(s):  
Eduardo Casilari ◽  
José-Antonio Santoyo-Ramón ◽  
José-Manuel Cano-García

2021 ◽  
Author(s):  
Alexander M Waldrop ◽  
John B Cheadle ◽  
Kira Bradford ◽  
Nathan T Braswell ◽  
Matt Watson ◽  
...  

As the number of public data resources continues to proliferate, identifying relevant datasets across heterogeneous repositories is becoming critical to answering scientific questions. To help researchers navigate this data landscape, we developed Dug: a semantic search tool for biomedical datasets that utilizes evidence-based relationships from curated knowledge graphs to find relevant datasets and explain why those results are returned. Developed through the National Heart, Lung, and Blood Institute's (NHLBI) BioData Catalyst ecosystem, Dug can index more than 15,911 study variables from public datasets in just over 39 minutes. On a manually curated search dataset, Dug's mean recall (total relevant results/total results) of 0.79 outperformed default Elasticsearch's mean recall of 0.76. When using synonyms or related concepts as search queries, Dug's mean recall (0.28) far outperforms Elasticsearch's (0.10). Dug is freely available at https://github.com/helxplatform/dug, and an example Dug deployment is available for use at https://helx.renci.org/ui.
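For readers unfamiliar with the metric being compared, the sketch below computes mean recall over a set of queries in the standard information-retrieval sense (relevant results returned divided by all relevant results for the query). The toy variable IDs are illustrative assumptions, not data from the study.

```python
# Sketch: mean recall across queries, the metric used to compare
# Dug against default Elasticsearch.
def recall(returned, relevant):
    """Fraction of the relevant items that appear in the returned list."""
    if not relevant:
        return 0.0
    return len(set(returned) & set(relevant)) / len(relevant)

def mean_recall(runs):
    """runs: list of (returned_ids, relevant_ids) pairs, one per query."""
    return sum(recall(ret, rel) for ret, rel in runs) / len(runs)

runs = [
    (["v1", "v2", "v3"], ["v1", "v2"]),  # both relevant variables found
    (["v4"], ["v4", "v5"]),              # one of two relevant found
]
print(mean_recall(runs))  # (1.0 + 0.5) / 2 = 0.75
```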


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 672 ◽  
Author(s):  
Ben Busby ◽  
Matthew Lesko ◽  
Lisa Federer ◽  

In genomics, bioinformatics and other areas of data science, gaps exist between extant public datasets and the open-source software tools built by the community to analyze similar data types. The purpose of biological data science hackathons is to assemble groups of genomics or bioinformatics professionals and software developers to rapidly prototype software to address these gaps. The only two rules for the NCBI-assisted hackathons run so far are that 1) data either must be housed in public data repositories or be deposited to such repositories shortly after the hackathon’s conclusion, and 2) all software comprising the final pipeline must be open-source or open-use. Proposed topics, as well as suggested tools and approaches, are distributed to participants at the beginning of each hackathon and refined during the event. Software, scripts, and pipelines are developed and published on GitHub, a web service providing publicly available, free-usage tiers for collaborative software development. The code resulting from each hackathon is published at https://github.com/NCBI-Hackathons/ with separate directories or repositories for each team.



Author(s):  
Laxmi Remer ◽  
Hanna Kattilakoski

Abstract The topic of financial sustainability in microfinance institutions has become more important as an increasing number of Microfinance Institutions (MFIs) seek operational self-sufficiency, which translates into financial sustainability. This study aims to identify the factors that drive operational self-sufficiency in microfinance institutions. To accomplish this, 416 MFIs in sub-Saharan Africa are studied and several drivers of operational self-sufficiency are empirically analyzed. Results indicate that these drivers are return on assets and the ratios total expenses/assets and financial revenues/assets. The results imply that MFIs should encourage cost-management measures. They also reveal that there may not be a significant trade-off between self-sufficiency and outreach. These findings will enable microfinance institutions worldwide to sharpen their institutional capabilities to achieve operational self-sufficiency, and they also provide policymakers with more focused tools to assist industry development.
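The three drivers the study identifies are simple balance-sheet ratios. The sketch below computes them for a toy MFI; the field names and figures are illustrative assumptions, not data from the study's 416 institutions.

```python
# Sketch: the three empirically identified drivers of operational
# self-sufficiency, computed from a toy MFI balance sheet.
def drivers(net_income, total_expenses, financial_revenues, total_assets):
    return {
        "return_on_assets": net_income / total_assets,
        "expenses_to_assets": total_expenses / total_assets,
        "revenues_to_assets": financial_revenues / total_assets,
    }

d = drivers(net_income=120_000, total_expenses=800_000,
            financial_revenues=950_000, total_assets=4_000_000)
print(d)
```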


GigaScience ◽  
2021 ◽  
Vol 10 (2) ◽  
Author(s):  
Guilhem Sempéré ◽  
Adrien Pétel ◽  
Magsen Abbé ◽  
Pierre Lefeuvre ◽  
Philippe Roumagnac ◽  
...  

Abstract Background Efficiently managing large, heterogeneous data in a structured yet flexible way is a challenge to research laboratories working with genomic data. Specifically regarding both shotgun- and metabarcoding-based metagenomics, while online reference databases and user-friendly tools exist for running various types of analyses (e.g., Qiime, Mothur, Megan, IMG/VR, Anvi'o, Qiita, MetaVir), scientists lack comprehensive software for easily building scalable, searchable, online data repositories on which they can rely during their ongoing research. Results metaXplor is a scalable, distributable, fully web-interfaced application for managing, sharing, and exploring metagenomic data. Being based on a flexible NoSQL data model, it has few constraints regarding dataset contents and thus proves useful for handling outputs from both shotgun and metabarcoding techniques. By supporting incremental data feeding and providing means to combine filters on all imported fields, it allows for exhaustive content browsing, as well as rapid narrowing to find specific records. The application also features various interactive data visualization tools, ways to query contents by BLASTing external sequences, and an integrated pipeline to enrich assignments with phylogenetic placements. The project home page provides the URL of a live instance allowing users to test the system on public data. Conclusion metaXplor allows efficient management and exploration of metagenomic data. Its availability as a set of Docker containers, making it easy to deploy on academic servers, on the cloud, or even on personal computers, will facilitate its adoption.


2021 ◽  
Vol 10 (3) ◽  
pp. 154
Author(s):  
Robert Jeansoulin

Providing long-term data about the evolution of railway networks in Europe may help us understand how European Union (EU) member states behave in the long term, and how they can comply with present EU recommendations. This paper proposes a methodology for collecting data about railway stations at the maximal extent of the French railway network, a century ago. The expected outcome is a geocoded dataset of French railway stations (gares), which: (a) links gares to each other, and (b) links gares with French communes, the basic administrative level for statistical information. Present stations are well documented in public data, but thousands of past stations are sparsely recorded, not geocoded, and often ignored, except in volunteer geographic information (VGI), either collaboratively through Wikipedia or individually. VGI is very valuable in keeping track of that heritage, and remote sensing, including aerial photography, is often the last chance to obtain precise locations. The approach is a series of steps: (1) meta-analysis of the public datasets; (2) three-step fusion (measure, decision, combination) between public datasets; (3) computer-assisted geocoding for gares where fusion fails; (4) integration of additional gares gathered from VGI; (5) automated quality control, indicating where quality is questionable. These five families of methods form a comprehensive computer-assisted reconstruction process (CARP), which constitutes the core of this paper. The outcome is a reliable dataset, in geojson format under an open license, encompassing (by January 2021) more than 10,700 items linked to about 7,500 of the 35,500 communes of France: that is 60% more than recorded before.
This work demonstrates that: (a) it is possible to reconstruct transport data from the past at a national scale; (b) the value of remote sensing and of VGI is considerable in completing public sources from a historical perspective; (c) data quality can be monitored throughout the process; and (d) the geocoded outcome is ready for a large variety of further studies with statistical data (demography, density, space coverage, CO2 simulation, environmental policies, etc.).
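The released dataset uses the GeoJSON format; the sketch below emits geocoded station records as a GeoJSON FeatureCollection. The property names (name, commune) and the sample station are illustrative assumptions, not the dataset's actual schema.

```python
# Sketch: serializing geocoded station records as GeoJSON, the open
# format in which the reconstructed dataset is published.
import json

def to_geojson(stations):
    """stations: list of (name, commune, lon, lat) tuples."""
    features = [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"name": name, "commune": commune},
    } for name, commune, lon, lat in stations]
    return {"type": "FeatureCollection", "features": features}

doc = to_geojson([("Gare d'Orsay", "Paris", 2.3257, 48.8599)])
print(json.dumps(doc)[:60])
```

GeoJSON stores coordinates as [longitude, latitude], which is worth keeping in mind when joining such a file with statistical data keyed by commune.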


2021 ◽  
pp. 016555152199863
Author(s):  
Ismael Vázquez ◽  
María Novo-Lourés ◽  
Reyes Pavón ◽  
Rosalía Laza ◽  
José Ramón Méndez ◽  
...  

Current research has evolved in such a way that scientists must not only adequately describe the algorithms they introduce and the results of their application, but also ensure the possibility of reproducing those results and comparing them with those obtained through other approaches. In this context, public data sets (sometimes shared through repositories) are one of the most important elements for the development of experimental protocols and test benches. This study analysed a significant number of CS/ML (Computer Science/Machine Learning) research data repositories and data sets and detected some limitations that hamper their utility. In particular, we identify and discuss the following demanding functionalities for repositories: (1) building customised data sets for specific research tasks, (2) facilitating the comparison of different techniques using dissimilar pre-processing methods, (3) ensuring the availability of software applications to reproduce the pre-processing steps without using the repository functionalities and (4) providing protection mechanisms for licensing issues and user rights. To demonstrate the introduced functionality, we created the STRep (Spam Text Repository) web application, which implements our recommendations adapted to the field of spam text repositories. In addition, we launched an instance of STRep at https://rdata.4spam.group to facilitate understanding of this study.
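Functionality (1), building customised data sets for a specific task, amounts to combining filter predicates over a corpus. The sketch below shows the idea on a toy record schema; the field names and filters are illustrative assumptions, not STRep's actual API.

```python
# Sketch: building a customised data set by combining filter predicates,
# in the spirit of repository functionality (1).
def build_dataset(records, *filters):
    """Keep only the records that satisfy every filter predicate."""
    return [r for r in records if all(f(r) for f in filters)]

corpus = [
    {"id": 1, "lang": "en", "label": "spam"},
    {"id": 2, "lang": "es", "label": "ham"},
    {"id": 3, "lang": "en", "label": "ham"},
]
subset = build_dataset(corpus,
                       lambda r: r["lang"] == "en",
                       lambda r: r["label"] == "ham")
print([r["id"] for r in subset])  # [3]
```

Recording which predicates produced a subset is what makes the resulting data set reproducible by other researchers, which is the study's central concern.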


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2254
Author(s):  
Francisco Javier González-Cañete ◽  
Eduardo Casilari

Over the last few years, the use of smartwatches in automatic Fall Detection Systems (FDSs) has aroused great interest in research on new wearable telemonitoring systems for the elderly. In contrast with other approaches to the problem of fall detection, smartwatch-based FDSs can benefit from the widespread acceptance, ergonomics, low cost, networking interfaces, and sensors that these devices provide. However, the scientific literature has shown that, due to the freedom of movement of the arms, the wrist is usually not the most appropriate position to unambiguously characterize the dynamics of the human body during falls, as many conventional activities of daily living that involve a vigorous motion of the hands may be easily misinterpreted as falls. As also stated by the literature, sensor fusion and multi-point measurements are required to define a robust and reliable method for a wearable FDS. Thus, to avoid false alarms, it may be necessary to combine the analysis of the signals captured by the smartwatch with those collected by some other low-power sensor placed at a point closer to the body’s center of gravity (e.g., on the waist). Under this architecture of Body Area Network (BAN), these external sensing nodes must be wirelessly connected to the smartwatch to transmit their measurements. Nonetheless, the deployment of this networking solution, in which the smartwatch is in charge of processing the sensed data and generating the alarm in case of detecting a fall, may severely impact the performance of the wearable. Unlike many other works (which often neglect the operational aspects of real fall detectors), this paper analyzes the actual feasibility of putting into effect a BAN intended for fall detection on present commercial smartwatches. In particular, the study is focused on evaluating the reduction in battery life that this role may cause in the watch that works as the core of the BAN. To this end, we thoroughly assess the energy drain in a prototype of an FDS consisting of a smartwatch and several external Bluetooth-enabled sensing units. In order to identify those scenarios in which the use of the smartwatch could be viable from a practical point of view, the testbed is studied with diverse commercial devices and under different configurations of those elements that may significantly hamper the battery lifetime.
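As a back-of-the-envelope companion to the empirical testbed, battery lifetime can be approximated as capacity divided by average current draw. The sketch below compares a baseline draw against one with active BLE links; the capacity and current figures are illustrative assumptions, not measurements from the paper.

```python
# Sketch: first-order battery lifetime estimate for a smartwatch
# acting as the core of a fall-detection BAN.
def lifetime_hours(capacity_mah, avg_current_ma):
    """Hours of operation at a constant average current draw."""
    return capacity_mah / avg_current_ma

baseline = lifetime_hours(capacity_mah=300, avg_current_ma=12)
with_ban = lifetime_hours(capacity_mah=300, avg_current_ma=20)  # BLE links active
print(round(baseline, 1), round(with_ban, 1))  # 25.0 15.0
```

Real lifetimes diverge from this linear model (duty cycling, radio bursts, non-ideal discharge curves), which is precisely why the paper measures energy drain on actual commercial devices.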

