individual dataset
Recently Published Documents

TOTAL DOCUMENTS: 11 (FIVE YEARS: 8)
H-INDEX: 1 (FIVE YEARS: 0)

Environments, 2021, Vol. 8 (8), pp. 71
Author(s): Johannes Ranke, Janina Wöltjen, Jana Schmidt, Emmanuelle Comets

When data on the degradation of a chemical substance have been collected in a number of environmental media (e.g., in different soils), two strategies can be followed for data evaluation. Currently, each individual dataset is evaluated separately, and representative degradation parameters are obtained by calculating averages of the kinetic parameters. However, such averages often take on unrealistic values if certain degradation parameters are ill-defined in some of the datasets. Moreover, the most appropriate degradation model is selected for each individual dataset, which is time consuming and then requires workarounds for averaging parameters from different models. Therefore, a simultaneous evaluation of all available data is desirable. If the environmental media are viewed as random samples from a population, an advanced strategy based on assumptions about the statistical distribution of the kinetic parameters across the population can be used. Here, we show the advantages of such simultaneous evaluations based on nonlinear mixed-effects models that incorporate such assumptions in the evaluation process. The advantages of this approach are demonstrated using synthetically generated data with known statistical properties and using publicly available experimental degradation data on two pesticidal active substances.
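To make the contrast concrete, here is a minimal Python sketch, under the assumption of a single first-order (SFO) degradation model and purely synthetic data, of the conventional per-dataset strategy described above: each soil is fitted separately and the rate constants are then averaged. The variable names and numerical values are illustrative only; the mixed-effects approach advocated in the abstract would instead estimate the distribution of the rate constant across soils directly.

```python
# Sketch only: synthetic SFO data for several soils, fitted dataset by dataset.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def sfo(t, c0, k):
    """Single first-order degradation: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

t = np.array([0, 3, 7, 14, 28, 56, 100], dtype=float)   # sampling days
true_k = np.exp(rng.normal(np.log(0.05), 0.5, size=6))  # soil-to-soil variability in ln(k)

k_hat = []
for k in true_k:
    obs = sfo(t, 100.0, k) + rng.normal(0, 3, size=t.size)    # add measurement noise
    (c0_i, k_i), _ = curve_fit(sfo, t, obs, p0=(100.0, 0.1))  # separate fit per soil
    k_hat.append(k_i)

# Conventional summary: average the per-dataset rate constants. Ill-defined fits in
# single datasets can distort this average; a mixed-effects model would instead
# estimate the population distribution of ln(k) from all datasets simultaneously.
print("arithmetic mean k:", np.mean(k_hat))
print("geometric mean k :", np.exp(np.mean(np.log(k_hat))))
```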


2021, Vol. 11 (1)
Author(s): Daisuke Endo, Ryota Kobayashi, Ramon Bartolo, Bruno B. Averbeck, Yasuko Sugase-Miyamoto, ...

Abstract. The recent increase in reliable, simultaneous, high-channel-count extracellular recordings is exciting for physiologists and theoreticians because it offers the possibility of reconstructing the underlying neuronal circuits. We recently presented a method of inferring this circuit connectivity from neuronal spike trains by applying the generalized linear model to cross-correlograms. Although the algorithm can do a good job of circuit reconstruction, the parameters need to be carefully tuned for each individual dataset. Here we present another method using a Convolutional Neural Network for Estimating synaptic Connectivity from spike trains. After adaptation to huge amounts of simulated data, this method robustly captures the specific feature of monosynaptic impact in a noisy cross-correlogram. There are no user-adjustable parameters. With this new method, we have constructed diagrams of neuronal circuits recorded in several cortical areas of monkeys.
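As a rough illustration of the idea, the sketch below defines a small 1-D convolutional network in PyTorch that maps a cross-correlogram to a connection score. The layer sizes, window length, and the CCGNet name are assumptions made for this example; they are not the published architecture or training procedure.

```python
# Illustrative architecture only; not the published network or its training setup.
import torch
import torch.nn as nn

class CCGNet(nn.Module):
    """Maps a cross-correlogram (histogram of spike-time lags) to a connection score."""
    def __init__(self, n_bins: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the lag bins
        )
        self.head = nn.Linear(32, 1)          # logit for "connected" vs. "not connected"

    def forward(self, ccg: torch.Tensor) -> torch.Tensor:
        z = self.features(ccg.unsqueeze(1)).squeeze(-1)   # (batch, 32)
        return self.head(z).squeeze(-1)                   # (batch,)

model = CCGNet()
fake_ccg = torch.rand(8, 100)                 # 8 synthetic cross-correlograms
scores = torch.sigmoid(model(fake_ccg))       # probability-like connection scores
print(scores.shape)                           # torch.Size([8])
```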


Author(s): Ashitha Ebrahim, Joby George

Liver cancer is perhaps the deadliest malignancy in the world. In current studies, the accuracy with which key genes of diseases can be selected is rather low, limiting the reliability of the predicted key genes. To identify the key genes of liver cancer with high accuracy, multiple microarray gene expression datasets related to liver cancer are integrated. Their common DEGs (Differentially Expressed Genes) are then identified, which yields more accurate results than those obtained from any individual dataset. The datasets are all human microarray gene expression data retrieved from the GEO (Gene Expression Omnibus) database, and the aim is to find genes that are differentially expressed between healthy and liver cancer conditions. Based on these genes, a protein-protein interaction network can be built and analysed to identify the genes that have a higher influence on the network. These gene patterns are trained using an LSTM neural network. From this trained network, the key nodes can be identified and considered the key genes of liver cancer. Moreover, the method can be applied to other types of datasets to select key genes of other complex diseases.
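As an illustration of one step in this pipeline, the following Python sketch selects differentially expressed genes between two synthetic groups of expression profiles using t-tests with Benjamini-Hochberg correction. The thresholds and data are made up; the actual study works on GEO microarray datasets and feeds the DEGs into a protein-protein interaction network and an LSTM, which are not reproduced here.

```python
# Simplified DEG selection on synthetic expression data; not the paper's full pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_healthy, n_tumour = 1000, 20, 20
expr_healthy = rng.normal(0.0, 1.0, size=(n_genes, n_healthy))
expr_tumour = rng.normal(0.0, 1.0, size=(n_genes, n_tumour))
expr_tumour[:50] += 2.0                      # 50 genes with a simulated expression shift

t_stat, p_val = stats.ttest_ind(expr_tumour, expr_healthy, axis=1)

# Benjamini-Hochberg adjusted p-values (monotone step-up procedure)
order = np.argsort(p_val)
ranked = p_val[order] * n_genes / np.arange(1, n_genes + 1)
q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
q_val = np.empty(n_genes)
q_val[order] = np.clip(q_sorted, 0.0, 1.0)

mean_shift = expr_tumour.mean(axis=1) - expr_healthy.mean(axis=1)
degs = np.where((q_val < 0.05) & (np.abs(mean_shift) > 1.0))[0]
print(f"{degs.size} candidate DEGs (50 genes were simulated with a true shift)")
```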


Author(s): K. Spasenovic, D. Carrion, F. Migliaccio

Abstract. During a disaster, the activity of the crowd is a very valuable source of information on the on-the-ground conditions shared by the affected citizens. The approach presented in the paper explores the relationship between the spatial distribution of crowdsourced image posts and that of damaged buildings, in order to understand the potential of modelling the spatial distribution of damaged buildings from geolocated images. The posts related to Hurricane Michael, which struck the United States in October 2018, showing the building damage in Panama City, have been collected by the NAPSG Foundation and GISCorps volunteers. The building damage assessment, based on the analysis of high-resolution post-event imagery, has been performed by FEMA. Starting from these two available independent point datasets, the spatial pattern of each individual dataset has been analysed, and the spatial relationship between them has then been explored. A set of spatial statistics has been computed with the R software, using distance-based methods that consider the mutual position of points to describe the patterns. The results show a spatial relationship between the crowdsourced photos and the different damage types. Furthermore, the potential of crowdsourced images for improving awareness of structural damage after a hurricane is discussed.
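The spirit of such a distance-based comparison can be sketched in Python as follows, using synthetic stand-ins for the photo and building locations; the study itself used dedicated spatial statistics in R, so this is only an illustration of the observed-versus-random nearest-neighbour logic.

```python
# Synthetic example of comparing two point datasets with nearest-neighbour distances.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
buildings = rng.uniform(0, 1000, size=(200, 2))                             # damaged buildings (m)
photos = buildings[rng.integers(0, 200, 80)] + rng.normal(0, 50, (80, 2))   # photos posted near them

tree = cKDTree(buildings)
d_obs, _ = tree.query(photos)                 # distance from each photo to the nearest building

# Monte Carlo reference: photos placed uniformly at random over the study area
sims = [tree.query(rng.uniform(0, 1000, size=(80, 2)))[0].mean() for _ in range(999)]
print("mean observed NN distance :", round(float(d_obs.mean()), 1), "m")
print("mean NN distance if random:", round(float(np.mean(sims)), 1), "m")
```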


2020, Vol. 0 (0)
Author(s): Stephanie Reyes-González, Camila de las Barreras, Gledys Reynaldo, Leyanis Rodríguez-Vera, Cornelis Vlaar, ...

Abstract. Objectives: The inter-individual variability of warfarin dosing has been linked to genetic polymorphisms. This study aimed to perform genotype-driven pharmacokinetic (PK) simulations to predict warfarin levels in Puerto Ricans. Methods: Analysis of each individual dataset was performed by one-compartmental modeling using WinNonlin® v6.4. The ke of warfarin for a given cytochrome P450 2C9 (CYP2C9) genotype ranged from 0.0189 to 0.0075 h−1. Ka and Vd parameters were taken from the literature. Data from 128 subjects were divided into two groups (i.e., wild-types and carriers), and statistical analyses of PK parameters were performed by unpaired t-tests. Results: In the carrier group (n=64), 53 subjects were single carriers and 11 double carriers (i.e., *2/*2, *2/*3, *2/*5, *3/*5, and *3/*8). The mean peak concentration (Cmax) was higher for wild-types (0.36±0.12 vs. 0.32±0.14 mg/L). Likewise, the average clearance (CL) was faster among non-carriers (0.22±0.03 vs. 0.17±0.05 L/h; p=0.0001), with a lower area under the curve (AUC) compared to carriers (20.43±6.97 vs. 24.78±11.26 h mg/L; p=0.025). Statistical analysis revealed a significant difference between groups with regard to AUC and CL, but not Cmax. This can be explained by the variation of ke across genotypes. Conclusions: The results provide useful information for warfarin dosing predictions that take into consideration individual PK and genotyping data.
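A minimal sketch of the underlying one-compartment oral-absorption model is given below. Only the ke range (0.0075-0.0189 h−1) comes from the abstract; the dose, bioavailability, Ka, and Vd values are illustrative placeholders rather than the literature values used in the study.

```python
# One-compartment oral-absorption model; dose, F, Ka and Vd below are assumed values.
import numpy as np

def one_compartment(t, dose_mg, f_bio, ka, ke, vd_l):
    """Plasma concentration (mg/L) after a single oral dose."""
    return (f_bio * dose_mg * ka) / (vd_l * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 168, 1000)                     # one week, in hours
dose, f_bio, ka, vd = 5.0, 1.0, 1.0, 10.0         # illustrative placeholders

for label, ke in [("faster ke (non-carrier end)", 0.0189),
                  ("slower ke (carrier end)    ", 0.0075)]:
    conc = one_compartment(t, dose, f_bio, ka, ke, vd)
    cl = ke * vd                                  # clearance, L/h
    auc = f_bio * dose / cl                       # AUC extrapolated to infinity, h*mg/L
    print(f"{label}: Cmax={conc.max():.3f} mg/L, CL={cl:.3f} L/h, AUC={auc:.1f} h*mg/L")
```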


2020
Author(s): Daisuke Endo, Ryota Kobayashi, Ramon Bartolo, Bruno B. Averbeck, Yasuko Sugase-Miyamoto, ...

The recent increase in reliable, simultaneous high channel count extracellular recordings is exciting for physiologists and theoreticians, because it offers the possibility of reconstructing the underlying neuronal circuits. We recently presented a method of inferring this circuit connectivity from neuronal spike trains by applying the generalized linear model to cross-correlograms, GLMCC. Although the GLMCC algorithm can do a good job of circuit reconstruction, the parameters need to be carefully tuned for each individual dataset. Here we present another algorithm using a convolutional neural network for estimating synaptic connectivity from spike trains, CoNNECT. After adaptation to very large amounts of simulated data, this algorithm robustly captures the specific feature of monosynaptic impact in a noisy cross-correlogram. There are no user-adjustable parameters. With this new algorithm, we have constructed diagrams of neuronal circuits recorded in several cortical areas of monkeys.


2019
Author(s): Mingze Bai, Chunyuan Qin, Kunxian Shu, Johannes Griss, Yasset Perez-Riverol, ...

Abstract. Motivation: Spectrum clustering has been used to enhance proteomics data analysis: some originally unidentified spectra can potentially be identified, and individual peptides can be evaluated to find potential mis-identifications, by using clusters of identified spectra. The Phoenix Enhancer provides an infrastructure to analyze tandem mass spectra and the corresponding peptides in the context of previously identified public data. Based on PRIDE Cluster data and a newly developed pipeline, four functionalities are provided: i) evaluate the original peptide identifications in an individual dataset, to find low-confidence peptide spectrum matches (PSMs) which could correspond to mis-identifications; ii) provide confidence scores for all originally identified PSMs, to help users evaluate their quality (complementary to a global false discovery rate); iii) identify potential new PSMs for originally unidentified spectra; and iv) provide a collection of browsing and visualization tools to analyze and export the results. In addition to the web-based service, the code is open source and easy to re-deploy on local computers using Docker containers. Availability: The Phoenix Enhancer service is available at http://enhancer.ncpsb.org. All source code is freely available on GitHub (https://github.com/phoenix-cluster/) and can be deployed in the Cloud and on HPC. Contact: [email protected]. Supplementary information: Supplementary data are available online.
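For orientation, the sketch below shows the kind of spectrum-versus-consensus comparison (a normalized dot product over binned peaks) that cluster-based re-identification generally relies on. It is not the Phoenix Enhancer API; the peptide labels, binning width, and peak lists are invented for the example.

```python
# Generic illustration of scoring a query spectrum against cluster consensus spectra.
import numpy as np

def binned(peaks, bin_width=1.0, max_mz=2000.0):
    """Convert (m/z, intensity) pairs into a normalized fixed-length intensity vector."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, inten in peaks:
        vec[int(mz / bin_width)] += inten
    n = np.linalg.norm(vec)
    return vec / n if n else vec

query = binned([(300.1, 40.0), (505.3, 100.0), (998.7, 25.0)])
consensus = {                                   # hypothetical cluster consensus spectra
    "PEPTIDER": binned([(300.2, 35.0), (505.2, 90.0), (998.5, 30.0)]),
    "SAMPLEK":  binned([(210.1, 80.0), (640.4, 60.0)]),
}

scores = {pep: float(query @ vec) for pep, vec in consensus.items()}  # normalized dot products
print(max(scores, key=scores.get), scores)      # best-matching consensus and all scores
```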


2019, Vol. 2, pp. 1-5
Author(s): Brian Alan Johnson, Rajarshi Dasgupta, Shizuka Hashimoto, Pankaj Kumar, Akio Onishi

Abstract. Parks and other public green spaces (hereafter “urban green spaces”) provide many benefits to urban dwellers, but some residents receive few benefits due to a lack of urban green spaces near their home or workplace. Understanding spatial variations in urban green space accessibility is thus important for urban planning. As a case study, here we mapped urban green space accessibility in Japan’s highly urbanized Tokyo and Kanagawa Prefectures using a Gravity Model (GM). As inputs for the GM, we used georeferenced datasets of urban green spaces obtained from various sources, including the national government (Ministry of Land, Infrastructure, Transport and Tourism; MLIT), a commercial map provider (ESRI Japan Corporation), and a crowdsourcing initiative (OpenStreetMap). These datasets all varied in terms of their spatial and thematic coverage, as could be seen in the urban green space accessibility maps generated using each individual dataset alone. To overcome the limitations of each individual dataset, we developed an integrated urban green space accessibility map using a maximum value operator. The proposed map-integration approach is simple and can be applied to mapping spatial accessibility to other goods and services using heterogeneous geographic datasets.
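A compact Python sketch of the two ideas, gravity-model accessibility from one dataset and integration of several dataset-specific surfaces with a maximum value operator, is given below. The coordinates, attractiveness values, and decay exponent are synthetic assumptions, not the study's actual inputs.

```python
# Synthetic demonstration of gravity-model accessibility and max-operator integration.
import numpy as np

rng = np.random.default_rng(7)

def gravity_accessibility(points, parks, sizes, beta=2.0, eps=1e-6):
    """A_i = sum_j S_j / d_ij**beta for every evaluation point i."""
    d = np.linalg.norm(points[:, None, :] - parks[None, :, :], axis=2)
    return (sizes / (d + eps) ** beta).sum(axis=1)

# Regular grid of evaluation points over a toy 10 km x 10 km study area
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50)), -1).reshape(-1, 2)

# Three heterogeneous green-space datasets (e.g., government, commercial, crowdsourced)
datasets = [(rng.uniform(0, 10, (n, 2)), rng.uniform(0.5, 3.0, n)) for n in (30, 20, 60)]
surfaces = [gravity_accessibility(grid, parks, sizes) for parks, sizes in datasets]

# Maximum value operator: keep the highest accessibility score at each location
integrated = np.maximum.reduce(surfaces)
print(integrated.shape)                           # (2500,)
```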


2018
Author(s): Ling Cai, ShinYi Lin, Yunyun Zhou, Lin Yang, Bo Ci, ...

Abstract. We constructed a lung cancer-specific database housing expression data and clinical data from over 6,700 patients in 56 studies. Expression data from 23 “whole-genome” based platforms were carefully processed and quality controlled, whereas clinical data were standardized and rigorously curated. Empowered by this lung cancer database, we created an open access web resource – the Lung Cancer Explorer (LCE), which enables researchers and clinicians to explore these data and perform analyses. Users can perform meta-analyses on LCE to gain a quick overview of the results on tumor vs. normal differential gene expression and expression-survival association. Individual dataset-based survival analysis, comparative analysis, and correlation analysis are also provided, with flexible options to allow for customized analyses by the user.
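As a hint of what such a meta-analysis involves, the sketch below pools per-dataset tumor-versus-normal effect estimates for a single gene by inverse-variance weighting; the numbers are invented and the calculation is a generic fixed-effect combination, not LCE's exact implementation.

```python
# Generic fixed-effect meta-analysis across datasets for one gene; numbers are made up.
import numpy as np

effects = np.array([0.8, 1.1, 0.4, 0.9])   # per-study tumor-vs-normal effect sizes
ses = np.array([0.30, 0.25, 0.40, 0.20])   # corresponding standard errors

w = 1.0 / ses**2                            # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI half-width)")
```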


2018, Vol. 16 (1), pp. 3-17
Author(s): Laura Hoppe, Manne Gerell

It is well established that previous crime events are valuable indicators for the prediction of future crime. Near-repeat burglaries are incidents that occur in close proximity in space and time to an initial burglary. The current study analyses near-repeat victimization patterns in Malmö, Sweden’s third-largest city. The data, provided by the local police, cover a six-year time frame from 2009 to 2014. The complete dataset, as well as each year’s individual dataset, was analysed using Ratcliffe’s Near Repeat Calculator version 1.3. Results reveal significant near-repeat victimization patterns. For the full dataset, an observed/expected ratio of 2.83 was identified for the first week after an initial incident and an area of 100 metres surrounding the original burglary. Separate analyses of each individual year reveal both similarities and differences between years. Some years manifest near-repeat patterns at longer spatial and temporal distances, indicating a need for further studies on the variability of near repeats. Preventive strategies that include both private and public actors need to be intensified and focused on the first two weeks after a burglary.
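The observed/expected logic behind near-repeat analysis can be sketched as follows (the study itself used Ratcliffe's Near Repeat Calculator): count burglary pairs that are close in both space and time, then compare against counts obtained after shuffling the event dates. The coordinates and dates are synthetic; only the 100-metre and 7-day bands echo the abstract.

```python
# Knox-style observed/expected calculation on synthetic burglary data.
import numpy as np

rng = np.random.default_rng(3)
n = 400
xy = rng.uniform(0, 5000, size=(n, 2))               # burglary locations (m)
days = rng.integers(0, 365, size=n)                  # burglary dates (day of year)

def close_pairs(xy, days, max_d=100.0, max_t=7):
    """Count burglary pairs within max_d metres and max_t days of each other."""
    i, j = np.triu_indices(len(xy), k=1)
    near_space = np.linalg.norm(xy[i] - xy[j], axis=1) <= max_d
    near_time = np.abs(days[i] - days[j]) <= max_t
    return int(np.sum(near_space & near_time))

observed = close_pairs(xy, days)
# Expected count under a "no near-repeat" hypothesis: shuffle the event dates many times
expected = np.mean([close_pairs(xy, rng.permutation(days)) for _ in range(99)])
print("observed/expected ratio:", observed / expected)   # real data would show ratios > 1
```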

